
NeuroKit2 Release 0.0.39

Official Documentation

Jun 28, 2020

CONTENTS

1 Introduction
   1.1 Quick Example
   1.2 Installation
   1.3 Contributing
   1.4 Documentation
   1.5 Citation
   1.6 Physiological Data Preprocessing
   1.7 Physiological Data Analysis
   1.8 Miscellaneous
   1.9 Popularity
   1.10 Notes

2 Authors
   2.1 Core team
   2.2 Contributors

3 Installation
   3.1 1. Python
   3.2 2. NeuroKit

4 Get Started
   4.1 Get familiar with Python in 10 minutes
   4.2 Where to start

5 Examples
   5.1 Try the examples in your browser
   5.2 1. Analysis Paradigm
   5.3 2. Biosignal Processing
   5.4 3. Heart rate and heart cycles
   5.5 4. Electrodermal activity
   5.6 5. Respiration rate and respiration cycles
   5.7 6. Muscle activity
   5.8 Simulate Artificial Physiological Signals
   5.9 Customize your Processing Pipeline
   5.10 Event-related Analysis
   5.11 Interval-related Analysis
   5.12 Analyze Electrodermal Activity (EDA)
   5.13 Analyze Respiratory Rate Variability (RRV)
   5.14 ECG-Derived Respiration (EDR) Analysis
   5.15 Extract and Visualize Individual Heartbeats
   5.16 How to create epochs
   5.17 Complexity Analysis of Physiological Signals
   5.18 Analyze Electrooculography EOG data (eye blinks, saccades, etc.)
   5.19 Fit a function to a signal

6 Resources
   6.1 Recording good quality signals
   6.2 What software for physiological signal processing
   6.3 Additional Resources

7 Functions
   7.1 ECG
   7.2 PPG
   7.3 HRV
   7.4 RSP
   7.5 EDA
   7.6 EMG
   7.7 EEG
   7.8 Signal Processing
   7.9 Events
   7.10 Data
   7.11 Epochs
   7.12 Statistics
   7.13 Complexity
   7.14 Miscellaneous

8 Benchmarks
   8.1 Benchmarking of ECG Preprocessing Methods
   8.2 References

9 Datasets
   9.1 ECG (1000 hz)
   9.2 ECG - pandas (3000 hz)
   9.3 Event-related (4 events)
   9.4 Resting state (5 min)
   9.5 Resting state (8 min)

10 Contributing
   10.1 Understanding NeuroKit
   10.2 Contributing guide
   10.3 Ideas for first contributions

Python Module Index

Index


Welcome to NeuroKit’s documentation. Here you can find information and learn about Python, NeuroKit, Physiological Signals and more. You can navigate to the different sections using the left panel. We would recommend checking out the guides and examples, where you can find tutorials and hands-on walkthroughs.


CHAPTER ONE

INTRODUCTION

The Python Toolbox for Neurophysiological Signal Processing

This package is the continuation of NeuroKit 1. It’s a user-friendly package providing easy access to advanced biosignal processing routines. Researchers and clinicians without extensive knowledge of programming or biomedical signal processing can analyze physiological data with only two lines of code.

1.1 Quick Example

import neurokit2 as nk

# Download example data
data = nk.data("bio_eventrelated_100hz")

# Preprocess the data (filter, find peaks, etc.)
processed_data, info = nk.bio_process(ecg=data["ECG"], rsp=data["RSP"], eda=data["EDA"], sampling_rate=100)

# Compute relevant features
results = nk.bio_analyze(processed_data, sampling_rate=100)

And boom, your analysis is done!


1.2 Installation

To install NeuroKit2, run this command in your terminal:

pip install neurokit2

If you’re not sure how/what to do, be sure to read our installation guide.

1.3 Contributing

NeuroKit2 is a collaborative project with a community of contributors of all levels of development expertise. Thus, if you have some ideas for improvement, new features, or just want to learn Python and do something useful at the same time, do not hesitate to check out the following guides:

• Understanding NeuroKit
• Contributing guide
• Ideas for first contributions

1.4 Documentation

Click on the links below and check out our tutorials:

1.4.1 General

• Get familiar with Python in 10 minutes
• Recording good quality signals
• What software for physiological signal processing
• Install Python and NeuroKit
• Included datasets
• Additional Resources


1.4.2 Examples

• Simulate Artificial Physiological Signals
• Customize your Processing Pipeline
• Event-related Analysis
• Interval-related Analysis
• Analyze Electrodermal Activity (EDA)
• Analyze Respiratory Rate Variability (RRV)
• Extract and Visualize Individual Heartbeats
• Locate P, Q, S and T waves in ECG
• Complexity Analysis of Physiological Signals
• Analyze Electrooculography EOG data
• Fit a function to a signal

You can try out these examples directly in your browser. Don’t know which tutorial is suited for your case? Follow this flowchart:


1.5 Citation

nk.cite()

You can cite NeuroKit2 as follows:

- Makowski, D., Pham, T., Lau, Z. J., Brammer, J. C., Lespinasse, F., Pham, H., Schölzel, C., & S H Chen, A. (2020). NeuroKit2: A Python Toolbox for Neurophysiological Signal Processing. Retrieved March 28, 2020, from https://github.com/neuropsychology/NeuroKit

Full bibtex reference:

@misc{neurokit2,
  doi = {10.5281/ZENODO.3597887},
  url = {https://github.com/neuropsychology/NeuroKit},
  author = {Makowski, Dominique and Pham, Tam and Lau, Zen J. and Brammer, Jan C. and Lespinasse, Fran\c{c}ois and Pham, Hung and Schölzel, Christopher and S H Chen, Annabel},
  title = {NeuroKit2: A Python Toolbox for Neurophysiological Signal Processing},
  publisher = {Zenodo},
  year = {2020},
}

1.6 Physiological Data Preprocessing

1.6.1 Simulate physiological signals

import numpy as np
import pandas as pd
import neurokit2 as nk

# Generate synthetic signals
ecg = nk.ecg_simulate(duration=10, heart_rate=70)
ppg = nk.ppg_simulate(duration=10, heart_rate=70)
rsp = nk.rsp_simulate(duration=10, respiratory_rate=15)
eda = nk.eda_simulate(duration=10, scr_number=3)
emg = nk.emg_simulate(duration=10, burst_number=2)

# Visualise biosignals
data = pd.DataFrame({"ECG": ecg, "PPG": ppg, "RSP": rsp, "EDA": eda, "EMG": emg})
nk.signal_plot(data, subplots=True)


1.6.2 Electrodermal Activity (EDA/GSR)

# Generate 10 seconds of EDA signal (recorded at 250 samples / second) with 2 SCR peaks
eda = nk.eda_simulate(duration=10, sampling_rate=250, scr_number=2, drift=0.01)

# Process it
signals, info = nk.eda_process(eda, sampling_rate=250)

# Visualise the processing
nk.eda_plot(signals, sampling_rate=250)


1.6.3 Cardiac activity (ECG)

# Generate 15 seconds of ECG signal (recorded at 250 samples / second)
ecg = nk.ecg_simulate(duration=15, sampling_rate=250, heart_rate=70)

# Process it
signals, info = nk.ecg_process(ecg, sampling_rate=250)

# Visualise the processing
nk.ecg_plot(signals, sampling_rate=250)


1.6.4 Respiration (RSP)

# Generate one minute of respiratory (RSP) signal (recorded at 250 samples / second)
rsp = nk.rsp_simulate(duration=60, sampling_rate=250, respiratory_rate=15)

# Process it
signals, info = nk.rsp_process(rsp, sampling_rate=250)

# Visualise the processing
nk.rsp_plot(signals, sampling_rate=250)


1.6.5 Electromyography (EMG)

# Generate 10 seconds of EMG signal (recorded at 250 samples / second)
emg = nk.emg_simulate(duration=10, sampling_rate=250, burst_number=3)

# Process it
signals, info = nk.emg_process(emg, sampling_rate=250)

# Visualise the processing
nk.emg_plot(signals, sampling_rate=250)


1.6.6 Photoplethysmography (PPG/BVP)

# Generate 15 seconds of PPG signal (recorded at 250 samples / second)
ppg = nk.ppg_simulate(duration=15, sampling_rate=250, heart_rate=70)

# Process it
signals, info = nk.ppg_process(ppg, sampling_rate=250)

# Visualize the processing
nk.ppg_plot(signals, sampling_rate=250)


1.6.7 Electrooculography (EOG)

# Import EOG data
eog_signal = nk.data("eog_100hz")

# Process it
signals, info = nk.eog_process(eog_signal, sampling_rate=100)

# Plot
plot = nk.eog_plot(signals, sampling_rate=100)


1.6.8 Electrogastrography (EGG)

Consider helping us develop it!

1.7 Physiological Data Analysis

The analysis of physiological data usually comes in two types: event-related or interval-related.


1.7.1 Event-related

This type of analysis refers to physiological changes immediately occurring in response to an event, for instance, physiological changes following the presentation of a stimulus (e.g., an emotional stimulus), indicated by the dotted lines in the figure above. In this situation, the analysis is epoch-based. An epoch is a short chunk of the physiological signal (usually < 10 seconds) that is time-locked to a specific stimulus, and the physiological signals of interest are segmented accordingly. This is represented by the orange boxes in the figure above. In this case, using bio_analyze() will compute features like rate changes, peak characteristics and phase characteristics.

• Event-related example
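Conceptually, building an epoch is just slicing a fixed window out of the signal around each event onset. NeuroKit provides functions for this (see the "How to create epochs" example); the sketch below only illustrates the idea with plain numpy, and all the values in it (sampling rate, onsets, window) are made up for illustration:

```python
import numpy as np

sampling_rate = 100                      # Hz (illustrative)
signal = np.arange(3000, dtype=float)    # stand-in for 30 s of physiological signal
event_onsets = [500, 1500, 2500]         # hypothetical event positions, in samples

# Cut a window from 1 s before to 5 s after each event onset
start, end = -1 * sampling_rate, 5 * sampling_rate
epochs = [signal[onset + start: onset + end] for onset in event_onsets]

print(len(epochs), len(epochs[0]))  # → 3 600
```

Each epoch is then analyzed relative to its own time zero (the event onset), which is what makes the features comparable across trials.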

1.7.2 Interval-related

This type of analysis refers to physiological characteristics and features that occur over longer periods of time (from a few seconds to days of activity). Typical use cases are either periods of resting state, in which the activity is recorded for several minutes while the participant is at rest, or different conditions in which there is no specific time-locked event (e.g., watching movies, listening to music, engaging in physical activity, etc.). For instance, this type of analysis is used when people want to compare the physiological activity under different intensities of physical exercise, different types of movies, or different intensities of stress.

To compare event-related and interval-related analysis, we can refer to the example figure above. For example, a participant might be watching a 20s-long short film where particular stimuli of interest appear at certain time points (marked by the dotted lines). While event-related analysis pertains to the segments of signal within the orange boxes (to understand the physiological changes pertaining to the appearance of stimuli), interval-related analysis can be applied to the entire 20s duration to investigate how physiology fluctuates in general. In this case, using bio_analyze() will compute features such as rate characteristics (in particular, variability metrics) and peak characteristics.


• Interval-related example

1.8 Miscellaneous

1.8.1 Heart Rate Variability (HRV)

• Compute HRV indices
  – Time domain: RMSSD, MeanNN, SDNN, SDSD, CVNN, etc.
  – Frequency domain: Spectral power density in various frequency bands (Ultra low/ULF, Very low/VLF, Low/LF, High/HF, Very high/VHF), ratio of LF to HF power, normalized LF (LFn) and HF (HFn), log-transformed HF (LnHF).
  – Nonlinear domain: Spread of RR intervals (SD1, SD2, ratio of SD2 to SD1), Cardiac Sympathetic Index (CSI), Cardiac Vagal Index (CVI), Modified CSI, Sample Entropy (SampEn).

# Download data
data = nk.data("bio_resting_8min_100hz")

# Find peaks
peaks, info = nk.ecg_peaks(data["ECG"], sampling_rate=100)

# Compute HRV indices
nk.hrv(peaks, sampling_rate=100, show=True)
>>>    HRV_RMSSD  HRV_MeanNN  HRV_SDNN ...   HRV_CVI  HRV_CSI_Modified  HRV_SampEn
>>> 0  69.697983  696.395349  62.135891 ...  4.829101        592.095372    1.259931
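For intuition, the basic time-domain indices reduce to simple statistics of the successive RR intervals. The sketch below uses plain numpy on a handful of made-up RR values (it is not NeuroKit's implementation, just the textbook definitions):

```python
import numpy as np

# Hypothetical RR intervals in milliseconds (beat-to-beat durations)
rr = np.array([800.0, 810.0, 790.0, 805.0, 795.0])

mean_nn = np.mean(rr)                       # MeanNN: mean of the RR intervals
sdnn = np.std(rr, ddof=1)                   # SDNN: standard deviation of the RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD: root mean square of successive differences

print(mean_nn, sdnn, rmssd)
```

Frequency-domain and nonlinear indices require more machinery (spectral estimation, Poincaré geometry, entropy estimators), which is exactly what nk.hrv() wraps up for you.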


1.8.2 ECG Delineation

• Delineate the QRS complex of an electrocardiogram (ECG) signal, including P-peaks and T-peaks, as well as their onsets and offsets.

# Download data
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']

# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)

# Delineate
signal, waves = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')

1.8.3 Signal Processing

• Signal processing functionalities
  – Filtering: using different methods.
  – Detrending: remove the baseline drift or trend.
  – Distorting: add noise and artifacts.

# Generate original signal
original = nk.signal_simulate(duration=6, frequency=1)

# Distort the signal (add noise, linear trend, artifacts, etc.)
distorted = nk.signal_distort(original,
                              noise_amplitude=0.1,
                              noise_frequency=[5, 10, 20],
                              powerline_amplitude=0.05,
                              artifacts_amplitude=0.3,
                              artifacts_number=3,
                              linear_drift=0.5)

# Clean (filter and detrend)
cleaned = nk.signal_detrend(distorted)
cleaned = nk.signal_filter(cleaned, lowcut=0.5, highcut=1.5)

# Compare the 3 signals
plot = nk.signal_plot([original, distorted, cleaned])

1.8.4 Complexity (Entropy, Dimensions, ...)

• Optimize complexity parameters (delay tau, dimension m, tolerance r)

# Generate signal
signal = nk.signal_simulate(frequency=[1, 3], noise=0.01, sampling_rate=100)

# Find optimal time delay, embedding dimension and r
parameters = nk.complexity_optimize(signal, show=True)


• Compute complexity features
  – Entropy: Sample Entropy (SampEn), Approximate Entropy (ApEn), Fuzzy Entropy (FuzzEn), Multiscale Entropy (MSE), Shannon Entropy (ShEn)
  – Fractal dimensions: Correlation Dimension D2, ...
  – Detrended Fluctuation Analysis

nk.entropy_sample(signal)
nk.entropy_approximate(signal)
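As a point of reference, Shannon entropy, the simplest of the listed indices, is just the entropy of the distribution of the signal's values. A minimal numpy sketch (not NeuroKit's implementation; the binning choice is an assumption for illustration):

```python
import numpy as np

def shannon_entropy(signal, bins=10):
    """Shannon entropy (in bits) of a signal's value distribution."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()   # empirical probabilities per bin
    p = p[p > 0]                # drop empty bins (0 * log 0 is taken as 0)
    return -np.sum(p * np.log2(p))

# A signal spending equal time in 2 bins has entropy of exactly 1 bit
print(shannon_entropy(np.array([0, 0, 1, 1]), bins=2))  # → 1.0
```

The other indices (SampEn, ApEn, MSE, ...) additionally look at the temporal structure of the signal, not just its value distribution, which is why they need embedding parameters like the delay and dimension optimized above.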

1.8.5 Signal Decomposition

# Create complex signal
signal = nk.signal_simulate(duration=10, frequency=1)  # High freq
signal += 3 * nk.signal_simulate(duration=10, frequency=3)  # Higher freq
signal += 3 * np.linspace(0, 2, len(signal))  # Add baseline and linear trend
signal += 2 * nk.signal_simulate(duration=10, frequency=0.1, noise=0)  # Non-linear trend
signal += np.random.normal(0, 0.02, len(signal))  # Add noise

# Decompose signal using Empirical Mode Decomposition (EMD)
components = nk.signal_decompose(signal, method='emd')
nk.signal_plot(components)  # Visualize components

# Recompose merging correlated components
recomposed = nk.signal_recompose(components, threshold=0.99)
nk.signal_plot(recomposed)  # Visualize components


1.8.6 Signal Power Spectrum Density (PSD)

# Generate signal with frequencies of 5, 20 and 30
signal = nk.signal_simulate(frequency=5) + 0.5 * nk.signal_simulate(frequency=20) + nk.signal_simulate(frequency=30)

# Find Power Spectrum Density with different methods
# Multitaper
multitaper = nk.signal_psd(signal, method="multitapers", show=False, max_frequency=100)
# Welch
welch = nk.signal_psd(signal, method="welch", min_frequency=1, show=False, max_frequency=100)
# Burg
burg = nk.signal_psd(signal, method="burg", min_frequency=1, show=False, ar_order=15, max_frequency=100)

# Visualize the different methods together
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(welch["Frequency"], welch["Power"], label="Welch", color="#CFD8DC", linewidth=2)
ax.plot(multitaper["Frequency"], multitaper["Power"], label="Multitaper", color="#00695C", linewidth=2)
ax.plot(burg["Frequency"], burg["Power"], label="Burg", color="#0097AC", linewidth=2)

ax.set_title("Power Spectrum Density (PSD)")
ax.set_yscale('log')
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("PSD (ms^2/Hz)")
ax.legend(loc="upper right")

# Plot the 3 frequencies of the generated signal
ax.axvline(5, color="#689F38", linewidth=3, ymax=0.95, linestyle="--")
ax.axvline(20, color="#689F38", linewidth=3, ymax=0.95, linestyle="--")
ax.axvline(30, color="#689F38", linewidth=3, ymax=0.95, linestyle="--")


1.8.7 Statistics

• Highest Density Interval (HDI)

x = np.random.normal(loc=0, scale=1, size=100000)
ci_min, ci_max = nk.hdi(x, ci=0.95, show=True)
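For a symmetric distribution, a 95% HDI is close to the simpler equal-tailed interval, which can be sketched with plain numpy percentiles. Note this is only an illustration of the idea, not NeuroKit's HDI algorithm (the HDI is the narrowest interval containing the mass, which differs for skewed distributions):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=0, scale=1, size=100_000)

# Equal-tailed 95% interval: cut 2.5% of the mass off each side
ci_min, ci_max = np.percentile(x, [2.5, 97.5])
print(ci_min, ci_max)  # roughly -1.96 and 1.96 for a standard normal
```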


1.9 Popularity


1.10 Notes

The authors do not provide any warranty. If this software causes your keyboard to blow up, your brain to liquify, your toilet to clog or a zombie plague to break loose, the authors CANNOT IN ANY WAY be held responsible.

CHAPTER TWO

AUTHORS

Hint: Want to be a part of the project? Read how to contribute and join us!

2.1 Core team

• Dominique Makowski (Nanyang Technological University, Singapore)
• Tam Pham (Nanyang Technological University, Singapore)
• Zen Juen Lau (Nanyang Technological University, Singapore)
• Jan C. Brammer (Radboud University, Netherlands)
• François Lespinasse (Université de Montréal, Canada)

2.2 Contributors

• Hung Pham (Eureka Robotics, Singapore)
• Christopher Schölzel (THM University of Applied Sciences, Germany)
• Duy Le (Hubble, Singapore)
• Stavros Avramidis
• Tiago Rodrigues (IST, Lisbon)
• Mitchell Bishop (NINDS, USA)
• Robert Richer (FAU Erlangen-Nürnberg, Germany)
• Russell Anderson (La Trobe Institute for Molecular Science, Australia)

Thanks also to Gansheng Tan, Chuan-Peng Hu, @ucohen, Anthony Gatti, Julien Lamour, @renatosc, Nicolas Beaudoin-Gagnon and @rubinovitz for their contribution in NeuroKit 1. More details here.


CHAPTER THREE

INSTALLATION

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see these guides).

3.1 1. Python

3.1.1 Windows

Winpython

The advantage of Winpython is its portability (i.e., it works out of a folder) and default setup (convenient for science).

1. Download a non-zero version of Winpython
2. Install it somewhere (the desktop is a good place). It creates a folder called WPyXX-xxxx
3. In the WPyXX-xxxx folder, open WinPython Command Prompt.exe
4. Run pip install https://github.com/neuropsychology/NeuroKit/zipball/master
5. Start Spyder.exe

Miniconda or Anaconda

The difference between the two is straightforward: Miniconda is recommended if you don’t have much storage space and you know what you want to install. Similar to Winpython, Anaconda comes with a base environment, meaning you have basic packages pre-installed.

1. Download and install Miniconda or Anaconda (make sure the Anaconda3 directory is similar to this: C:\Users\\anaconda3\)
2. Open the Anaconda Prompt (search for it on your computer)
3. Run conda help to see your options

Note: There should be a name in parentheses before your user’s directory, e.g. (base) C:\Users\ . That is the name of your computing environment. By default, you have a base environment. We don’t want that, so create an environment.

4. Run conda env create ; activate it every time you open up conda by running conda activate


5. Is pip (the package installer for Python) installed in this env? Run pip list in the Anaconda Prompt; it will show you all the packages installed in that conda env

3.1.2 Mac OS

1. Install Anaconda
2. Open the terminal
3. Run source activate root
4. Run pip install neurokit2
5. Start Spyder

3.2 2. NeuroKit

If you already have Python, you can install NeuroKit by running this command in your terminal:

pip install neurokit2

This is the preferred method to install NeuroKit, as it will always install the most stable release. It is also possible to install it directly from GitHub:

pip install https://github.com/neuropsychology/neurokit/zipball/master

Hint: Enjoy living on the edge? You can always install the latest dev branch to access some work-in-progress features:

pip install https://github.com/neuropsychology/neurokit/zipball/dev

If you don’t have pip installed, this Python installation guide can guide you through the process.

CHAPTER FOUR

GET STARTED

Contents:

4.1 Get familiar with Python in 10 minutes

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see these guides).

You have no experience in computer science? You are afraid of code? You feel betrayed because you didn’t expect to do programming in psychology studies? Relax! We’ve got you covered. This tutorial will provide you with all you need to know to dive into the wonderful world of scientific programming. The goal here is not to become a programmer or a software designer, but rather to be able to use the power of programming to get scientific results.

4.1.1 Setup

The first thing you will need is to install Python on your computer (we have a tutorial for that). In fact, this includes two things: installing Python (the language), and an environment to be able to use it. For this tutorial, we will assume you have something that looks like Spyder (called an IDE). But you can use Jupyter notebooks, or anything else; it doesn’t really matter.

There is one important concept to understand here: the difference between the CONSOLE and the EDITOR. The editor is like a cooking table where you prepare your ingredients to make a dish, whereas the console is like the oven: you only open it to put the dish in and get the result.

Most of the code that you will write, you will write in the editor. It’s basically a text editor (such as Notepad), except that it automatically highlights the code. Importantly, you can directly execute a line of code (which is equivalent to copying it and pasting it into the console). For instance, try writing 1+1 somewhere in the file in the editor pane. Now, if you select the piece of code you just wrote and press F9 (or CTRL + ENTER), it will execute it.


As a result, you should see in the console the order that you gave and, below, its output (which is 2). Now that the distinction between where we write the code and where the output appears is clear, take some time to explore the settings and turn the editor background to BLACK. Why? Because it’s more comfortable for the eyes, but most importantly, because it’s cool.

Congrats, you’ve become a programmer, a wizard of the modern times. You can now save the file (CTRL + S), which will be saved with a .py extension (i.e., a Python script). Try closing everything and reopening this file with the editor.


4.1.2 Variables

The most important concept of programming is variables, which is a fancy name for something that you already know. Do you remember, from your classes, the famous x, this placeholder for any value? Well, x was a variable, i.e., the name referring to some other thing.

Hint: Despite what I just said, a variable in programming is not equivalent to a variable in statistics, in which it refers to some specific data (for instance, age is a variable and contains multiple observations). In programming, a variable is simply the name that we give to some entity, which could be anything.

We can assign a value to a variable using the = sign, for instance:

x = 2
y = 3

Once we execute these two lines, Python will know that x refers to 2, and y to 3. We can now write:

print(x + y)

Which will print in the console the correct result.

We can also store the output in a third variable:

x = 2
y = 3
anothervariable = x * y
print(anothervariable)


4.1.3 Variables and data types

The next important thing to have in mind is that variables have types. Basic types include integers (numbers without decimals), floats (numbers with decimals), strings (character text) and booleans (True and False). Depending on their type, the variables will not behave in the same way. For example, try:

print(1 + 2)
print("1" + "2")

What happened here? Well, quotations ("I am quoted") are used to represent strings (i.e., text). So in the second line, the numbers that we added were not numbers, but text. And when you add strings together in Python, it concatenates them. One can change the type of a variable with the following:

int(1.0)   # transform the input to an integer
float(1)   # transform the input to a float
str(1)     # transform the input into text

Also, here I used the hashtag symbol to make comments, i.e., writing stuff that won’t be executed by Python. This is super useful to annotate each line of your code to remember what you do - and why you do it.

Types are often the source of many errors, as they are usually incompatible with each other. For instance, you cannot add a number (int or float) to a character string: try running 3 + "a", and it will throw a TypeError.
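You can see this for yourself without crashing your script by catching the error with a try/except block (a construct we haven't covered yet; this is just a peek ahead):

```python
# Adding an int and a str is a type error; Python tells you so explicitly
try:
    result = 3 + "a"
except TypeError as error:
    result = "TypeError: " + str(error)

print(result)
```

Running the bare 3 + "a" in the console produces the same message, just as a traceback instead of a string.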

4.1.4 Lists and dictionaries

Two other important types are lists and dictionaries. You can think of them as containers, as they contain multiple variables. The main difference between them is that in a list, you access the individual elements it contains by their order (for instance, “give me the third one”), whereas in a dictionary, you access an element by its name (also known as key), for example “give me the element named A”. A list is created using square brackets, and a dictionary using curly brackets. Importantly, in a dictionary, you must specify a name for each element. Here’s what it looks like:

mylist = [1, 2, 3]
mydict = {"A": 1, "B": 2, "C": 3}

Keep in mind that there are more types of containers, such as arrays and dataframes, that we will talk about later.

4.1.5 Basic indexing

There’s no point in storing elements in containers if we cannot access them later on. As mentioned earlier, we can access an element of a dictionary by its key within square brackets (note that here the square brackets don’t denote a list; they just mean within the previous container).

mydict = {"A": 1, "B": 2, "C": 3}
x = mydict["B"]
print(x)

Exercise time! If you have followed this tutorial so far, you should be able to guess what the following code will output:


mydict = {"1": 0, "2": 42, "x": 7}
x = str(1 + 1)
y = mydict[x]
print(y)

Answer: If you guessed 42, you’re right, congrats! If you guessed 7, you have likely confused the variable named x (which contains 1+1 converted to a character), with the character "x". And if you guessed 0... what is wrong with you?

4.1.6 Indexing starts from 0

As mentioned earlier, one can access elements of a list by their order. However, and this is very important to remember (the source of many beginner errors), in Python, the order starts from 0. That means that the first element is the 0th. So if we want the 2nd element of the list, we have to ask for index 1:

mylist = [1, 2, 3]
x = mylist[1]
print(x)

4.1.7 Control flow (if and else)

One important notion in programming is control flow. You want the code to do something different depending on a condition. For instance, if x is lower than 3, print “lower than 3”. In Python, this is done as follows:

x = 2
if x < 3:
    print("lower than 3")

One very important thing to notice is that the if statement corresponds to a “chunk” of code, as signified by the colon :. The chunk is usually written below, and has to be indented (you can indent a line or a chunk of code by pressing the TAB key). What is indentation?

    this is indentation

This indentation must be consistent: usually, one level of indentation corresponds to 4 spaces. Make sure you respect that throughout your script, as this is very important in Python. If you break the rule, it will throw an error. Try running the following:

if 2 < 3:
    print("lower than 3")

Finally, if statements can be followed by else statements, which take care of what happens if the condition is not fulfilled:

x = 5
if x < 3:
    print("lower")
else:
    print("higher")


Again, note the indentation and how the else statement creates a new indented chunk.

4.1.8 For loops

One of the most used concepts is loops, and in particular for loops. Loops are chunks of code that will be run several times, until a condition is met. A for loop creates a variable that will successively take all the values of a list (or other iterable types). Let’s look at the code below:

for var in [1, 2, 3]:
    print(var)

Here, the for loop creates a variable (that we named var), that will successively take all the values of the provided list.

4.1.9 Functions

Now that you know what a variable is, as well as the purpose of little things like if, else, for, etc., the last most common thing that you will find in code are function calls. In fact, we have already used some of them! Indeed, things like print(), str() and int() were functions. And in fact, you’ve probably encountered them in secondary school mathematics! Remember f(x)? One important thing about functions is that most of the time (not always though), it takes something in, and returns something out. It’s like a factory, you give it some raw material and it outputs some transformed stuff. For instance, let’s say we want to transform a variable containing an integer into a character string:

x = 3
x = str(x)
print(x)

As we can see, our str() function takes x as an input, and outputs the transformed version, which we can collect using the equal sign = and store in the x variable to replace its content. Another useful function is range(), which creates a sequence of integers and is often used in combination with for loops. Remember our previous loop:

mylist = [1, 2, 3]
for var in mylist:
    print(var)

We can re-write it using the range() function, to create a sequence of length 3 (which will be from 0 to 2; remember that Python indexing starts from 0!), and extracting and printing all of the elements in the list:

mylist = [1, 2, 3]
for i in range(3):
    print(mylist[i])

It’s a bit more complicated than the previous version, it’s true. But that’s the beauty of programming: all things can be done in a near-infinite number of ways, allowing your creativity to be expressed. Exercise time! Can you make a loop that adds 1 to each element of the list? Answer:


mylist = [1, 2, 3]
for i in range(3):
    mylist[i] = mylist[i] + 1
print(mylist)

If you understand what happened here, in this combination of lists, functions, loops and indexing, great! You are ready to move on.

4.1.10 Packages

Interestingly, Python alone does not include a lot of functions. And that’s also its strength, because it allows you to easily use functions developed by other people, which are stored in packages (or modules). A package is a collection of functions that can be downloaded and used in your code. One of the most popular packages is numpy (for NUMerical PYthon), which includes a lot of functions for maths and scientific programming. It is likely that this package is already installed in your Python distribution. However, installing a package doesn’t mean you can use it. In order to use a package, you have to import it (load it) in your script before using it. This usually happens at the top of a Python file, like this:

import numpy

Once you have imported it (you have to run that line), you can use its functions. For instance, let’s use the square root function included in this package:

x = numpy.sqrt(9)
print(x)

You will notice that we have to first write the package name, then a dot, and then the sqrt() function. Why is it like that? Imagine you load two packages, both having a function named sqrt(). How would the program know which one to use? Here, it knows that it has to look for the sqrt() function in the numpy package. You might think it’s annoying to write the name of the package every time, especially if the package name is long. And this is why we sometimes use aliases. For instance, numpy is often loaded under the shortcut np, which makes it shorter to use:

import numpy as np

x = np.sqrt(9)
print(x)
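To see why the package prefix matters, here is a small sketch using two standard-library modules that both define a function called sqrt(); the prefix tells Python which one you mean:

```python
import math   # real-valued maths
import cmath  # complex-valued maths

# Both modules define a sqrt() function, so the prefix matters:
print(math.sqrt(9))    # the real square root
print(cmath.sqrt(-9))  # the complex square root, which math.sqrt() cannot compute
```

This is exactly the ambiguity that writing `numpy.sqrt()` (or `np.sqrt()`) avoids.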

4.1.11 Lists vs. vectors (arrays)

Packages can also add new types. One important type available through numpy is the array. In short, an array is a container, similar to a list. However, it can only contain one type of thing inside (for instance, only floats, only strings, etc.) and can be multidimensional (imagine a 3D cube made of little cubes containing a value). If an array is one-dimensional (like a list, i.e., a sequence of elements), we can call it a vector. A list can be converted to a vector using the array() function from the numpy package:

mylist = [1, 2, 3]
myvector = np.array(mylist)
print(myvector)


In signal processing, vectors are often used instead of lists to store the signal values, because they are more efficient and allow you to do some cool stuff with them. For instance, remember our exercise above, in which we had to add 1 to each element of the list? Well, using vectors, you can do this directly, like this:

myvector = np.array([1, 2, 3])
myvector = myvector + 1
print(myvector)

Indeed, vectors allow for vectorized operations, which means that any operation is propagated on each element of the vector. And that’s very useful for signal processing :)
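As a small sketch of what “propagated on each element” means in practice, compare a few vectorized operations (this only assumes numpy, imported as np as above):

```python
import numpy as np

myvector = np.array([1, 2, 3])

# Each operation is applied to every element at once:
print(myvector + 1)   # add 1 to each element
print(myvector * 2)   # double each element
print(myvector ** 2)  # square each element

# Operations between two vectors of the same length are element-wise too:
other = np.array([10, 20, 30])
print(myvector + other)
```

No explicit loop is needed; numpy handles the iteration internally, which is both shorter to write and faster to run.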

4.1.12 Conditional indexing

Arrays can also be transformed into arrays of booleans (True or False) using a condition, for instance:

myvector = np.array([1, 2, 3, 2, 1])
vector_of_bools = myvector <= 2  # <= means less than OR equal
print(vector_of_bools)

This returns a vector of the same length, filled with True (if the condition is respected) or False otherwise. And this new vector can be used as a mask to index and subset the original vector. For instance, we can select all the elements of the array that fulfill the condition:

myvector = np.array([1, 2, 3, 2, 1])
mask = myvector <= 2
subset = myvector[mask]
print(subset)

Additionally, we can also modify a subset of values on the fly:

myvector = np.array([1, 2, 3, 2, 1])
myvector[myvector <= 2] = 6
print(myvector)

Here we assigned a new value, 6, to all elements of the vector that respected the condition (i.e., were less than or equal to 2).
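If you prefer not to modify the vector in place, numpy also offers np.where(), which builds a new vector from a condition; a minimal sketch (the values are chosen arbitrarily):

```python
import numpy as np

myvector = np.array([1, 2, 3, 2, 1])

# Where the condition is True take 6, otherwise keep the original value
newvector = np.where(myvector <= 2, 6, myvector)
print(newvector)

# The original vector is left untouched
print(myvector)
```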

4.1.13 Dataframes

If you’ve followed everything until now, congrats! You’re almost there. The last important type that we are going to see is the dataframe. A dataframe is essentially a table with rows and columns. Often, the rows represent different observations and the columns different variables. Dataframes are available in Python through the pandas package, another widely used package, usually imported under the shortcut pd. A dataframe can be constructed from a dictionary: each key becomes the variable name, and the list or vector associated with it becomes the variable values.

import pandas as pd

# Create variables
var1 = [1, 2, 3]
var2 = [5, 6, 7]

# Put them in a dict
data = {"Variable1": var1, "Variable2": var2}

# Convert this dict to a dataframe
data = pd.DataFrame.from_dict(data)
print(data)

This creates a dataframe with 3 rows (the observations) and 2 columns (the variables). You can access a variable by its name:

print(data["Variable1"])

Note that Python is case-sensitive: tHiS is not equivalent to ThIs. And pd.DataFrame has to be written with the D and F in capital letters. This is another common source of beginner errors, so make sure you put capital letters in the right places.
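Once a dataframe exists, its columns behave much like vectors: you can compute with them and add new columns on the fly. A small sketch (the column names are just the ones used above):

```python
import pandas as pd

data = pd.DataFrame({"Variable1": [1, 2, 3], "Variable2": [5, 6, 7]})

# Arithmetic between columns is element-wise, like with numpy vectors
data["Sum"] = data["Variable1"] + data["Variable2"]
print(data)
```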

4.1.14 Reading data

Now that you know how to create a dataframe in Python, note that you can also use pandas to read data from a file (.csv, Excel, etc.) by its path:

import pandas as pd
data = pd.read_excel("C:/Users/Dumbledore/Desktop/myfile.xlsx")  # this is an example
print(data)

Additionally, pandas can also read data directly from the internet! Try running the following:

import pandas as pd
data = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit/master/data/bio_eventrelated_100hz.csv")
print(data)

4.1.15 Next steps

Now that you know the basics, and can distinguish between the different elements of Python code (function calls, variables, etc.), we recommend that you dive in and follow our other examples and tutorials, which will show you some uses of Python to get something out of it.

4.2 Where to start

Hint: This page is under construction. Consider helping us develop it by contributing.

Here are a few examples that are good for getting started with NeuroKit. • Event-related Analysis


CHAPTER FIVE

EXAMPLES

The notebooks in this repo are meant to illustrate what you can do with NeuroKit: they show how easy it has become to use cutting-edge methods while still retaining the liberty to change a myriad of parameters. These notebooks are organized into sections that correspond to NeuroKit’s modules.

5.1 Try the examples in your browser

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see this tutorial).

You are free to click on the link below to run everything. . . without having to install anything! There you’ll find a JupyterLab with notebooks ready to fire up. If you need help figuring out the interface: the secret is Shift+Enter.

5.2 1. Analysis Paradigm

Examples dedicated to specific analysis pipelines, such as event-related paradigms and resting state. Ideas of examples to be implemented:

> Preprocessing feature signals for machine learning analysis
> EEG + physiological activity during resting state
> Comparing interval-related activity from different “mental states” (e.g., meditation, induced emotion vs. neutral)

5.2.1 a) Event-related paradigm

eventrelated.ipynb

Description: This notebook guides you through the initialization of events and the creation of epochs. It shows you how easy it is to compare measures you’ve extracted from different conditions.


5.2.2 b) Interval-related paradigm

intervalrelated.ipynb

Description: Breaks down the steps to extract characteristics of physiological activity for epochs lasting at least a couple of minutes.

5.3 2. Biosignal Processing

Examples dedicated to processing pipelines and to extracting measures from multiple signals at a time. What’s your thing? How do you do it? Ideas of examples to be implemented:

> Batch preprocessing of multiple recordings
> PPG processing for respiration and temperature
> EMG overview (so many muscles to investigate)
> add yours...

5.3.1 a) Custom processing pipeline

custom.ipynb

Description: This notebook breaks down the default NeuroKit pipeline used in the _process() functions. It guides you in creating your own pipeline with the parameters best suited to your signals.

5.4 3. Heart rate and heart cycles

Examples dedicated to the analysis of ECG, PPG and HRV time series. Are you a fan of the neurovisceral integration model? How would you infer a cognitive or affective process with HRV? How do you investigate the asymmetry of cardiac cycles? Ideas of examples to be implemented:

> Benchmark different peak detection methods
> Resting state analysis of HRV
> Comparing resting state and movie watching
> add yours

5.4.1 a) Detecting components of the cardiac cycle

ecg_delineation.ipynb

Description: This notebook illustrates how reliable the peak detection is by analyzing the morphology of each cardiac cycle. It shows you how the P-QRS-T components are extracted.


5.4.2 b) Looking closer at heart beats

heartbeats.ipynb

Description: This notebook gives hints for a thorough investigation of ECG signals by visualizing individual heartbeats, interactively.

5.5 4. Electrodermal activity

Examples dedicated to the analysis of EDA signals. Ideas of examples to be implemented:

> Pain experiments
> Temperature
> add yours

5.5.1 a) Extracting information in EDA

eda.ipynb

Description: This notebook goes to the heart of the complexity of EDA analysis by breaking down how the tonic and phasic components are extracted from the signal.

5.6 5. Respiration rate and respiration cycles

Examples dedicated to the analysis of respiratory signals, e.g., as given by a belt or, eventually, by PPG. Ideas of examples to be implemented:

> Meditation experiments
> Stress regulation
> add yours

5.6.1 a) Extracting Respiration Rate Variability metrics

rrv.ipynb

Description: This notebook breaks down the extraction of variability metrics done by rsp_rrv().

5.7 6. Muscle activity

Examples dedicated to the analysis of EMG signals. Ideas of examples to be implemented:

> Suggestion and muscle activation
> Sleep data analysis
> ... nothing yet!


5.8 Simulate Artificial Physiological Signals

NeuroKit’s core signal processing functions cover electrocardiogram (ECG), respiratory (RSP), electrodermal activity (EDA), and electromyography (EMG) data. Hence, this example shows how to use NeuroKit to simulate these physiological signals with customized parametric control.

[1]: import warnings
warnings.filterwarnings('ignore')

[2]: # Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

[3]: plt.rcParams['figure.figsize'] = [10, 6]  # Bigger images

5.8.1 Cardiac Activity (ECG)

With ecg_simulate(), you can generate an artificial ECG signal of a desired length (in this case, duration=10), noise level, and heart rate. As you can see in the plot below, ecg50 has about half as many heart beats as ecg100, and ecg50 also has more noise in the signal than the latter.

[4]: # Alternate heart rate and noise levels
ecg50 = nk.ecg_simulate(duration=10, noise=0.05, heart_rate=50)
ecg100 = nk.ecg_simulate(duration=10, noise=0.01, heart_rate=100)

# Visualize
ecg_df = pd.DataFrame({"ECG_100": ecg100,
                       "ECG_50": ecg50})
nk.signal_plot(ecg_df, subplots=True)


You can also choose to generate the default, simple simulation based on Daubechies wavelets, which roughly approximates one cardiac cycle, or a more complex one by specifying method="ecgsyn".

[5]: # Alternate methods
ecg_sim = nk.ecg_simulate(duration=10, method="simple")
ecg_com = nk.ecg_simulate(duration=10, method="ecgsyn")

# Visualize
methods = pd.DataFrame({"ECG_Simple": ecg_sim,
                        "ECG_Complex": ecg_com})
nk.signal_plot(methods, subplots=True)


5.8.2 Respiration (RSP)

To simulate a synthetic respiratory signal, you can use rsp_simulate() and choose a specific duration and breathing rate. In the example below, you can see that rsp7 has a lower breathing rate than the two rsp15 signals. You can also decide with which model to generate the signal. The simple rsp15_sim signal uses method="sinusoidal", which approximates a respiratory cycle with a trigonometric sine wave. The complex rsp15_com signal specifies method="breathmetrics", which uses a more advanced model that interpolates inhalation and exhalation pauses between each respiratory cycle.

[6]: # Simulate
rsp15_sim = nk.rsp_simulate(duration=20, respiratory_rate=15, method="sinusoidal")
rsp15_com = nk.rsp_simulate(duration=20, respiratory_rate=15, method="breathmetrics")
rsp7 = nk.rsp_simulate(duration=20, respiratory_rate=7, method="breathmetrics")

# Visualize respiration rate
rsp_df = pd.DataFrame({"RSP7": rsp7,
                       "RSP15_simple": rsp15_sim,
                       "RSP15_complex": rsp15_com})
nk.signal_plot(rsp_df, subplots=True)


5.8.3 Electromyography (EMG)

Now we come to generating an artificial EMG signal using emg_simulate(). Here, you can specify the number of bursts of muscular activity (burst_number) in the signal as well as the duration of the bursts (burst_duration). As you can see, the active muscle periods in EMG2_Longer last longer than those of EMG2, and EMG5 contains more bursts than the former two.

[7]: # Simulate
emg2 = nk.emg_simulate(duration=10, burst_number=2, burst_duration=1.0)
emg2_long = nk.emg_simulate(duration=10, burst_number=2, burst_duration=1.5)
emg5 = nk.emg_simulate(duration=10, burst_number=5, burst_duration=1.0)

# Visualize
emg_df = pd.DataFrame({"EMG2": emg2,
                       "EMG2_Longer": emg2_long,
                       "EMG5": emg5})
nk.signal_plot(emg_df, subplots=True)


5.8.4 Electrodermal Activity (EDA)

Finally, eda_simulate() can be used to generate a synthetic EDA signal of a given duration, specifying the number of skin conductance responses or activity ‘peaks’ (scr_number) and the drift of the signal. You can also modify the noise level of the signal.

[8]: # Simulate
eda1 = nk.eda_simulate(duration=10, scr_number=1, drift=-0.01, noise=0.05)
eda3 = nk.eda_simulate(duration=10, scr_number=3, drift=-0.01, noise=0.01)
eda3_long = nk.eda_simulate(duration=10, scr_number=3, drift=-0.1, noise=0.01)

# Visualize
eda_df = pd.DataFrame({"EDA1": eda1,
                       "EDA3": eda3,
                       "EDA3_Longer": eda3_long})
nk.signal_plot(eda_df, subplots=True)



5.9 Customize your Processing Pipeline

While NeuroKit is designed to be beginner-friendly, experts who want more control over their processing pipeline can also tune the functions to their specific usage. This example shows how to use NeuroKit to customize your own processing pipeline, taking ECG processing as an example.

[1]: # Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
%matplotlib notebook

[2]: plt.rcParams['figure.figsize'] = [15, 9]  # Bigger images


5.9.1 The Default NeuroKit processing pipeline

NeuroKit provides a very useful set of functions, *_process() (e.g., ecg_process(), eda_process(), emg_process(), . . . ), which are all-in-one functions that clean, preprocess and process the signals. They include good and sensible defaults that should suit most users and typical use-cases. That being said, in some cases you might want to have more control over the processing pipeline. This is how ecg_process() is typically used:

[3]: # Simulate ecg signal (you can use your own one)
ecg = nk.ecg_simulate(duration=15, sampling_rate=1000, heart_rate=80)

# Default processing pipeline
signals, info = nk.ecg_process(ecg, sampling_rate=1000)

# Visualize
plot = nk.ecg_plot(signals)


5.9.2 Building your own process() function

Now, if you look at the code of ecg_process() (see here for how to explore the code), you can see that it is in fact very simple. It uses what can be referred to as “mid-level functions”, such as ecg_clean(), ecg_peaks(), ecg_rate(), etc. This means that you can basically re-create the ecg_process() function very easily by calling these mid-level functions:

[22]: # Define a new function
def my_processing(ecg_signal):
    # Do processing
    ecg_cleaned = nk.ecg_clean(ecg_signal, sampling_rate=1000)
    instant_peaks, rpeaks = nk.ecg_peaks(ecg_cleaned, sampling_rate=1000)
    rate = nk.ecg_rate(rpeaks, sampling_rate=1000, desired_length=len(ecg_cleaned))
    quality = nk.ecg_quality(ecg_cleaned, sampling_rate=1000)

    # Prepare output
    signals = pd.DataFrame({"ECG_Raw": ecg_signal,
                            "ECG_Clean": ecg_cleaned,
                            "ECG_Rate": rate,
                            "ECG_Quality": quality})
    signals = pd.concat([signals, instant_peaks], axis=1)
    info = rpeaks

    return signals, info

You can now use this function as you would do with ecg_process().

[23]: # Process the signal using the previously defined function
signals, info = my_processing(ecg)

# Visualize
plot = nk.ecg_plot(signals)


5.9.3 Changing the processing parameters

Now, you might ask: why would you re-create the processing function? Well, it allows you to change the parameters inside as you please. Let’s say you want to use a specific cleaning method. First, look at the documentation for ecg_clean(): you can see that there are several different cleaning methods that can be specified. The default is the NeuroKit method; however, depending on the quality of your signal (and several other factors), other methods may be more appropriate. It is up to you to make this decision. You can now change the methods as you please for each function in the custom processing function that you have written above:

[24]: # Define a new function
def my_processing(ecg_signal):
    # Do processing
    ecg_cleaned = nk.ecg_clean(ecg_signal, sampling_rate=1000, method="engzeemod2012")
    instant_peaks, rpeaks = nk.ecg_peaks(ecg_cleaned, sampling_rate=1000)
    rate = nk.ecg_rate(rpeaks, sampling_rate=1000, desired_length=len(ecg_cleaned))
    quality = nk.ecg_quality(ecg_cleaned, sampling_rate=1000)

    # Prepare output
    signals = pd.DataFrame({"ECG_Raw": ecg_signal,
                            "ECG_Clean": ecg_cleaned,
                            "ECG_Rate": rate,
                            "ECG_Quality": quality})
    signals = pd.concat([signals, instant_peaks], axis=1)
    info = rpeaks

    return signals, info

Similarly, you can select a different method for the peak detection.

5.9.4 Customize even more!

It is possible that none of these methods suit your needs, or that you want to test a new method. Rejoice, as NeuroKit allows you to do that by providing what can be referred to as “low-level” functions. For instance, you can rewrite the cleaning procedure using the signal processing tools offered by NeuroKit:

[25]: def my_cleaning(ecg_signal, sampling_rate):
    detrended = nk.signal_detrend(ecg_signal, order=1)
    cleaned = nk.signal_filter(detrended, sampling_rate=sampling_rate, lowcut=2, highcut=9, method='butterworth')
    return cleaned

You can use this function inside the custom processing function written above:

[26]: # Define a new function
def my_processing(ecg_signal):
    # Do processing
    ecg_cleaned = my_cleaning(ecg_signal, sampling_rate=1000)
    instant_peaks, rpeaks = nk.ecg_peaks(ecg_cleaned, sampling_rate=1000)
    rate = nk.ecg_rate(rpeaks, sampling_rate=1000, desired_length=len(ecg_cleaned))
    quality = nk.ecg_quality(ecg_cleaned, sampling_rate=1000)

    # Prepare output
    signals = pd.DataFrame({"ECG_Raw": ecg_signal,
                            "ECG_Clean": ecg_cleaned,
                            "ECG_Rate": rate,
                            "ECG_Quality": quality})
    signals = pd.concat([signals, instant_peaks], axis=1)
    info = rpeaks

    return signals, info

Congrats, you have created your own processing pipeline! Let’s see how it performs:

[27]: signals, info = my_processing(ecg)
plot = nk.ecg_plot(signals)


This doesn’t look bad :) Can you do better?

5.10 Event-related Analysis

This example shows how to use NeuroKit to extract epochs from data, based on the localization of events and the corresponding physiological signals. That way, you can compare experimental conditions with one another.

[1]: # Load NeuroKit and other useful packages
import neurokit2 as nk
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

[2]: plt.rcParams['figure.figsize'] = [15, 5]  # Bigger images
plt.rcParams['font.size'] = 14


5.10.1 The Dataset

Use the nk.data() function to load the dataset located in NeuroKit’s data folder. It contains 2.5 minutes of biosignals recorded at a frequency of 100 Hz (2.5 x 60 x 100 = 15000 data points). Biosignals: ECG, RSP, EDA + Photosensor (event signal).

[3]: # Get data
data = nk.data("bio_eventrelated_100hz")

This is the data from 1 participant who was presented with 4 images (emotional stimuli, IAPS-like emotional faces), which we will refer to as events. Importantly, the images were marked by a small black rectangle on the screen, which made the photosensor signal go down (and then up again after the image). This is what will allow us to retrieve the location of these events. There were 2 types (the condition) of images shown to the participant: “Negative” vs. “Neutral” in terms of emotion. Each picture was presented for 3 seconds. The following list is the condition order.

[4]: condition_list = ["Negative", "Neutral", "Neutral", "Negative"]

5.10.2 Find Events

These events can be localized and extracted using events_find(). Note that you should also specify whether to select events that are above or below the threshold, using the threshold_keep argument.

[5]: # Find events
events = nk.events_find(data["Photosensor"], threshold_keep='below', event_conditions=condition_list)
events
[5]: {'onset': array([ 1024, 4957, 9224, 12984]), 'duration': array([300, 300, 300, 300]), 'label': array(['1', '2', '3', '4'], dtype='

As we can see, events_find() returns a dict containing the onsets and durations of each corresponding event, along with a label identifying each event and its condition. Each event here lasts for 300 data points (equivalent to 3 seconds sampled at 100 Hz).
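Since onsets and durations are expressed in samples, converting them to seconds only requires dividing by the sampling rate; a quick sketch using the values from the output above (100 Hz sampling rate):

```python
import numpy as np

sampling_rate = 100  # Hz

onsets = np.array([1024, 4957, 9224, 12984])  # in samples
durations = np.array([300, 300, 300, 300])    # in samples

print(onsets / sampling_rate)     # onsets in seconds
print(durations / sampling_rate)  # each event lasted 3 seconds
```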

[6]: # Plot the location of events together with the signals
plot = nk.events_plot(events, data)


The output of events_plot() shows the corresponding events in the signal, with the blue dashed line representing a Negative event and red dashed line representing a Neutral event.

5.10.3 Process the Signals

Now that we have the location of the events, we can go ahead and process the data. Biosignal processing can be done quite easily using NeuroKit with the bio_process() function. Simply provide the appropriate biosignal channels and any additional channels that you want to keep (for example, the photosensor), and bio_process() will take care of the rest. It will return a dataframe containing the processed signals and a dictionary containing useful information.

[7]: # Process the signal
df, info = nk.bio_process(ecg=data["ECG"], rsp=data["RSP"], eda=data["EDA"], sampling_rate=100)

# Visualize
df.plot()  # there is a ton of features now, but not in epochs
[7]:


5.10.4 Create Epochs

We now have to transform this dataframe into epochs, i.e. segments (chunks) of data around the events using epochs_create().

1. We want it to start *1 second before the event onset*

2. and end *6 seconds* afterwards

These are passed into the epochs_start and epochs_end arguments, respectively. Our epochs will then cover the region from -1 s to +6 s (i.e., 700 data points since the signal is sampled at 100Hz).
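The 700-data-point figure follows directly from the window length times the sampling rate; as a small sanity check:

```python
sampling_rate = 100  # Hz
epochs_start = -1    # seconds relative to event onset
epochs_end = 6       # seconds relative to event onset

# Window length in seconds, times samples per second
n_samples = (epochs_end - epochs_start) * sampling_rate
print(n_samples)  # 700 data points per epoch
```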

[8]: # Build and plot epochs
epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-1, epochs_end=6)

Let’s plot some of the signals of the first epoch (and transform them to the same scale for visualization purposes).


[9]: for i, epoch in enumerate(epochs):
    epoch = epochs[epoch]  # iterate epochs

    # Select relevant columns
    epoch = epoch[['ECG_Clean', 'ECG_Rate', 'RSP_Rate',
                   'RSP_Phase', 'EDA_Phasic', 'EDA_Tonic']]

    title = events['condition'][i]  # get title from condition list

    nk.standardize(epoch).plot(title=title, legend=True)  # plot scaled signals


5.10.5 Extract Event Related Features

With these segments, we are able to compare how the physiological signals vary across the different events. We do this by: 1. Iterating through our epochs object 2. Storing the mean value of each feature for each condition in a new dictionary 3. Saving the results in a readable format We can call them the epochs-dictionary, the mean-dictionary and the results-dataframe.

[10]: df = {}  # Initialize an empty dict

for epoch_index in epochs:
    df[epoch_index] = {}  # then initialize an empty dict inside of it, keyed by the epoch index

    # Save a temp var with the dictionary called in the epochs-dictionary
    epoch = epochs[epoch_index]

    # We want its features:

    # Feature 1: ECG
    ecg_baseline = epoch["ECG_Rate"].loc[-100:0].mean()  # Baseline
    ecg_mean = epoch["ECG_Rate"].loc[0:400].mean()  # Mean heart rate in the 0-4 seconds
    # Store ECG in df
    df[epoch_index]["ECG_Rate"] = ecg_mean - ecg_baseline  # Correct for baseline

    # Feature 2: EDA - SCR
    scr_max = epoch["SCR_Amplitude"].loc[0:600].max()  # Maximum SCR peak
    # If no SCR, consider the magnitude, i.e. that the value is 0
    if np.isnan(scr_max):
        scr_max = 0
    # Store SCR in df
    df[epoch_index]["SCR_Magnitude"] = scr_max

    # Feature 3: RSP
    rsp_baseline = epoch["RSP_Rate"].loc[-100:0].mean()  # Baseline
    rsp_rate = epoch["RSP_Rate"].loc[0:600].mean()
    # Store RSP in df
    df[epoch_index]["RSP_Rate"] = rsp_rate - rsp_baseline  # Correct for baseline

df = pd.DataFrame.from_dict(df, orient="index")  # Convert to a dataframe
df["Condition"] = condition_list  # Add the conditions
df  # Print DataFrame

[10]:    ECG_Rate  SCR_Magnitude  RSP_Rate Condition
1   -4.286137       3.114808  2.729480  Negative
2   -5.387987       0.000000  2.094437   Neutral
3   -1.400696       0.000000 -0.062720   Neutral
4   -3.804883       1.675922 -1.674218  Negative

5.10.6 Plot Event Related Features

You can now plot and compare how these features differ according to the event of interest.

[11]: sns.boxplot(x="Condition", y="ECG_Rate", data=df)
[11]:

[12]: sns.boxplot(x="Condition", y="RSP_Rate", data=df)
[12]:

[13]: sns.boxplot(x="Condition", y="SCR_Magnitude", data=df)
[13]:

Then interpret: as we can see, there seems to be a difference between the negative and the neutral pictures. Negative stimuli, as compared to neutral stimuli, were related to a stronger cardiac deceleration (i.e., a greater decrease in heart rate), an accelerated breathing rate, and a higher SCR magnitude.

5.10.7 Important remarks:

You can’t break anything if you’re on Binder, so have fun. Keep in mind that this is for illustration purposes only: data size limits on GitHub force us to downsample and to include only one participant (the sampling rate would have to be >250 Hz, and you can’t do statistics with 4 observations in 1 subject). We invite you to read the reporting guidelines for biosignal measures. For ECG-PPG/HRV: Quintana, Alvarez & Heathers, 2016 - GRAPH


5.11 Interval-related Analysis

This example shows how to use NeuroKit to analyze longer periods of data (i.e., greater than 10 seconds), such as resting state data. If you are looking to perform event-related analysis on epochs, you can refer to the NeuroKit example here.

[1]: # Load NeuroKit and other useful packages
import neurokit2 as nk
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

[2]: plt.rcParams['figure.figsize'] = [15, 9]  # Bigger images
plt.rcParams['font.size'] = 13

5.11.1 The Dataset

First, download the dataset located in the GitHub repository. It contains 5 minutes of physiological signals recorded at a frequency of 100 Hz (5 x 60 x 100 = 30000 data points). It contains the following signals: ECG, PPG, RSP.

[3]: # Get data
data = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit/master/data/bio_resting_5min_100hz.csv")

This is the resting state data from 1 participant who was asked to close his/her eyes for 8 minutes, trying not to think of anything as well as not to fall asleep.

5.11.2 Process the Signals

In this analysis, we will focus on extracting ECG and RSP features. To process the respective physiological signals, you can use ecg_process() and rsp_process(). You can then visualize these signals using ecg_plot() and rsp_plot(). For the purposes of this example, we will select just 3000 data points (or 30 s) to visualize. Note: do remember to specify the correct sampling_rate (in this case, 100 Hz) at which the signals were recorded, in all the relevant functions.

[4]: # Process ecg
ecg_signals, info = nk.ecg_process(data["ECG"], sampling_rate=100)
plot = nk.ecg_plot(ecg_signals[:3000], sampling_rate=100)


[5]: # Process rsp
rsp_signals, info = nk.rsp_process(data["RSP"], sampling_rate=100)
plot = nk.rsp_plot(rsp_signals[:3000], sampling_rate=100)


5.11.3 Extract Features

Now that we have the processed signals, we can perform the analysis using ecg_intervalrelated() and rsp_intervalrelated(). Simply provide the processed dataframe and these functions will return a dataframe of the features pertaining to the specific signal. These features will be quite different from event-related features (see the event-related analysis example), as these signals were generated over a longer period of time. Hence, apart from the mean signal rate, variability metrics pertaining to heart rate variability (HRV) and respiratory rate variability (RRV) are also extracted here.

[6]: nk.ecg_intervalrelated(ecg_signals)
[6]:    ECG_Rate_Mean  HRV_RMSSD  HRV_MeanNN  HRV_SDNN  HRV_SDSD  HRV_CVNN  \
0          86.394304   3.883777   69.475638  4.903604  3.888256   0.07058

     HRV_CVSD  HRV_MedianNN  HRV_MadNN  HRV_MCVNN  ...  HRV_LFn   HRV_HFn  \
0    0.055901          69.0     4.4478   0.064461  ...      NaN  0.885628

     HRV_LnHF   HRV_SD1   HRV_SD2  HRV_SD2SD1   HRV_CSI  HRV_CVI  \
0    1.137604  2.749412  4.762122    1.732051  1.732051  2.32116

     HRV_CSI_Modified  HRV_SampEn
0           32.992946    1.978637

[1 rows x 30 columns]


[7]: nk.rsp_intervalrelated(rsp_signals)
[7]:    RSP_Rate_Mean  RSP_Amplitude_Mean    RRV_SDBB   RRV_RMSSD    RRV_SDSD  \
0          15.744233            0.397941  100.662988  119.691326  120.504248

     RRV_VLF  RRV_LF         RRV_HF  RRV_LFHF  RRV_LFn  RRV_HFn    RRV_SD1  \
0        NaN     NaN  793512.619536       NaN      NaN      NaN  85.209371

        RRV_SD2  RRV_SD2SD1  RRV_ApEn  RRV_SampEn  RRV_DFA_2
0    114.041384    1.338367  0.717675    1.504077   0.618535

5.11.4 Optional: Segmenting the Data

If you want to segment your data for analysis, such as analyzing two separate portions of your resting state data, you can simply split the ecg_signals dataframe into epochs using epochs_create(). Using this example dataset, let’s say you want to analyze the first half and the second half of the ECG data. Each half would then last for 2.5 x 60 = 150 s. In this function, we specify the onsets of the events to be at the 0th data point (for the first half of the data) and the 15000th data point (for the second half), since there are 30000 data points in total.
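Rather than hard-coding 15000, you can derive the split points from the length of the processed dataframe; a small sketch (a plain number stands in here for len(ecg_signals)):

```python
# Stand-in for the processed dataframe: 30000 rows sampled at 100 Hz
n_rows = 30000
sampling_rate = 100

onsets = [0, n_rows // 2]                     # start of each half, in samples
epoch_length = (n_rows // 2) / sampling_rate  # duration of each half, in seconds

print(onsets)        # [0, 15000]
print(epoch_length)  # 150.0 seconds
```

These are exactly the values passed to events and epochs_end in the cell below.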

[8]: # Half the data
epochs = nk.epochs_create(ecg_signals, events=[0, 15000], sampling_rate=100, epochs_start=0, epochs_end=150)

This returns a dictionary of 2 processed ECG dataframes, which you can then enter into ecg_intervalrelated().

[9]: # Analyze
nk.ecg_intervalrelated(epochs)
[9]:    ECG_Rate_Mean  HRV_RMSSD  HRV_MeanNN  HRV_SDNN  HRV_SDSD  HRV_CVNN  \
1        86.377089   3.638450   69.497674  5.167181  3.645389  0.074350
2        86.411519   4.032578   69.460465  4.648090  4.042033  0.066917

     HRV_CVSD  HRV_MedianNN  HRV_MadNN  HRV_MCVNN  ...  HRV_LFn  HRV_HFn  \
1    0.052354          69.0     4.4478   0.064461  ...      NaN      NaN
2    0.058056          69.0     4.4478   0.064461  ...      NaN      NaN

     HRV_LnHF   HRV_SD1   HRV_SD2  HRV_SD2SD1   HRV_CSI   HRV_CVI  \
1         NaN  2.577680  4.464672    1.732051  1.732051  2.265138
2         NaN  2.858149  4.950459    1.732051  1.732051  2.354850

     HRV_CSI_Modified  HRV_SampEn
1           30.932155    1.252763
2           34.297785    1.881786

[2 rows x 30 columns]

This then returns a dataframe of the analyzed features, with the rows representing the respective segmented signals. Try doing this with your own signals!
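To compare the two segments, you can operate on the returned dataframe directly: for instance, subtracting the first row from the second gives the change in each feature between the two halves. A sketch using made-up values in place of the real output:

```python
import pandas as pd

# Made-up stand-in for the output of nk.ecg_intervalrelated(epochs)
results = pd.DataFrame({"ECG_Rate_Mean": [86.38, 86.41],
                        "HRV_RMSSD": [3.64, 4.03]},
                       index=[1, 2])

# Change of each feature from the first half to the second
change = results.loc[2] - results.loc[1]
print(change)
```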


5.12 Analyze Electrodermal Activity (EDA)

This example shows how to use NeuroKit2 to extract features from Electrodermal Activity (EDA).

[6]: # Load the NeuroKit package and other useful packages
     import neurokit2 as nk
     import matplotlib.pyplot as plt
     %matplotlib inline

[7]: plt.rcParams['figure.figsize'] = [15, 5]  # Bigger images

5.12.1 Extract the cleaned EDA signal

In this example, we will use a simulated EDA signal. However, you can use any signal you have (for instance, a signal extracted from the dataframe returned by read_acqknowledge()).

[9]: # Simulate 10 seconds of EDA signal (recorded at 250 samples / second)
     eda_signal = nk.eda_simulate(duration=10, sampling_rate=250, scr_number=3, drift=0.01)

Once you have a raw EDA signal in the shape of a vector (i.e., a one-dimensional array), or a list, you can use eda_process() to process it.

[10]: # Process the raw EDA signal
      signals, info = nk.eda_process(eda_signal, sampling_rate=250)

Note: It is critical that you specify the correct sampling rate of your signal throughout many processing functions, as this allows NeuroKit to have a time reference.

This function outputs two elements: a dataframe containing the different signals (e.g., the raw signal, the cleaned signal, SCR samples marking the different features, etc.), and a dictionary containing information about the Skin Conductance Response (SCR) peaks (e.g., onsets, peak amplitude, etc.).

5.12.2 Locate Skin Conductance Response (SCR) features

The processing function does two important things for our purpose: Firstly, it cleans the signal. Secondly, it detects the location of 1) peak onsets, 2) peak amplitude, and 3) half-recovery time. Let’s extract these from the output.

[11]: # Extract clean EDA and SCR features
      cleaned = signals["EDA_Clean"]
      features = [info["SCR_Onsets"], info["SCR_Peaks"], info["SCR_Recovery"]]

We can now visualize the location of the peak onsets, the peak amplitude, as well as the half-recovery time points in the cleaned EDA signal, respectively marked by the red dashed line, blue dashed line, and orange dashed line.

[12]: # Visualize SCR features in cleaned EDA signal
      plot = nk.events_plot(features, cleaned, color=['red', 'blue', 'orange'])


5.12.3 Decompose EDA into Phasic and Tonic components

We can also decompose the EDA signal into its phasic and tonic components, or more specifically, the Phasic Skin Conductance Response (SCR) and the Tonic Skin Conductance Level (SCL) respectively. The SCR represents the stimulus-dependent fast changing signal whereas the SCL is slow-changing and continuous. Separating these two signals helps to provide a more accurate estimation of the true SCR amplitude.
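eda_phasic() implements dedicated decomposition methods. Purely to illustrate the idea of a slow component plus a fast residual, here is a rough standalone sketch using a moving-average baseline; this is not the algorithm NeuroKit uses:

```python
import numpy as np

def split_tonic_phasic(signal, window):
    """Illustrative split: a moving-average baseline stands in for the slow
    tonic level (SCL); the residual stands in for the fast phasic part (SCR).
    (NeuroKit's eda_phasic() uses proper dedicated methods instead.)"""
    kernel = np.ones(window) / window
    tonic = np.convolve(signal, kernel, mode="same")
    phasic = signal - tonic
    return tonic, phasic

# Slow drift plus one brief "response"
t = np.linspace(0, 10, 1000)
signal = 0.1 * t + np.exp(-((t - 5) ** 2) / 0.01)
tonic, phasic = split_tonic_phasic(signal, window=200)

# The two components sum back to the original signal
print(np.allclose(tonic + phasic, signal))  # True
```

The decomposition is additive by construction: whatever is removed as tonic level reappears in the phasic residual.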

[13]: # Filter phasic and tonic components
      data = nk.eda_phasic(nk.standardize(eda_signal), sampling_rate=250)

Note: here we standardized the raw EDA signal before the decomposition, which can be useful in the presence of high inter-individual variations.

We can now add the raw signal to the dataframe containing the two signals, and plot them!
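Standardizing a signal means z-scoring it, i.e., subtracting the mean and dividing by the standard deviation (the exact ddof convention used by nk.standardize() is assumed here):

```python
import numpy as np

def standardize(x):
    """Z-score a signal: zero mean, unit standard deviation (ddof assumed)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

z = standardize([2.0, 4.0, 6.0, 8.0])
print(z.mean(), np.std(z, ddof=1))  # mean ~0, standard deviation ~1
```

After standardization, amplitudes are expressed in units of standard deviations, which is what makes signals comparable across individuals.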

[14]: data["EDA_Raw"] = eda_signal  # Add raw signal
      data.plot()
[14]:


5.12.4 Quick Plot

You can obtain all of these features by using the eda_plot() function on the dataframe of processed EDA.

[15]: # Plot EDA signal
      plot = nk.eda_plot(signals)

5.13 Analyze Respiratory Rate Variability (RRV)

Respiratory Rate Variability (RRV), that is, variation in respiratory rhythm, is a crucial index of general health and respiratory complications. This example shows how to use NeuroKit to perform RRV analysis.

5.13.1 Download Data and Extract Relevant Signals

[1]: # Load NeuroKit and other useful packages
     import neurokit2 as nk
     import numpy as np
     import pandas as pd
     import matplotlib.pyplot as plt
     %matplotlib inline

[2]: plt.rcParams['figure.figsize'] = 15, 5  # Bigger images

In this example, we will download a dataset that contains electrocardiogram, respiratory, and electrodermal activity signals, and extract only the respiratory (RSP) signal.

[3]: # Get data
     data = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit/master/data/bio_eventrelated_100hz.csv")
     rsp = data["RSP"]
     nk.signal_plot(rsp, sampling_rate=100)  # Visualize


You now have the raw RSP signal in the shape of a vector (i.e., a one-dimensional array). You can then clean it using rsp_clean() and extract the inhalation peaks of the signal using rsp_peaks(). This will output 1) a dataframe indicating the occurrences of inhalation peaks and exhalation troughs ("1" marked in a list of zeros), and 2) a dictionary showing the samples of peaks and troughs.

Note: As the dataset has a frequency of 100 Hz, make sure the ``sampling_rate`` is also set to 100 Hz. It is critical that you specify the correct sampling rate of your signal throughout all the processing functions.
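The two output formats describe the same information and are easy to convert between; a toy sketch with made-up peak positions:

```python
import numpy as np

n_samples = 20
peak_samples = np.array([3, 9, 16])  # dictionary-style: samples of the peaks

# Samples -> marker series ("1" marked in a list of zeros)
markers = np.zeros(n_samples, dtype=int)
markers[peak_samples] = 1

# Marker series -> samples
recovered = np.flatnonzero(markers)
print(recovered.tolist())  # [3, 9, 16]
```

The marker series is convenient for plotting alongside the signal, while the list of samples is convenient for indexing.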

[4]: # Clean signal
     cleaned = nk.rsp_clean(rsp, sampling_rate=100)

# Extract peaks
df, peaks_dict = nk.rsp_peaks(cleaned)
info = nk.rsp_fixpeaks(peaks_dict)
formatted = nk.signal_formatpeaks(info, desired_length=len(cleaned),
                                  peak_indices=info["RSP_Peaks"])

[5]: nk.signal_plot(pd.DataFrame({"RSP_Raw": rsp, "RSP_Clean": cleaned}),
                    sampling_rate=100, subplots=True)

[6]: candidate_peaks = nk.events_plot(peaks_dict['RSP_Peaks'], cleaned)


[7]: fixed_peaks = nk.events_plot(info['RSP_Peaks'], cleaned)

[8]: # Extract rate
     rsp_rate = nk.rsp_rate(formatted, desired_length=None, sampling_rate=100)
     # Note: You can also replace info with the peaks dictionary

     # Visualize
     nk.signal_plot(rsp_rate, sampling_rate=100)
     plt.ylabel('BPM')
[8]: Text(0, 0.5, 'BPM')


5.13.2 Analyse RRV

Now that we have extracted the respiratory rate signal and the peaks dictionary, you can input these into rsp_rrv(). This outputs a variety of RRV indices including time-domain, frequency-domain, and nonlinear features. Examples of time-domain features include RMSSD (the root mean square of successive differences) and SDBB (the standard deviation of the breath-to-breath intervals). Power spectral indices (e.g., LF, HF, LFHF) and entropy measures (e.g., sample entropy, SampEn, where smaller values indicate that the respiratory rate is regular and predictable) are examples of frequency-domain and nonlinear features respectively. A Poincaré plot is also shown when setting show=True, plotting each breath-to-breath interval against the next successive one. It shows the distribution of successive respiratory rates.
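For reference, SDBB and RMSSD can be computed from the breath-to-breath intervals in a few lines (toy intervals in milliseconds; rsp_rrv() computes these and many more indices for you):

```python
import numpy as np

def sdbb(bb_intervals):
    """Standard deviation of the breath-to-breath (BB) intervals."""
    return np.std(bb_intervals, ddof=1)

def rmssd(bb_intervals):
    """Root mean square of successive differences between BB intervals."""
    diffs = np.diff(bb_intervals)
    return np.sqrt(np.mean(diffs ** 2))

bb = np.array([4000.0, 4200.0, 3900.0, 4100.0])  # toy BB intervals (ms)
print(round(sdbb(bb), 2))   # 129.1
print(round(rmssd(bb), 2))  # 238.05
```

SDBB reflects overall variability of the breathing rhythm, whereas RMSSD is dominated by breath-to-breath (short-term) changes.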

[9]: rrv = nk.rsp_rrv(rsp_rate, info, sampling_rate=100, show=True)
     rrv
Neurokit warning: signal_psd(): The duration of recording is too short to support a sufficiently long window for high frequency resolution. Consider using a longer recording or increasing the `min_frequency`
[9]:      RRV_SDBB    RRV_RMSSD     RRV_SDSD  RRV_VLF      RRV_LF       RRV_HF  \
     0  1030.411296  1269.625397  1286.590811      0.0  203.846501  2000.465169

        RRV_LFHF   RRV_LFn   RRV_HFn     RRV_SD1     RRV_SD2  RRV_SD2SD1  RRV_ApEn  \
     0    0.1019  0.092476  0.907524  909.757087  1138.34833    1.251266  0.496939

        RRV_SampEn   RRV_DFA
     0    0.693147  0.755783


The Poincaré plot is a simple visualization tool for short-term (SD1) and long-term (SD2) variability in respiratory rhythm.
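SD1 and SD2 also have a closed form in terms of the interval variance and the variance of the successive differences (the common variance-based formulation, assumed here):

```python
import numpy as np

def poincare_sd1_sd2(intervals):
    """Short-term (SD1) and long-term (SD2) variability of a Poincaré plot,
    using the common variance-based formulation."""
    var_all = np.var(intervals, ddof=1)
    var_diff = np.var(np.diff(intervals), ddof=1)
    sd1 = np.sqrt(var_diff / 2)
    sd2 = np.sqrt(2 * var_all - var_diff / 2)
    return sd1, sd2

# Toy, slowly oscillating BB intervals (ms): long-term variability dominates
bb = np.array([4000.0, 4100.0, 4150.0, 4100.0, 4000.0, 3950.0])
sd1, sd2 = poincare_sd1_sd2(bb)
print(round(sd1, 1), round(sd2, 1))  # 58.1 92.9
```

For a slowly drifting series like this one, SD2 exceeds SD1, i.e., the Poincaré cloud is elongated along the identity line.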


See the documentation for the full reference.

The RRV method is taken from Soni et al. (2019).

5.14 ECG-Derived Respiration (EDR) Analysis

ECG-derived respiration (EDR) is the extraction of respiratory information from the ECG; it is a noninvasive way to monitor respiratory activity when respiratory signals are not recorded. In clinical settings, this is convenient because it allows cardiac and respiratory activity to be monitored simultaneously from a single recorded ECG signal. This example shows how to use NeuroKit to perform EDR analysis.

[1]: # Load NeuroKit and other useful packages
     import neurokit2 as nk
     import numpy as np
     import pandas as pd
     import matplotlib.pyplot as plt
     %matplotlib inline

[2]: plt.rcParams['figure.figsize'] = [15, 5]  # Bigger images

5.14.1 Download ECG Data

In this example, we will download a dataset containing an ECG signal sampled at 1000 Hz.

[3]: # Get data
     ecg = np.array(pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit/dev/data/ecg_1000hz.csv"))[:, 1]

     # Visualize signal
     nk.signal_plot(ecg)


5.14.2 Extraction of ECG Features

Now you can extract the R-peaks of the signal using ecg_peaks() and compute the heart period using ecg_rate().

Note: As the dataset has a frequency of 1000 Hz, make sure the ``sampling_rate`` is also set to 1000 Hz. It is critical that you specify the correct sampling rate of your signal throughout all the processing functions.

[4]: # Extract peaks
     rpeaks, info = nk.ecg_peaks(ecg, sampling_rate=1000)

     # Compute rate
     ecg_rate = nk.ecg_rate(rpeaks, sampling_rate=1000, desired_length=len(ecg))

5.14.3 Analyse EDR

Now that we have an array of the heart period, we can input this into ecg_rsp() to extract the EDR. The default method is from van Gent et al. (2019); see the full reference in the documentation (run nk.ecg_rsp?).
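The published methods essentially isolate the respiratory-band oscillation hidden in the heart-rate series. As a rough standalone illustration only (not the van Gent algorithm), a difference of two moving averages acts as a crude band-pass:

```python
import numpy as np

def crude_edr(rate, fs, fast_s=0.5, slow_s=5.0):
    """Crude respiratory-band filter: subtract a slow moving average (the
    trend) from a fast one (the denoised signal). Illustration only;
    nk.ecg_rsp() implements the published methods."""
    def moving_avg(x, seconds):
        w = max(int(seconds * fs), 1)
        return np.convolve(x, np.ones(w) / w, mode="same")
    return moving_avg(rate, fast_s) - moving_avg(rate, slow_s)

fs = 10                            # assumed sampling of the rate series (Hz)
t = np.arange(0, 60, 1 / fs)
# 70 bpm baseline modulated at 0.25 Hz (15 breaths per minute)
rate = 70 + 2 * np.sin(2 * np.pi * 0.25 * t)
edr = crude_edr(rate, fs)

# Away from the edges, the recovered oscillation tracks the breathing cycle
interior = slice(100, 500)
r = np.corrcoef(edr[interior], np.sin(2 * np.pi * 0.25 * t[interior]))[0, 1]
print(r > 0.9)  # True
```

The baseline heart rate is removed and the respiratory modulation survives, which is the core idea behind EDR.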

[5]: edr = nk.ecg_rsp(ecg_rate, sampling_rate=1000)

     # Visual comparison
     nk.signal_plot(edr)

The function ecg_rsp() incorporates different methods to compute EDR. For a visual comparison of the different methods, we can create a dataframe with one EDR column per method, and then plot it, like so:

[16]: edr_df = pd.DataFrame({"Van Gent et al.": nk.ecg_rsp(ecg_rate, sampling_rate=1000),
                             "Charlton et al.": nk.ecg_rsp(ecg_rate, sampling_rate=1000, method="charlton2016"),
                             "Soni et al.": nk.ecg_rsp(ecg_rate, sampling_rate=1000, method="soni2019"),
                             "Sarkar et al.": nk.ecg_rsp(ecg_rate, sampling_rate=1000, method="sarkar2015")})

[17]: nk.signal_plot(edr_df, subplots=True)



5.15 Extract and Visualize Individual Heartbeats

This example shows how to use NeuroKit to extract and visualize the QRS complexes (individual heartbeats) from an electrocardiogram (ECG).

[17]: # Load NeuroKit and other useful packages
      import neurokit2 as nk
      import numpy as np
      import matplotlib.pyplot as plt
      %matplotlib inline

[18]: plt.rcParams['figure.figsize'] = [15, 9]  # Bigger images

5.15.1 Extract the cleaned ECG signal

In this example, we will use a simulated ECG signal. However, you can use any of your own signals (for instance, a signal extracted from the dataframe returned by read_acqknowledge()).

[19]: # Simulate 30 seconds of ECG signal (recorded at 250 samples / second)
      ecg_signal = nk.ecg_simulate(duration=30, sampling_rate=250)

Once you have a raw ECG signal in the shape of a vector (i.e., a one-dimensional array), or a list, you can use ecg_process() to process it.

Note: It is critical that you specify the correct sampling rate of your signal throughout many processing functions, as this allows NeuroKit to have a time reference.

[20]: # Automatically process the (raw) ECG signal
      signals, info = nk.ecg_process(ecg_signal, sampling_rate=250)

This function outputs two elements: a dataframe containing the different signals (raw, cleaned, etc.) and a dictionary containing various additional information (peak locations, etc.).


5.15.2 Extract R-peaks location

The processing function does two important things for our purpose: 1) it cleans the signal and 2) it detects the location of the R-peaks. Let’s extract these from the output.

[21]: # Extract clean ECG and R-peaks location
      rpeaks = info["ECG_R_Peaks"]
      cleaned_ecg = signals["ECG_Clean"]

Great. We can now mark the location of the R-peaks in the signal to make sure they were detected correctly.

[22]: # Visualize R-peaks in ECG signal
      plot = nk.events_plot(rpeaks, cleaned_ecg)

Once we know where the R-peaks are located, we can create windows of signal around them (for instance, windows of 1 second, starting 400 ms before each R-peak), which we can refer to as epochs.
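The window arithmetic is simple: a window given in seconds maps to sample offsets around each R-peak. A sketch of what epoching has to do internally (simplified, ignoring edge handling):

```python
def epoch_bounds(event_samples, sampling_rate, start_s, end_s):
    """Convert a per-event window from seconds to (start, end) sample indices."""
    # round() avoids float truncation surprises (e.g. 0.6 * 250 -> 149.99...)
    start = round(start_s * sampling_rate)
    end = round(end_s * sampling_rate)
    return [(event + start, event + end) for event in event_samples]

# 1-second windows starting 400 ms before each R-peak, at 250 Hz
rpeaks = [500, 760, 1010]
print(epoch_bounds(rpeaks, sampling_rate=250, start_s=-0.4, end_s=0.6))
# [(400, 650), (660, 910), (910, 1160)]
```

Each (start, end) pair can then be used to slice the continuous signal into one epoch per heart beat.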

5.15.3 Segment the signal around the heart beats

You can now epoch all of these individual heart beats, synchronized by their R-peaks, using the ecg_segment() function.

[23]: # Plotting all the heart beats
      epochs = nk.ecg_segment(cleaned_ecg, rpeaks=None, sampling_rate=250, show=True)

This creates a dictionary of dataframes, one for each 'epoch' (in this case, each heart beat).
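Because the result is a plain Python dict mapping each epoch label to a dataframe, per-beat summaries are one comprehension away (toy stand-in data with a hypothetical 'Signal' column):

```python
import pandas as pd

# Toy stand-in for the ecg_segment() output: one DataFrame per heart beat
epochs = {
    "1": pd.DataFrame({"Signal": [0.1, 0.9, 0.2]}),
    "2": pd.DataFrame({"Signal": [0.0, 1.1, 0.1]}),
}

# One summary value per beat (here, the peak amplitude of each beat)
summary = pd.Series({label: df["Signal"].max() for label, df in epochs.items()})
print(summary["1"], summary["2"])  # 0.9 1.1
```

The same pattern works for any per-epoch statistic (mean, area under the curve, etc.).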

5.15.4 Advanced Plotting

This section covers more advanced plotting and visualization of all the heartbeat segments. The code below uses packages other than NeuroKit2 to manually set the colour gradient of the signals and to create a more interactive experience for the user: by hovering your cursor over each signal, an annotation with the index of the corresponding heart beat is shown.

Custom colors and legend

Here, we define a function to create the epochs. It takes in cleaned as the cleaned signal dataframe, and peaks as the array of R-peaks locations.

[29]: %matplotlib notebook
      plt.rcParams['figure.figsize'] = [10, 6]  # resize

      # Define a function to create epochs
      def plot_heartbeats(cleaned, peaks, sampling_rate=None):
          heartbeats = nk.epochs_create(cleaned, events=peaks, epochs_start=-0.3,
                                        epochs_end=0.4, sampling_rate=sampling_rate)
          heartbeats = nk.epochs_to_df(heartbeats)
          return heartbeats

      heartbeats = plot_heartbeats(cleaned_ecg, peaks=rpeaks, sampling_rate=250)
      heartbeats.head()


[29]:      Signal  Index  Label      Time
      0 -0.188295    141      1 -0.300000
      1 -0.182860    142      1 -0.295977
      2 -0.177281    143      1 -0.291954
      3 -0.171530    144      1 -0.287931
      4 -0.165576    145      1 -0.283908

We then pivot the dataframe so that each column corresponds to the signal values of one channel, or Label.
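The same pivot() call on a small toy long-format table makes the reshape explicit:

```python
import pandas as pd

# Long format: one row per (Time, Label) observation
long_df = pd.DataFrame({
    "Time":   [0.0, 0.1, 0.0, 0.1],
    "Label":  ["1", "1", "2", "2"],
    "Signal": [-0.18, -0.17, -0.13, -0.12],
})

# Wide format: one row per time point, one column per Label
wide = long_df.pivot(index="Time", columns="Label", values="Signal")
print(wide.shape)  # (2, 2)
```

Each column of the wide table is now a complete signal for one heartbeat, ready to be plotted as its own line.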

[30]: heartbeats_pivoted = heartbeats.pivot(index='Time', columns='Label', values='Signal')
      heartbeats_pivoted.head()
[30]: Label             1        10        11        12        13        14  \
      Time
      -0.300000 -0.188295 -0.130393 -0.133485 -0.142082 -0.129604 -0.128826
      -0.295977 -0.182860 -0.129565 -0.132160 -0.141467 -0.128912 -0.128071
      -0.291954 -0.177281 -0.128556 -0.130639 -0.140687 -0.128097 -0.127219
      -0.287931 -0.171530 -0.127327 -0.128864 -0.139683 -0.127114 -0.126234
      -0.283908 -0.165576 -0.125839 -0.126767 -0.138381 -0.125906 -0.125059

      Label            15        16        17        18  ...        32        33  \
      Time                                               ...
      -0.300000 -0.134212 -0.126986 -0.128488 -0.122148  ... -0.129433 -0.155517
      -0.295977 -0.132954 -0.125978 -0.127522 -0.121446  ... -0.128514 -0.154457
      -0.291954 -0.131499 -0.124859 -0.126416 -0.120663  ... -0.127415 -0.153226
      -0.287931 -0.129797 -0.123588 -0.125143 -0.119759  ... -0.126073 -0.151793
      -0.283908 -0.127774 -0.122108 -0.123664 -0.118693  ... -0.124416 -0.150128

      Label            34        35         4         5         6         7  \
      Time
      -0.300000 -0.097891 -0.186199 -0.115438 -0.133537 -0.136313 -0.137583
      -0.295977 -0.096876 -0.190515 -0.114778 -0.132642 -0.134961 -0.136883
      -0.291954 -0.095801 -0.194216 -0.113998 -0.131618 -0.133385 -0.136045
      -0.287931 -0.094629 -0.197353 -0.113073 -0.130441 -0.131522 -0.135024
      -0.283908 -0.093312 -0.199973 -0.111967 -0.129071 -0.129285 -0.133753

      Label             8         9
      Time
      -0.300000 -0.137964 -0.129776
      -0.295977 -0.137485 -0.128841
      -0.291954 -0.136902 -0.127814
      -0.287931 -0.136184 -0.126646
      -0.283908 -0.135299 -0.125278

      [5 rows x 35 columns]

[31]: # Import dependencies
      import matplotlib.pyplot as plt

      # Prepare figure
      fig, ax = plt.subplots()
      plt.close(fig)
      ax.set_title("Individual Heart Beats")
      ax.set_xlabel("Time (seconds)")

      # Set labels for each signal
      labels = list(heartbeats_pivoted)
      labels = ['Channel ' + str(x) for x in labels]

      # Get color map
      cmap = iter(plt.cm.YlOrRd(np.linspace(0, 1, int(heartbeats["Label"].nunique()))))

      # Create empty list to contain the plot of each signal
      lines = []

      for i, x, color in zip(labels, heartbeats_pivoted, cmap):
          line, = ax.plot(heartbeats_pivoted[x], label='%s' % i, color=color)
          lines.append(line)

      # Show figure
      fig

Interactivity

This section of the code adds the aesthetics and interactivity of the plot. Unfortunately, the interactivity is not active in this rendered example, but it should work in your console! As you hover your cursor over each signal, an annotation with the channel that produced it is shown; the figure shown here is a still image.

Note: you need to install the ``mplcursors`` package for the interactive part (``pip install mplcursors``).

[32]: # Import packages
      import ipywidgets as widgets
      from ipywidgets import interact, interact_manual
      import mplcursors

[33]: # Obtain hover cursor
      mplcursors.cursor(lines, hover=True, highlight=True).connect(
          "add", lambda sel: sel.annotation.set_text(sel.artist.get_label()))

      # Return figure
      fig

Locate P, Q, S and T waves in ECG

This example shows how to use NeuroKit to delineate the ECG peaks in Python. This means detecting and locating all components of the QRS complex, including P-peaks and T-peaks, as well as their onsets and offsets, from an ECG signal.

[4]: # Load NeuroKit and other useful packages
     import neurokit2 as nk
     import numpy as np
     import pandas as pd
     import matplotlib.pyplot as plt
     import seaborn as sns
     %matplotlib inline
     plt.rcParams['figure.figsize'] = [8, 5]  # Bigger images

In this example, we will use a short segment of ECG signal with a sampling rate of 3000 Hz.

Find the R peaks

[5]: # Retrieve ECG data from data folder (sampling rate = 3000 Hz)
     ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']

     # Extract R-peaks locations
     _, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)

The ecg_peaks() function will return a dictionary containing the samples at which R-peaks are located.

Let's visualize the R-peaks location in the signal to make sure they were detected correctly.

[6]: # Visualize R-peaks in ECG signal
     plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)

     # Zooming into the first 5 R-peaks
     plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])

Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using the different methods provided by NeuroKit in ecg_peaks().

Locate other waves (P, Q, S, T) and their onset and offset

In ecg_delineate(), NeuroKit implements different methods to segment the QRS complexes: a derivative-based method and other methods that make use of Wavelets to delineate the complexes.

Peak method

First, let's take a look at the 'peak' method and its output.

[7]: # Delineate the ECG signal
     _, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000)
te75mw8FSNh0qC/dmtHlUHXYzWLRoEff8+tf4vF7uuO02HnjgXgrLa3FExtOtnfFes3HtzfmSdXXYLm8UdVEx5Jx/CZ11bQdfY84FcHFCQsDvKQmDWbFvG9sHTmDclUOk+4I5a319WDr0HDyiK/pr9WTnaHV++tziYs9g6dBzyC5MYqyurRlfsOPm+9kVENfT0LY5nmDxAbjGGOxU41u57frYO492oW7oObRP7M3FJmM35TpS04PcoeeQltSb7pJdoZxJCYNYsW+7bePVzvETjnNR8gn2p92oq8RPEa/XS//+/Vm8eDEZGRmMHj2a999/H0+y/4rCoRntLN+G06EOe3FWIbe/tY4pAzvy2uzRlvvszK+8xs2wP39DlENj91+nW+5TBDL3h738beEObju3N49cKnf9ej0vLN3D01/v5K5JfXhw6sCm/wcJuDw++j/yFWDfeLVz/Nh9LvL5BJkPL7TNqa4Sl8jatWvp27cvvXr3JjIqimuvvZYFCxYQoUGEEAiv8WM2n9f/aC4+obtaWoMIfDjqgqxYU+N/mKHW56PWd/zjfZ/PTYzDS6y7DmoCg8vwBXNquEnx1RDlNN5TXXaOVudn2J/Cn1uMS25uwY6br6rckFNzPUHj+9z++Cb3m5Xbro/t0Lyk+GqItGD86F0Rmsffjy0aq8GcXp/L1vFq5/gJx7noRE67afUfiR+rvWvM4Gtg9K/8B+/dIG9ghs/2P2qK4KOrA59rqobv8OHDdO/enRy3/z7eGRkZrFr1PV2S6/x12DVHDHXYJXv8P5tbh33MdawOOym6ik51ZQx+4B7Q1WEfy9dMLeJdBQXA8TrEnIJ3eWyavw6R9c8F1CHK8AVzpkUv5JvFT1H6WTv45fqAtrJztDo/fW5FJR/yzeK/sL8kAZ6eEdDWjC/Ycev8syeJizHWGzfHEyz+zoJKRt3xPumJ0ab2W9BtD1ErbTb2Genf8c3ip6n7Kg2uXmsqdlOuMzsu5ZvFf8e5KA2ulOsK5cw7+j6PTSuxbbzaOX7CcS46WPheSKfdSHmHrWna65qmHdE0bVuI5zVN0/6tadoeTdO2aJp2lgxvOAj2FcKxdav9DWzcGIXiBOSV1VJR58Zj4W22Smtc7DlSaVl8K1FDVdHakPUOex7wPPBWiOenAf3qH2OBF+t/muZEr6ai4k/8fHz6qb8ay8jI4NChQw2/5+bm0qWr/tIvhSL8HK100hf/6m6t/qO0VorWdBOF4qSR8g5bCPEDUHKCJlcAbwk/a4B2mqZ1keG2m9GjR7N7924O7d+Py+Xigw8+4NJL1QIO8gnDHaLaKC34utKQeHwCp9uHt5XfhDssW9+6d5niBNj1wrsbcKjR “

” ] }, “metadata”: { “needs_background”: “light” }, “output_type”: “display_data” }, { “data”: { “image/png”: “iVBORw0KGgoAAAANSUhEUgAAAewAAAEvCAYAAACHVvJ6AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nOydd3wcxfn/33NFvVmS5SYb27hgG4MBg+kYMMWmB0INiYEEQkkIhISS8E3yggRCSAL5pVACIYUSQgndBIPpxcbGGBdkG1fZ6pbVy93t/P7Y27076U6629s7aaV5v1730mnL7NgfzTw7M8/zjJBSolAoFAqFYnDjGugKKBQKhUKh6B9lsBUKhUKhcADKYCsUCoVC4QCUwVYoFAqFwgEog61QKBQKhQNQBluhUCgUCgfgGegK9EVpaamcOHHiQFdDAWzz+QCY6PUmdF+XrwGATG9J75MVFfrP6dOTqltPGoLFlthbrOOwXbMU6QVKs1hY1RDSr6PS0B5WrlxZL6UcGe3coDbYEydO5NNPPx3oaiiA3+/ZA8ANxcUJ3VezZykAo4oX9D556636z7vuSqpuPVkaLHaBvcU6Dts1S5FeoDSLhVUNIf06Kg3tQQixPea5wZw4Ze7cuVIZbIVCoVAMF4QQK6WUc6OdU2vYCoVCoVA4AGWwFXFxfXU111dXJ3zfjuqn2VH9dPST556rf2zm6XP1z3DHds1SpBcozWJhVUNIv45Kw9QzqNewFYOHJk2zdF9Aa499sqHBYm36pj01xToO2zVLkV6gNIuFVQ0h/TrGq6HP56OyspLOzs6U1MMpZGVlUV5ejjcBh0JlsBUKhUKRNiorK8nPz2fixIkIIQa6OgOClJKGhgYqKyuZNGlS3PepKXGFQqFQpI3Ozk5KSkqGrbEGEEJQUlKS8CyDMtgKhUKhSCvD2VgbWPk/UFPiiriYl51t6b7c7D6me0480WJt+mZSaop1HLZrliK9QGkWC6saQvp1dJqGv/zlL3niiSdwu924XC4efPBBHn74YW688UZmzpxp67Py8vJobW1NuhwVh61QKBSKtLFhwwZmzJgxoHX46KOPuPHGG3n77bfJzMykvr6e7u5uxo4dm5LnxTLY0f4vVBy2QqFQDGM++qqB1i7/QFdj0FBVVUVpaSmZmZkAlJaWMnbsWObPn29m13zkkUeYNm0a8+fP5zvf+Q7XXXcdAIsXL+b73/8+Rx55JJMnT+aZZ54BoLW1lRNPPJGDDz6Y2bNn88ILL9heb2WwFXFxVVUVV1VVJXzf9qrH2V71ePSTCxfqH5t5fKH+Ge7YrlmK9AKlWSysagghHdu6/Fz08Mdc8/gq/YRqd5x88sns3LmTadOmcc011/DOO+9EnN+9ezd33HEHH3/8MW+88QZffvllxPmqqiref/99Xn75ZW655RZAD9N6/vnnWbVqFcuWLeOHP/whds9gqzVsRVx0WfzD06Qv9smODou16Rtfaop1HHZoFtAk7d1+8rO8KdMLlGaxsKohhHTc26H/XL+7ST8xiNrdL15ax/rdzbbWY+bYAn52xqw+r8nLy2PlypW89957LFu2jAsuuIC7777bPL98+XKOO+44ioM53L/+9a+zceNG8/zZZ5+Ny+Vi5syZ1NTUAHqo1m233ca7776Ly+Vi165d1NTUMHr0aNv+bcpgKxSKmPzm9QoeeOcrtvxqkZqOcyjdfiP5ivLMDsftdjN//nzmz5/P7Nmz+fvf/26e629kbEylh1/7+OOPU1dXx8qVK/F6vUycONH25DDKYCsUipg88M5XAHT5Naz7KysGkoCmGxR/ElnTUkV/I+FUUVFRgcvlYurUqQCsXr2affbZh7Vr1wJw2GGHccMNN
9DY2Eh+fj7PPvsss2fP7rPMpqYmysrK8Hq9LFu2jO3bY266ZRllsBUKRb90+QPKYDsUYwToDwzeiKB009rayve+9z327t2Lx+NhypQpPPTQQ5x33nkAjBs3jttuu4158+YxduxYZs6cSWFhYZ9lXnLJJZxxxhnMnTuXOXPmsN9++9leb2WwFXFxXE6Opfvyc6bFPnn66RZr0zfTUlOs47BTsy6/ljK9QGkWC6saQkjH+k7dUPsCwRG2ancccsghfPjhh72Ov/322+b3iy++mCuvvBK/388555zDySefDMBjjz0WcY8RrlVaWspHH30U9Xl2xGCDMtiKOLmsqMjSfaVFR8Y+edNNFmvTN0empljHYadmfk2mTC9QmsXCqoYQ0rE26NTlD06Nq3YXHz//+c9ZunQpnZ2dnHzyyZx99tkDXSVlsBUKRf9omppOdSpacEo8oDRMiHvvvXegq9ALZbAVcbF4924AHkswE9DW3Y8BMGns4t4n58/Xf4ZNQ9nBY8FiF9tbrOOwUzNNypTpBUqzWFjVEEI6alIfGZqpq1W7cywqUkOhUPSLGpw5FzWyHjoog61IG03tPlZs2zPQ1VBYQBvEew4o+kbZ66GDMtiKtPHD/6zm6w98RFNHH9nPFIOSwbxJkKJv1MvW0EEZbEXaWBf0Vm3pVAbbaQQGX84NRZwYDoMqz1kkS5YsYfr06UyZMiUiLelgRjmdKeLilNxcS/cV5oYyGXnd+vthpy/Y+59/ftL1isas1BTrOOzQzECTMmV6gdIsFlY1hJCOVR09Rtiq3REIBLj22mt54403KC8v59BDD+XMM8+0fR9su1EGWxEXF/WT5ScWxYWHmt/dLv0d30zgcM01SdcrGoempljHYYdmBpqUKdMLlGaxsKohhHSUdfWRJ1S7Y/ny5UyZMoXJkycDcOGFF/LCCy8MeoNty5S4EOJRIUStEGJtjPNCCPEHIcRmIcQaIcTBdjxXkT46NI0OC7mINc2HpulT4EF7HdqMoL1d/9iMr13/DHfs0Cx0jJTpBUqzWFjVEEI69vISV+2OXbt2MX78ePP38vJydu3aNYA1ig+7RtiPAX8E/hHj/EJgavAzD/hL8KfCIVxdXQ0kHg+6vVrfV3nS2MXmCLvbGGEvWqT/tDke9PFgscM9HtQOzQw0KVOmFyjNYmFVQwjpqMlFkScGWbsz4rfDmXW+PmL3tYfKDWfOYv3TXg9Pnxd5Lp7nR3OiFGLwr/LbMsKWUr4L9BWvcxbwD6nzMVAkhBhjx7MVzsEVbBCh7f4UTkF5GjsXpV1vysvL2blzp/l7ZWUlYy28FKWbdK1hjwN2hv1eGTxWlabnKwYBvUbYCsegOn3nMgh31YygrxGxN6fv8zml1mZlDj30UDZt2sTWrVsZN24cTz31FE888UTiBaWZdBnsaHMNUXsAIcSVwJUAEyZMSGWdFGnGNNhqhO04VPIN5xIIvmw5Yco3XXg8Hv74xz9yyimnEAgEuPzyy5k1a2D25k6EdBnsSmB82O/lwO5oF0opHwIeApg7d67qJoYQxiBNbSThPJRmzkUlvYnOokWLWLQoygL5ICZdBvtF4DohxFPozmZNUko1He4gzsrPt3RfUf6cXseMN34WL06iRrGZk5piHYftmqVIL1CaxcKqhhDSMdAjqku1O+dii8EWQjwJzAdKhRCVwM8AL4CU8gHgVWARsBloBy6z47mK9HGOxY5jRFjnL+mxzZ/qOFKKHZoZSIky2AOAVQ0hpKMme0xmqnbnWGwx2FLKi/o5L4Fr7XiWYmBoDAQAGOF2J3SfP6AHZnrcOaEpceNLffDVv7TUljoatAeLzbG3WMdhh2YGmpQp0wuUZrGwqiGEdDTam7mCrdqdY1GZzhRxcUNNDZB4POjOmqcBPabXsNOmk/h5wQBKm+NBjbjM4R7Ta4dmBpokZXqB0iwWVjWEkI6aXBB5QrU7x6I2/1CkDcP1RYUIOQ/ldOZcVBTl0EEZbEXaMLxVV
efvPNRLlnNR2g0dlMFWpJ2A6kAch3rHci7m9poqDNvk8ssvp6ysjP3333+gq5IQymAr0o4aYTuPXhtIKByDkq43ixcvZsmSJQNdjYRRTmeKuLigoMDSfcUFc83vIaez4Jerr062WlGZm5piHYcdmhlIKVOmFyjNYmFVQwjp2GtKXLU7jj32WLZt2zbQ1UgYZbAVcbEwL8/SfYV5oSknMw7b6D8uuCDZakVl/9QU6zjs0MxAk6RML1CaxcKqhhDSUZPbIk+odudYlMFWxEWV3w/AGE9ifzI+fxMAXk9h79Skxm4548dHudM6TcFiC+0t1nHYoZlBQMqU6QVKs1hY1RBCOvZaghpk7W7x7t5Zqk/JzeWiwkI6NM3cYjScs/LzOSc/n8ZAwAx9M7ASAucUlMFWxMWttbVA4o2hsvZ5IDKm13Q6u/RS/afN8aDPB4sd7vGgdmompUyZXqA0i4VVDSGkY0AeF3lCtTvHogy2Im0Y7/nKgcl5qNAg5yLNTGeD0028r5eRbJerz/Mj3O4hPaLuifISV6QNFYftXFTyDeeiXpB7c9FFF3HEEUdQUVFBeXk5jzzyyEBXKS7UCFuRNkKZzga0GgoLqBG2c1HtrTdPPvnkQFfBEmqErUgbZliX6vwdh9pT2bmol62hgxphK+LiW4WF/V8UhdLCI8zvxtScOSX+wx8mXa9oHJGaYh2HHZoZBDRSphcozWJhVUMI6dhrCUq1O8eiDLYiLo7PzbV0X37udPO7YbDNEfYZZyRdr2hMT02xjsMOzQw0KVOmFyjNYmFVQwjpGJAbI0+odudYlMFWxMXW7m4AJmVkJHRfV7e+SW5mRil+TfdcMt/4Kyr0n9N7G4hkqA8WW2pvsY7DDs0MpJQp0wuUZrGwqiGEdDSam5G4SLU756IMtiIufhHc9D7REIrd9S8Dekyv4Wlseq1edZX+0+Z40JeDxQ73eFA7NDPQJCnTC5RmsbCqIYR01LTg1LgxM67anWNRTmeKtBEIjrCV05nzUKFBzsVwOlPOZ85HGWxF2vD3dDpTOAbV2TsX4 “text/plain”: [ “
” ] }, “metadata”: { “needs_background”: “light” }, “output_type”: “display_data” } ], “source”: [ “# Visualize the T-peaks, P-peaks, Q-peaks and S-peaksn”, “plot = nk.events_plot([waves_peak[‘ECG_T_Peaks’], n”, ” waves_peak[‘ECG_P_Peaks’],n”, ” waves_peak[‘ECG_Q_Peaks’],n”, ” waves_peak[‘ECG_S_Peaks’]], ecg_signal)n”, “n”, “# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S- peaksn”, “plot = nk.events_plot([waves_peak[‘ECG_T_Peaks’][:3], n”, ” waves_peak[‘ECG_P_Peaks’][:3],n”, ” waves_peak[‘ECG_Q_Peaks’][:3],n”, ” waves_peak[‘ECG_S_Peaks’][:3]], ecg_signal[:12500])n” ] }, { “cell_type”: “markdown”, “metadata”: {}, “source”: [

5.15. Extract and Visualize Individual Heartbeats 77 NeuroKit2, Release 0.0.39

Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least for the first few complexes. Well done, peak!

However, it can be quite tiring to zoom into each complex and inspect them one by one. To get a better overview of all complexes at once, you can make use of the show argument in ecg_delineate() (https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_delineate), as below.

```python
# Delineate the ECG signal, visualizing all peaks of the ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='peaks')
```

[Figure: all delineated P-, Q-, S- and T-peaks overlaid on the ECG signal.]

The 'peak' method does a remarkable job of identifying all the ECG peaks for this piece of ECG signal.

On top of the above peaks, the 'peak' method also identifies the wave boundaries, namely the onsets of the P-peaks and the offsets of the T-peaks. You can vary the show_type argument to specify the information you would like to plot.

Let's visualize them below:
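As a side note, the waves returned by ecg_delineate() are lists of sample indices, one entry per heartbeat (with NaN where a wave could not be located). A minimal sketch of converting such indices to timestamps, using a hypothetical dictionary rather than real delineation output, could look like this:

```python
# Sketch: convert delineated sample indices to seconds.
# The dictionary below is hypothetical example data, not real ecg_delineate() output.
import math

sampling_rate = 3000  # Hz, as used in this example

waves_peak = {
    "ECG_P_Peaks": [1520, 4610, 7705],
    "ECG_T_Peaks": [2980, 6070, float("nan")],  # NaN: wave not found in that beat
}

def samples_to_seconds(indices, sampling_rate):
    """Convert sample indices to seconds, preserving NaNs."""
    return [i / sampling_rate
            if not (isinstance(i, float) and math.isnan(i)) else float("nan")
            for i in indices]

p_times = samples_to_seconds(waves_peak["ECG_P_Peaks"], sampling_rate)
print([round(t, 4) for t in p_times])  # → [0.5067, 1.5367, 2.5683]
```

The same conversion applies to any of the `ECG_*` index lists, which is convenient when relating delineated waves to event markers recorded in seconds.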


```python
# Delineate the ECG signal, visualizing all P-peak boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_P')
```

[Figure: P-peak onsets and offsets marked on the ECG signal.]
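With wave boundaries in hand, a natural follow-up is to estimate per-beat wave durations. The sketch below pairs hypothetical onset/offset sample indices (following the same sample-index convention as the delineation output; the helper name is our own) to compute P-wave durations in milliseconds:

```python
# Sketch: P-wave duration (in milliseconds) from delineated boundaries.
# The onset/offset indices below are hypothetical example data.
sampling_rate = 3000  # Hz

p_onsets = [1400, 4490, 7580]
p_offsets = [1640, 4735, 7820]

def wave_durations_ms(onsets, offsets, sampling_rate):
    """Pairwise (offset - onset), converted from samples to milliseconds."""
    return [(off - on) / sampling_rate * 1000 for on, off in zip(onsets, offsets)]

durations = wave_durations_ms(p_onsets, p_offsets, sampling_rate)
print([round(d, 1) for d in durations])  # → [80.0, 81.7, 80.0]
```

The same arithmetic works for T-wave boundaries, which is one reason the boundary outputs are kept as plain index lists.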
```python
# Delineate the ECG signal, visualizing all T-peak boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, show=True, show_type='bounds_T')
```

[Figure: T-peak onsets and offsets marked on the ECG signal.]

Both the onsets of the P-peaks and the offsets of the T-peaks appear to have been correctly identified here. This information will be used to delineate cardiac phases in ecg_phase() (https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_phase).

Let's next take a look at the continuous wavelet method.

### Continuous Wavelet Method (CWT)
5.15. Extract and Visualize Individual Heartbeats 79 NeuroKit2, Release 0.0.39

```python
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
```

[Figure: all waves delineated by the CWT method overlaid on the ECG signal.]

By specifying 'all' in the show_type argument, you can plot all the delineated information output by the cwt method. However, it can be hard to evaluate the accuracy of the delineated information with everything plotted together. Let's tease them apart!
1pCehG9+DGKD8JrHoHtkq6/2rrT1gxFCbEOmS9x1XWzbplKpbPUlCXFr2Wl45j9CrQiJ3RDtr4c11P+O9tcfrxXrx2VvXWlMbBwJbCE2QKlUwrZtfD4fSikJbNG5HLveslZWPZhXE+2vH/fsk/XXiU0lgS3EBjBj2H6/H8uyZAxbdK7ki/Vu8NuFtRHth8J8/XVN2Oj9sE+dOuV97r6+PsbHxzl+/DhvectbVjxe9sMWQgDXl3UppdBa37DMS4iOcuHL9THrZgRicPErMHxszS/Z6P2wG2eDv/e97+XHf/zH+Zmf+ZlVr+mu3A9bKfVJpdS8Uur0LZ5XSqk/UUpdUEo9r5R6ZTvOK0SnqlQq3g5IjuPgOA62LV2IosO4Liych0hfc6+L9kPyXP31LdjI/bCbcbfth/0p4KPAX97i+bcDB5f/vBr4z8t/C7EtmcAOhUIopbBtm2q1SjC49h2RhNhwzvJQTbN7tSsF6PrrrbVVPdvs/bCbcVfth621fkYptXeVQ94B/KWu17p7VimVUErt1FrPtOP8QnSaxklmPp/PC2whOopv+QZS6+ZCW2tAXX/9GmzWftjNkP2wVzYKNO4tNrX82MsCWyn1KPAowO7duzfl4oRot2q1XovZtA5c15WZ4qLzWBYMvAIyU2ufdAZQTNWLqbTY+t2I/bCbIfthr2ylW7cVtwnTWj+ltT6htT5htlgT4k5jWtOBQADLsnBdVzYAEZ3pwFugWmjuNbVCvfJZi2Q/7OZsVmBPAbsaPh4DZOW92LbMm088Hsfvr3dkSWCLjjR4uF7BrJha2/HFFMSG6q9rguyH3UH7YS+PYf93rfXL5vkrpf4V8DjwY9Qnm/2J1vqh231O2Q9b3Kn+/M//nFQqxSOPPMKZM2dYWFjgJ37iJ7jvvvu2+tLEXeh2+2F7lc5uVzylmALtwiP/VsqTtkGz+2G3ZQxbKfVfgTcAA0qpKeDfAQEArfWTwNPUw/oCUAR+qR3nFaJTmTvwWCxGIBAAkC02RefqHqmH8LNPQnoCAtHr5Um1rgd1rVBvWUst8S3TrlniP3eb5zXwq+04lxB3ArPmOhKJEA7Xl70Ui8WtvCQhVtc9Am/+vXoFs4tfqa+zZnk2eIft1iX7YQsh2sZUOQuFQoRCIbTWMktcbClTyGdVPn+9etnwsXpRFKdaX7rVYRX6tsN+2OsZju6s74IQ24Drul6XeCgUIharl32USWdiq4TDYVKpVHMhYVkQCHdcWG8HWmtSqZTX+7ZW0sIWos3MtpqWZREMBr31nNLCFltlbGyMqakpksnkVl+KWBYOhxkbG2vqNRLYQrRZrVbzuh8DgQDRaBSQwBZbJxAIMD4+vtWXIVokfR1CtJlt22it8fl8+P1+rzKTbP4hhGiFBLYQbeY4jjdW6Pf7CYfDKKWklrgQoiUS2EK0WbVaxXVd/H4/Pp/PG8OWFrYQohUS2EK0mZkNblkWlmV5LeyVyhkKIcRaSWAL0WZmpy6fz+etxQYksIUQLZHAFqLNTAvbBHYgEJAWthCiZRLYQrSZ2dje5/N5fyulNnxbQCHE9iaBLUSbmS5xs+mHUgqlFK7rbvGVCSHuZBLYQrSZWb5lAhvqrWxpYQ “text/plain”: [ “
” ] }, “metadata”: { “needs_background”: “light” }, “output_type”: “display_data” } ], “source”: [ “# Visualize P-peaks and T-peaksn”, “signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type=’peaks’)” ] }, { “cell_type”: “code”, “execution_count”: 15, “metadata”: {}, “outputs”: [ { “data”: { “image/png”: “iVBORw0KGgoAAAANSUhEUgAAAewAAAExCAYAAAC+ipGRAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nOzdeZBcd33v/ffvnN67Z18kjUarNdZiWZaxbBxcNrbZDCaBWySpEOA+BhyXeTAkf5hgCsJNgVMX7iWEhxgw4BgXlScXErKQhwChSAhmscGyLSzJlm1Z62i2nqV7ej99+vyeP1q/45Y8Gk3PnNGMRt9XlUqa7tN9ujUz/Tm/7ftTWmuEEEIIsbxZS/0ChBBCCHF+EthCCCHERUACWwghhLgISGALIYQQFwEJbCGEEOIiIIEthBBCXAQCCWyl1MNKqTGl1IFz3P8updQzp//8Uil1VRDnFUIIIS4VQbWwHwFum+X+o8Brtda7gE8DXwvovEIIIcQlIRTEk2itH1VKbZzl/l82fPk40D+X5+3u7tYbN57zaYUQQogV5cknnxzXWvfMdF8ggd2k9wM/ONedSqm7gLsA1q9fz969ey/U6xJCCCGWlFLq+Lnuu6CTzpRSt1AP7I+e6xit9de01nu01nt6ema8yBBCCCEuOResha2U2gU8BLxZaz1xoc4rhBBCrAQXpIWtlFoP/BPwHq31CxfinEIIIcRKEkgLWyn1f4CbgW6l1CDwP4AwgNb6QeCTQBfwZaUUgKu13hPEuYUQQtRVq1UGBwcpl8tL/VLEecRiMfr7+wmHw3N+TFCzxN95nvvvBO4M4lxCCCFmNjg4SEtLCxs3buR04+i8PE/j1DwitoVlze0xYmG01kxMTDA4OMimTZvm/LilmCUuhBBiEZTL5TmFtVvzeHEsz09fSHN4LO/fPtCb4qbLexjoTRGypRDmYlFK0dXVRTqdbupxEthCCLGCnC+sR7JlvvGLo4znKyQiIfraYiil0FozOFXioZ8doTsV5b03bGJ1W+wCvepLz1x7QBrJJZQQQlwiRrJlHvjPFyk5Nfo7EnQmI35wKKXoTEbo70hQcmo88J8vMpKVsfDlRAJbCCEuAW7N4xu/OIqlFB3JyKzHdiQjWErxjV8cxa15F+gVivORwBZiEVUqFTxPPvDE0ntxLM94vnLesDY6khHG8xVebBjjngvbttm9e7f/5zOf+QxQn8F+3333MTAwwM6dO7nuuuv4wQ/qRS/z+Twf+MAHuOyyy7j66qu55ppr+PrXvz7j8+/fv99/7s7OTjZt2sTu3bt5/etff87XdPDgQW699VYuv/xyBgYG+PSnP43Wuqn3dT6PPPIIQ0NDgT7n2WQMW4hF4rouIyMjpFIpurq6lvrliEvcT19Ik4g095GfiIR49IU029e0zvkx8Xicffv2veL2P/uzP2N4eJgDBw4QjUYZHR3lpz/9KQB33nknmzdv5sUXX8SyLNLpNA8//PCMz3/llVf6z3/HHXfw1re+ld/93d895+splUr8zu/8Dl/5yld44xvfSLFY5B3veAdf/vKX+eAHPzjn93U+jzzyCDt37qSvry+w5zybtLCFWCSFQoFjx44xODi4
1C9FXOI8T3N4LE9HYu5rfgE6EmFeHMvjeQtrjRaLRb7+9a/z13/910SjUQBWrVrF7//+7/PSSy/x61//mvvvvx/LqkdST08PH/3oOStYN+Xv/u7vuOGGG3jjG98IQCKR4IEHHvBb/n/+53/O+973Pm6++WY2b97MF7/4RaD++3v77bdz1VVXsXPnTr797W8D8OSTT/La176Wa665hje96U0MDw/zne98h7179/Kud72L3bt3UyqVuO+++9ixYwe7du3i3nvvDeS9SAtbiEUyOTnJxMQE+XyeXbt2zWtWqBBBcE6PQzf7M2iOd2oeMcue02NKpRK7d+/2v/7Yxz7G9u3bWb9+Pa2tr2ypHzx4kKuuusoP66AdPHiQa6655ozbLrvsMvL5PNPT0wAcOnSIn/zkJ+RyObZu3coHPvABfvjDH9LX18e//du/AZDNZqlWq3zoQx/iu9/9Lj09PXz729/m4x//OA8//DAPPPAAn/vc59izZw+Tk5P88z//M4cOHUIpRSaTCeS9SGALsUimp6dxHAfXdanVaoRC8usmlkbk9JpqrXVToW3GeSNNrMmeqUv8mWeemfPj/+Iv/oJ/+Id/YGxsLJAx4dnes7n99ttvJxqNEo1G6e3tZXR0lCuvvJJ7772Xj370o7z1rW/lxhtv5MCBAxw4cIA3vOENANRqNdasWfOK521tbSUWi3HnnXdy++2389a3vnXB7wOkS1yIRZPP56lUKpTLZQqFwlK/HHEJsyzFlt4UU8VqU4+bKlYZ6E0tuALali1bOHHiBLlc7hX37dixg9/85jf+5MyPf/zj7Nu3z2/9LtQVV1zxim2ajxw5QiqVoqWlBcDvpof6pDnXdbn88st58sknufLKK/nYxz7Gpz71KbTWXHHFFezbt499+/axf/9+fvSjH73inKFQiF//+te84x3v4F/+5V+47bbbAnkvEthCLJJcLkelUqFarVIsFpf65YhL3Gsv76HouE09pui43HT5wrc5TiQSvP/97+fDH/4wjuMAMDw8zN/+7d+yZcsW9uzZwyc+8QlqtRpQr9gW1Czud73rXfz85z/nxz/+MVDvsv/whz/Mn/7pn876uKGhIRKJBO9+97u59957eeqpp9i6dSvpdJrHHnsMqM98P3jwIAAtLS3+BUk+nyebzfKWt7yFL3zhCzNOwpsP6aMTYpEUi0WUUtRqNQlsseQGelN0p6JMFZw5Le2aKjh0p6IM9KaaOs/ZY9i33XYbn/nMZ7j//vv5xCc+wY4dO4jFYiSTST71qU8B8NBDD/GRj3yELVu20NnZSTwe57Of/Wxzb/Ac4vE43/3ud/nQhz7EBz/4QWq1Gu95z3u45557Zn3c/v37+chHPoJlWYTDYb7yla8QiUT4zne+w4c//GGy2Syu6/Inf/InXHHFFdxxxx3cfffdxONxfvCDH/C2t73Nv/D4q7/6q0Deiwp6LVqQ9uzZo8/uyhDiYqC15lvf+hbVapVyucy1117L1VdfvdQvS6xwzz33HNu3bz/n/abS2fmKp0wVHDytuefWASlPuohm+n4ppZ48126W0iUuxCLwPI9qteqPjcl2h2I5WN0W455bB4hHbAanikwWHL/rWWvNZMFhcKpIPGJLWC9D0iUuxCKoVqt4nodlWSilKJVKS/2ShADqof2RN23lxbE8j76QPqOS2XLbrWv//v285z3vOeO2aDTKr371qwUde7GSwBZiEZTLZVzXRSmF53n+RBshloOQbbF9TSvb17Ti1Wo4TplIJIZlz22t9YXSWNUsyGMvVhLYQiyCcrmM53n+HwlssazUXEgfgsM/xhp/Ab/ju2crXPY66NkGtsTDciPfESEWgeu6VKtVstmsv7RLiGVheggefxAKaYgkobUflAKtIXMSHvsSJHvg+ruhdfHqYovmLf0ghRArkOkS9zwP13WpVCpL/ZKEqIf1o5+DahHa10Oiqx7WUP870VW/vVqsHze9uLtPieZIYAuxCEqlEq7r
Yts2SikJbLH0am69Za2sejDPJtFVP+7xB+uPE8uCBLYQi8CMYYdCISzLkjFssfTSh+rd4OcLayPRBYWx+uOacLHth12pVHj961/P7t27+fa3v83PfvYzrrjiCn/XrWZ84QtfWNQiSTKGLcQiMMu6lFJorc9Y5iXEkjj84/qYdTPCSXjpP2D1zjk/5GLbD/vpp5+mWq36z3n33Xdz77338t73vnfO79n4whe+wLvf/W4SiUTTj52LQD49lFIPK6XGlFIHznG/Ukp9USl1WCn1jFLqVUGcV4jlqlKp+LsE1Wo1arUaritdi2KJeB6MvwDxzuYel+iC9PP1xy/Act0Pe2xsjHe/+93s27eP3bt389WvfpW///u/51Of+hTvete7GB4e5qabbmL37t3s3LmTn/3sZwD86Ec/4rd+67d41atexe/93u+Rz+f54he/yNDQELfccgu33HILtVqNO+64g507d3LllVcGUp40qBb2I8ADwDfPcf+bgYHTf14NfOX030KsSCawo9EoSilc18VxHCKR89dwFiJwtdNDMs3uya4UoOuPt+ZW9exi2g87Fovx0EMP8bnPfY7vfe97ADz22GN+q/0v//IvedOb3sTHP/5xf0+A8fFx7r//fn784x+TTCb57Gc/y+c//3k++clP8vnPf56f/OQndHd38+STT3Lq1CkOHKi3Y4PYEzuQwNZaP6qU2jjLIW8DvqnrgwaPK6XalVJrtNbDQZxfiOWmcZKZ2a5PxrHFkrFPXyhq3Vxoaw2olx8/Bxfjftjncu211/K+972ParXK29/+dnbv3s1Pf/pTnn32WW644QYAHMfht37rt17x2M2bN3PkyBE+9KEPcfvtt/st/IW4UANqa4GTDV8Pnr7tFZRSdyml9iql9qbT6Qvy4oQImuPUazSbVoPneTJTXCwdy4Luy6E02dzjihP1YioLbP0u9/2wz+Wmm27i0UcfZe3atbznPe/hm9/8Jlpr3vCGN/h7Yj/77LP8zd/8zSse29HRwW9+8xtuvvlmvvSlL3HnnXcu+L1cqMCe6TJmxm3CtNZf01rv0Vrv6elZ+D6sQiwF05oOh8NYloXnebIBiFhaW14PTqG5x1QL9cpnC3Qx7ocNcPz4cXp7e/mjP/oj3v/+9/PUU09x/fXX84tf/ILDhw8D9fH5F154AThzT+zx8XE8z+Md73gHn/70p3nqqacW/F4uVGAPAusavu4HZEW+WLHMh1IqlSIUqo88SWCLJdWzrV7BrDgxt+OLE5DsrT+uCWYM2/y57777ALj//vvp6elhx44d7Ny5k7e//e2 “text/plain”: [ “

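Besides the plot, ecg_delineate() also returns the delineated locations as a dictionary of sample indices (waves_cwt above). Below is a minimal sketch of how such a dictionary can be inspected; the key names follow the NeuroKit2 documentation, but the dictionary itself is mocked here with invented indices rather than taken from a real recording:

```python
import math

# Mocked excerpt of the dict returned by nk.ecg_delineate()
# (hypothetical sample indices, recorded at 3000 Hz)
waves_cwt = {
    "ECG_P_Peaks": [1050, float("nan"), 7150],
    "ECG_T_Peaks": [2010, 5020, 8060],
}

sampling_rate = 3000

# Delineation can fail on individual beats, leaving NaN entries,
# so filter them out before converting indices to seconds
p_peaks = [p for p in waves_cwt["ECG_P_Peaks"] if not math.isnan(p)]
p_times = [p / sampling_rate for p in p_peaks]

print(len(p_peaks))  # 2 P-peaks successfully delineated
print(p_times[0])    # 0.35 (seconds)
```

The same pattern applies to every key in the dictionary, whichever delineation method produced it.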

```python
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
```

[Figure: ECG signal with T-wave onsets and offsets marked]
```python
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
```

[Figure: ECG signal with P-wave onsets and offsets marked]
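The onset/offset pairs can be turned into wave durations. A rough sketch: the "ECG_P_Onsets"/"ECG_P_Offsets" key names are those documented for NeuroKit2's cwt method, but the indices below are invented for illustration:

```python
sampling_rate = 3000

# Hypothetical P-wave onset/offset sample indices for three beats,
# as would be read from waves_cwt["ECG_P_Onsets"] / ["ECG_P_Offsets"]
p_onsets = [900, 3890, 6885]
p_offsets = [1200, 4205, 7215]

# Duration of each P-wave, converted from samples to milliseconds
p_durations_ms = [(off - on) * 1000 / sampling_rate
                  for on, off in zip(p_onsets, p_offsets)]

print(p_durations_ms)  # [100.0, 105.0, 110.0]
```

In practice, NaN entries should be filtered out first, as some beats may not yield a clean boundary.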

```python
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
```

[Figure: ECG signal with R-wave onsets and offsets marked]

Unlike the peak method, the continuous wavelet method does not identify the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves.

Visually, except for a few exceptions, the CWT method is doing a great job. However, the P-wave boundaries are not very clearly identified here.

Last but not least, we will look at the third method in the NeuroKit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_delineate) function: the discrete wavelet method.

### Discrete Wavelet Method (DWT)
```python
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
```

[Figure: ECG signal with all DWT-delineated waves overlaid]
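With both delineations in hand, the disagreement between the two methods can be quantified, e.g. as the mean absolute difference between their T-peak locations. A sketch with invented indices (in practice they would come from waves_cwt and waves_dwt):

```python
sampling_rate = 3000

# Hypothetical T-peak sample indices for the same three beats
t_peaks_cwt = [2010, 5025, 8040]
t_peaks_dwt = [2004, 5031, 8046]

# Per-beat disagreement, converted from samples to milliseconds
diffs_ms = [abs(a - b) * 1000 / sampling_rate
            for a, b in zip(t_peaks_cwt, t_peaks_dwt)]
mean_diff_ms = sum(diffs_ms) / len(diffs_ms)

print(mean_diff_ms)  # 2.0
```

At a 3000 Hz sampling rate each sample is only a third of a millisecond, so even small index offsets translate into sub-millisecond timing differences.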

```python
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
```

[Figure: ECG signal with DWT-delineated P-peaks and T-peaks marked]
ALsQwqlQqO42DbNkopCWyx8lyn2bJWVjOYLyXR3zzuR480HydWBQlsIZaBGcMOhUJYliVj2GLlZY40u8EvF9ZGoh9KU83HteFq2w+7Vqvxute9jj179vC1r32N7373u9x0003+rlvt+OxnP7usRZJkDFuIZWCWdSml0Fqft8xLiBVx7FvNMet2hDvg+L/B4K4FP+Rq2w/7pz/9KY1Gw3/OBx54gAcffJB3v/vdC/6ajc9+9rO8853vJJFItP3YhQjk3UMp9ZhSakopdfAi9yul1OeUUseUUs8opV4exOsKsVrVajV/lyDXdXFdF8eRrkWxQjwPpo9CvK+9xyX6IfNc8/FLsFr3w56amuKd73wn+/fvZ8+ePfzJn/wJf/VXf8XHP/5x3vGOdzA+Ps4dd9zBnj172LVrF9/97ncB+Nd//Vde9apX8fKXv5xf+qVfolgs8rnPfY6zZ89y1113cdddd+G6Lvfddx+7du3i5ptvDqQ8aVAt7MeBzwNfucj9bwK2nvv4GeBL5/4V4ppkAjsajaKUwnEc6vU6kcjlazgLETj33JBMu3uyKwXo5uOthVU9u5r2w47FYjz66KN8+tOf5h//8R8B+OEPf+i32v/wD/+QN77xjXz0ox/19wSYnp7m4Ycf5lvf+hYdHR186lOf4jOf+Qwf+9jH+MxnPsN//Md/kEqlePLJJzlz5gwHDzbbsUHsiR1IYGutv6OU2nSJQ94KfEU3Bw1+pJTqUUqt1VqPB/H6Qqw2rZPMzHZ9Mo4tVox97kJR6/ZCW2tAvfj4Bbga98O+mNtuu433vOc9NBoNfuEXfoE9e/bw7W9/m2effZZXv/rVANTrdV71qle95LGbN2/mxIkTfPCDH+Tee+/1W/hLcaUG1NYBoy2fj5277SWUUu9TSu1TSu3LZDJX5OSECFq93qzRbFoNnufJTHGxciwLUjdCZaa9x5WzzWIqS2z9rvb9sC/mjjvu4Dvf+Q7r1q3jXe96F1/5ylfQWvP617/e3xP72Wef5c/+7M9e8tje3l6efvpp7rzzTr7whS9w//33L/lruVKBPd9lzLzbhGmtv6y13qu13ptOL30fViFWgmlNh8NhLMvC8zzZAESsrC2vg3qpvcc0Ss3KZ0t0Ne6HDXDq1CkGBgb4tV/7Nd773vfy1FNP8cpXvpLvf//7HDt2DGiOzx89ehQ4f0/s6elpPM/jbW97G5/4xCd46qmnlvy1XKnAHgPWt3w+DMiKfHHNMm9KyWSSUKg58iSBLVZUenuzglk5u7Djy1noGGg+rg1mDNt8PPTQQwA8/PDDp “text/plain”: [ “

” ] }, “metadata”: { “needs_background”: “light” }, “output_type”: “display_data” } ], “source”: [ “# visualize T-wave boundariesn”, “signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type=’bounds_T’)” ] }, { “cell_type”: “code”, “execution_count”: 21, “metadata”: {}, “outputs”: [ { “data”: { “image/png”: “iVBORw0KGgoAAAANSUhEUgAAAewAAAExCAYAAAC+ipGRAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nOzdeZScd33n+/fveWqv6r2r1epu7WprsSwLWzYExsZmMTZm4uQmkwkh3EMC40NODJlzrhnMJSFzgucE7pAMQwxxiK+HmzszmEySCSQxkAsh7B6QjW1JtiRr732vfX2e53f/KP0el+RWq6u7Wt0tfV/n9LG66ql+nnZ31+f5bd+f0lojhBBCiLXNWu0LEEIIIcSVSWALIYQQ64AEthBCCLEOSGALIYQQ64AEthBCCLEOSGALIYQQ60BTAlsp9aRSalIpdeQyz79HKfXihY8fKaVubsZ5hRBCiOtFs1rYXwLuXeD5M8Cbtdb7gU8CX2zSeYUQQojrQqAZX0Rr/T2l1NYFnv9R3afPAAOL+brd3d1669bLflkhhBDimvLss89Oa62T8z3XlMBu0PuBr1/uSaXUg8CDAJs3b+bQoUNX67qEEEKIVaWUOne5567qpDOl1N3UAvujlztGa/1FrfVBrfXBZHLemwwhhBDiunPVWthKqf3AE8B9WuuZq3VeIYQQ4lpwVVrYSqnNwN8A79Van7ga5xRCCCGuJU1pYSulvgzcBXQrpYaB3weCAFrrx4FPAF3AF5RSAI7W+mAzzi2EEKKmWq0yPDxMqVRa7UsRVxCJRBgYGCAYDC76Nc2aJf7uKzz/AeADzTiXEEKI+Q0PD9PS0sLWrVu50Di6Is/TVFyPkG1hWYt7jVgerTUzMzMMDw+zbdu2Rb9uNWaJCyGEWAGlUmlRYe24Hq9M5vjuiSlOTub8xwd7Etx5Q5LBngQBWwphrhSlFF1dXUxNTTX0OglsIYS4hlwprMfTJf7LD88wnSsTCwXoa4uglEJrzfBckSe+f5ruRJjfeNM2etsiV+mqrz+L7QGpJ7dQQghxnRhPl3jsn16hWHEZ6IjRGQ/5waGUojMeYqAjRrHi8tg/vcJ4WsbC1xIJbCGEuA44rsd/+eEZLKXoiIcWPLYjHsJSiv/ywzM4rneVrlBciQS2ECuoXC7jefKGJ1bfK5M5pnPlK4a10REPMZ0r80rdGPdi2LbNgQMH/I9PfepTQG0G+yOPPMLg4CD79u3j9ttv5+tfrxW9zOVy/NZv/RY7duzgda97Hbfeeit//ud/ftlznD17lmg0yoEDB9i7dy8f/OAHF/w7O3r0KG95y1u44YYbGBwc5JOf/CRa64a+ryv50pe+xOjoaFO/5qUksIVYIY7jMD4+ztzc3GpfihB898QUsVBj05ZioQDfO9HYxKhoNMrzzz/vfzzyyCMA/N7v/R5jY2McOXKEI0eO8Hd/93dks1kAPvCBD9DR0cErr7zCz372M77xjW8wOzu74Hl27NjB888/z4svvshLL73E3/7t3857XLFY5Od//ud55JFHOHHiBC+88AI/+tGP+MIXvtDQ93UlEthCrGP5fJ6zZ88yPDy82pcirnOepzk5maMjtvg1vwA
dsSCvTObwvOW1RguFAn/+53/On/zJnxAOhwHYsGEDv/Irv8KpU6f4yU9+wqOPPopl1SIpmUzy0Y9etoL1RQKBAG984xs5efLkvM//9//+33nTm97EPffcA0AsFuOxxx7zW/7//t//e37zN3+Tu+66i+3bt/O5z30OqP393n///dx8883s27ePr3zlKwA8++yzvPnNb+bWW2/lHe94B2NjY/zVX/0Vhw4d4j3veQ8HDhygWCzyyCOPsHfvXvbv38/DDz+89P959d9rU76KEOI1ZmdnmZmZIZfLsX///iXNChWiGSoXxqEb/R00x1dcj4hlL+o1xWKRAwcO+J9/7GMfY8+ePWzevJnW1tbXHH/06FFuvvlmP6wbVSgU+Pa3v80f/MEfzPv80aNHufXWWy96bMeOHeRyOTKZDADHjh3jO9/5Dtlsll27dvFbv/VbfOMb36Cvr49/+Id/ACCdTlOtVvnQhz7EV7/6VZLJJF/5ylf4+Mc/zpNPPsljjz3GZz7zGQ4ePMjs7Cz/83/+T44dO4ZSilQqtaTv7VIS2EKskEwmQ6VSwXEcXNclEJA/N7E6QhfWVGutGwptM84bamBNtukSr/fiiy8u+vX/4T/8B/7H//gfTE5OLtjFfOrUKQ4cOIBSigceeID77rtv3uMW+p7N4/fffz/hcJhwOExPTw8TExPcdNNNPPzww3z0ox/lXe96F3fccYffnf/2t78dANd12bhx42u+bmtrK5FIhA984APcf//9vOtd71r0978QeQcRYoXkcjnK5TJKKfL5PG1tbat9SeI6ZVmKnT0JRuaKdC5y0hnAXKHKYE9i2RXQdu7cyfnz58lms7S0tFz03N69e3nhhRfwPA/Lsvj4xz/Oxz/+cRKJxIJf04xhX8mNN97I9773vYseO336NIlEwr8W000PtUlzjuNwww038Oyzz/L000/zsY99jHvuuYdf/MVf5MYbb+THP/7xgucMBAL85Cc/4dvf/jZPPfUUjz32GP/0T/90xWu9EhnDFmKFZLNZyuUy1WqVQqGw2pcjrnNvviFJoeI09JpCxeHOG5a/zXEsFuP9738/H/7wh6lUKgCMjY3xX//rf2Xnzp0cPHiQ3/3d38V1XaBWsa1Zs7jf85738IMf/IBvfetbQK3L/sMf/jD/7t/9uwVfNzo6SiwW49d//dd5+OGHee6559i1axdTU1N+YFerVY4ePQpAS0uLP4kul8uRTqd55zvfyWc/+9lF3VgshrSwhVghhUIBpRSu60pgi1U32JOgOxFmLl9Z1NKuuXyF7kSYwZ6FW7qXunQM+9577+VTn/oUjz76KL/7u7/L3r17iUQixONxf9z5iSee4CMf+Qg7d+6ks7OTaDTKpz/96ca+wcuIRqN89atf5UMf+hC//du/jeu6vPe97+Whhx5a8HWHDx/mIx/5CJZlEQwG+dM//VNCoRB/9Vd/xYc//GHS6TSO4/Bv/+2/5cYbb+R973sfH/zgB4lGo3z961/ngQce8G88/tN/+k9N+V5Us9eiNdPBgwf1oUOHVvsyhGiY1pqnnnqKarVKqVTitttu43Wve91qX5a4xr388svs2bPnss+bSmdXKp4yl6/gac1DbxmU8qQraL6fl1Lq2cvtZild4kKsAM/zqFar/tiYbHco1oLetggPvWWQaMhmeK7AbL7idz1rrZnNVxieKxAN2RLWa5B0iQuxAqrVqj+JRilFsVhc7UsSAqiF9kfesYtXJnN878TURZXM1tpuXYcPH+a9733vRY+Fw2H+1//6X8s6dr2SwBZiBZRKJRzHQSmF53n+RBsh1oKAbbFnYyt7NrbiuS6VSolQKIJlL26t9dVy0003LXrCViPHrlcS2EKsgFKphOd5/ocEtlhTXAemjsHJb2FNn8Dv+E7ugh1vheRusCUe1hr5iQixAhzHoVqtkk6n/aVdQqwJmVF45nHIT0EoDq0DoBRoDakh+PHnIZ6EN3wQWvtW+2pFndUfpBDiGmS
6xD3Pw3EcyuXyal+SELWw/t5noFqA9s0Q66qFNdT+G+uqPV4t1I7LrOxmFqIxEthCrIBisYjjONi2jVJKAlusPteptayVVQvmhcS6asc983jtdWJNkMAWYgWYMexAIIBlWTKGLVbf1LFaN/iVwtqIdUF+sva6Bqy3/bDL5TJve9vbOHDgAF/5ylf4/ve/z4033ujvutWIz372sytaJEkCW4gVYJZ1KaXQWvufC7FqTn6rNmbdiGAcTn27oZest/2wf/azn1GtVnn++ef51//6X/Pf/tt/4+GHH+b5558nGo029L2vi8BWSj2plJpUSh25zPNKKfU5pdRJpdSLSqlbmnFeIdaqcrns7xLkui6u6+I40rUoVonnwfQJiHY29rpYF0wdr71+GdbqftiTk5P8+q//Os8//zwHDhzgz/7sz/jLv/xL/uAP/oD3vOc9jI2Nceedd3LgwAH27dvH97//fQD+8R//kZ/7uZ/jlltu4V/9q39FLpfjc5/7HKOjo9x9993cfffduK7L+973Pvbt28dNN93UlPKkzZol/iXgMeAvLvP8fcDghY/XA3964b9CXJNMYIfDYZRSOI5DpVIhFFr8TklCNI17YUim0T3ZlQJ07fXW4qqeraf9sCORCE888QSf+cxn+Pu//3sAfvzjH/Oud72LX/7lX+aP/uiPeMc73sHHP/5xf0+A6elpHn30Ub71rW8Rj8f59Kc/zR//8R/ziU98gj/+4z/mO9/5Dt3d3Tz77LOMjIxw5EitHduMPbGbEtha6+8ppbYucMgDwF/o2qDBM0qpdqXURq31WDPOL8RaUz/JzGzXJ+PYYtXYF24UtW4stLUG1KuvX4T1uB/25dx222385m/+JtVqlV/4hV/gwIEDfPe73+Wll17iTW96EwCVSoWf+7mfe81rt2/fzunTp/nQhz7E/fff77fwl+NqrcPuB4bqPh++8NhrAlsp9SDwIMDmzZuvysUJ0WyVSq1Gs2k1eJ4nM8XF6rEs6L4B0sOLn3QGUJipFVNZYuvXWOv7YV/OnXfeyfe+9z3+4R/+gfe+97185CMfoaOjg7e//e18+ctfXvC1HR0dvPDCC3zzm9/k85//PH/5l3/Jk08+ecXrXcjVmnQ2323MvNuEaa2/qLU+qLU+mEwufx9WIVaDaU0Hg0Esy8LzPNkARKyunW+DSr6x11Tztcpny7Qe98MGOHfuHD09Pfybf/NveP/7389zzz3HG97wBn74wx/6Y+aFQoETJ04AF++JPT09jed5/NIv/RKf/OQnee6555b9vVytwB4GNtV9PgDIinxxzTJ “text/plain”: [ “
” ] }, “metadata”: { “needs_background”: “light” }, “output_type”: “display_data” } ], “source”: [

5.15. Extract and Visualize Individual Heartbeats 83 NeuroKit2, Release 0.0.39

# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')

# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')

Visually, from the plots above, the delineated outputs of DWT appear to be more accurate than those of CWT, especially for the P-peaks and P-wave boundaries.

Overall, for this signal, the peak and DWT methods seem to be superior to the CWT.


5.16 How to create epochs

So, in your experiment, participants undergo a number of trials (events), and these events are possibly of different conditions. How can you locate these events in your signals and turn them into epochs for further analysis? This example shows how to use NeuroKit to extract epochs from data based on event localisation. In case you have multiple data files for each subject, this example also shows you how to loop through the subject folders and put the files together in an epoch format for further analysis.

[1]: # Load NeuroKit and other useful packages
     import neurokit2 as nk
     import matplotlib.pyplot as plt
     %matplotlib inline

[10]: plt.rcParams['figure.figsize'] = [15, 5]  # Bigger images

In this example, we will use a short segment of data which has ECG, EDA and respiration (RSP) signals.

5.16.1 One signal with multiple event markings

[11]: # Retrieve biosignal data from data folder (sampling rate = 100 Hz)
      data = nk.data("bio_eventrelated_100hz")

Besides the signal channels, this data also has a fourth channel consisting of a series of 0s and 5s. This is a binary marking of the Digital Input channel in BIOPAC. Let’s visualize the event-marking channel below.

[12]: # Visualize the event-marking channel
      plt.plot(data['Photosensor'])

Depending on how you set up your experiment, the onset of an event can be marked by the signal going either from 0 to 5 or vice versa. In this particular dataset, the onsets of the events are marked where the signal in the event-marking channel goes from 5 to 0, and the offsets are marked where the signal goes from 0 to 5. As shown in the above figure, the signal goes from 5 to 0 four times, corresponding to the 4 events (4 trials) in this data.
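The onset-detection logic described above can be sketched in a few lines of plain Python (a simplified illustration of the idea only — `find_onsets` is a hypothetical helper, not the actual NeuroKit implementation):

```python
def find_onsets(channel, threshold=2.5):
    """Return indices where the signal crosses from above to below threshold.

    With a channel alternating between 5 and 0, these crossings are the
    event onsets described above.
    """
    return [i for i in range(1, len(channel))
            if channel[i - 1] > threshold and channel[i] <= threshold]

# A toy event channel: two events, each marked by a 5 -> 0 transition
marker = [5, 5, 5, 0, 0, 0, 5, 5, 0, 0, 5]
print(find_onsets(marker))  # -> [3, 8]
```

Detecting offsets is the symmetric operation (crossings from below to above the threshold).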


There were 2 types (conditions) of images that were shown to the participant: “Negative” vs. “Neutral” in terms of emotion. Each condition had 2 trials. The following list gives the condition order.

[13]: condition_list = ["Negative", "Neutral", "Neutral", "Negative"]

Before we can epoch the data, we have to locate the events and extract their related information. This can be done with the NeuroKit function events_find().

[14]: # Find events
      events = nk.events_find(event_channel=data["Photosensor"],
                              threshold_keep='below',
                              event_conditions=condition_list)

      events
[14]: {'onset': array([ 1024,  4957,  9224, 12984]),
       'duration': array([300, 300, 300, 300]),
       'label': array(['1', '2', '3', '4'], dtype='<U1'),
       'condition': ['Negative', 'Neutral', 'Neutral', 'Negative']}

The output of events_find() is a dictionary containing the event onsets, durations, labels and conditions. As stated, since the event onsets of this data are marked by the event channel going from 5 to 0, threshold_keep is set to 'below'. Depending on your data, you can customize the arguments of events_find() to correctly locate the events. You can use the events_plot() function to plot the detected events together with your event channel to confirm that they are correct.

[15]: plot = nk.events_plot(events, data['Photosensor'])

Or you can visualize the events together with all the other signals.

[16]: plot = nk.events_plot(events, data)


After you have located the events, you can create epochs using the NeuroKit epochs_create() function. However, we recommend processing your signals before cutting them into smaller epochs. You can read more about processing physiological signals with NeuroKit in the Customize your Processing Pipeline example.

[19]: # Process the signal
      df, info = nk.bio_process(ecg=data["ECG"], rsp=data["RSP"], eda=data["EDA"], sampling_rate=100)

Now, let’s think about what we want our epochs to look like. For this example, we want:

1. Epochs to start *1 second before the event onset*

2. Epochs to end *6 seconds* afterwards

These are passed to the epochs_start and epochs_end arguments, respectively. Our epochs will then cover the region from -1 s to +6 s relative to the event onsets (i.e., 700 data points, since the signal is sampled at 100 Hz).

[20]: # Build and plot epochs
      epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-1, epochs_end=6)

And as easy as that, you have created a dictionary of four dataframes, each corresponding to an epoch of an event. In the above example, all your epochs share the same start and end times, specified by epochs_start and epochs_end. Nevertheless, you can also pass a list of different timings to these two arguments to customize the duration of the epochs for each of your events.
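To build intuition, the slicing performed by epochs_create() can be sketched in plain Python (`make_epochs` is a hypothetical, deliberately simplified helper — the real function returns DataFrames with time indices, labels and conditions):

```python
def make_epochs(signal, onsets, sampling_rate, epochs_start, epochs_end):
    """Cut `signal` into one slice per event onset, keyed by epoch label."""
    epochs = {}
    for k, onset in enumerate(onsets, start=1):
        i0 = onset + int(epochs_start * sampling_rate)
        i1 = onset + int(epochs_end * sampling_rate)
        epochs[str(k)] = signal[i0:i1]
    return epochs

signal = list(range(2000))  # toy "signal": the sample index as value
epochs = make_epochs(signal, onsets=[500, 1200], sampling_rate=100,
                     epochs_start=-1, epochs_end=6)
print(len(epochs["1"]))  # -> 700 (7 s of data at 100 Hz)
```

Passing per-event lists for epochs_start/epochs_end simply means computing a different (i0, i1) window for each onset.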

5.16.2 One subject with multiple data files

In some experimental designs, instead of having one signal file with multiple events, each subject can have multiple files, where each file is the record of one event. In the following example, we will show you how to loop through the subject folders and put the files together in an epoch format for further analysis. Firstly, let’s say your data is arranged as follows, where each subject has a folder containing multiple data files corresponding to different events:


[Experiment folder]
|-- Data/
|   |-- Subject_001/
|   |   |-- event_1.csv
|   |   |-- event_2.csv
|   |   |-- ...
|   |-- Subject_002/
|   |   |-- event_1.csv
|   |   |-- event_2.csv
|   |   |-- ...
|-- analysis_script.py

The following illustrates how your analysis script might look. Try to re-create this data structure and analysis script on your computer! Now, in our analysis script, let’s load the necessary packages:

[21]: # Load packages
      import numpy as np
      import pandas as pd
      import os

Assume that your working directory is now the Experiment folder (where your analysis script lives) and that you want to read all the data files of Subject_001. Your analysis script should look something like this:

[ ]: # Your working directory should be the Experiment folder
     participant = 'Subject_001'

sampling_rate = 100

# List all data files in Subject_001 folder
all_files = os.listdir('Data/' + participant)

# Create an empty dictionary to store your files (events)
epochs = {}

# Loop through each file in the subject folder
for i, file in enumerate(all_files):
    # Read the file
    data = pd.read_csv('Data/' + participant + '/' + file)
    # Add a Label column (e.g., Label 1 for epoch 1)
    data['Label'] = np.full(len(data), str(i + 1))
    # Set the index of the data to time in seconds
    index = data.index / sampling_rate
    data = data.set_index(pd.Series(index))
    # Append the file to the dictionary
    epochs[str(i + 1)] = data

epochs

And ta-dah! You now have a dictionary called epochs that resembles the output of the NeuroKit epochs_create() function. Each DataFrame in epochs corresponds to an event (a trial) that Subject_001 underwent. The epochs are now ready for further analysis!


5.17 Complexity Analysis of Physiological Signals

A complex system can be loosely defined as one that comprises many components that interact with each other. In science, this approach is used to investigate how relationships between a system’s parts result in its collective behaviours, and how the system interacts and forms relationships with its environment. In recent years, there has been an increase in the use of complex systems to model physiological systems, for instance for medical purposes, where the dynamics of physiological systems can distinguish which systems are healthy and which are impaired. One prime example of how complexity exists in physiological systems is heart-rate variability (HRV), where higher complexity (greater HRV) has been shown to be an indicator of health, suggesting an enhanced ability to adapt to environmental changes. Although complexity is an important concept in health science, it is still foreign to many health scientists. This tutorial aims to provide a simple guide to the main tenets of complexity analysis in relation to physiological signals.

5.17.1 Basic Concepts

Definitions

Complex systems are examples of nonlinear dynamical systems. A dynamical system can be described by a vector of numbers, called its state, which can be represented by a point in a phase space. In terms of physiology, this vector might include values of variables such as lung capacity, heart rate and blood pressure. The state aims to provide a complete description of the system (in this case, health) at some point in time.

The set of all possible states is called the state space, and the way in which the system evolves over time (e.g., the change in a person’s health state over time) is referred to as a trajectory. After a sufficiently long time, different trajectories may evolve or converge to a common subset of state space called an attractor. The presence and behavior of attractors gives intuition about the underlying dynamical system. An attractor can be a fixed point, where all trajectories converge to a single point (i.e., homoeostatic equilibrium), or it can be periodic, where trajectories follow a regular path (i.e., a cyclic path).
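As a toy illustration of a fixed-point attractor (with made-up numbers, not physiological data), consider iterating a simple one-dimensional map: every trajectory, wherever it starts, converges to the same equilibrium:

```python
def iterate(f, x0, n):
    """Follow a trajectory of the dynamical system x_{t+1} = f(x_t)."""
    trajectory = [x0]
    for _ in range(n):
        trajectory.append(f(trajectory[-1]))
    return trajectory

# The map x -> 0.5 * x + 1 has a fixed point at x = 2 (since 0.5 * 2 + 1 = 2)
relax = lambda x: 0.5 * x + 1
print(iterate(relax, 0.0, 5))   # -> [0.0, 1.0, 1.5, 1.75, 1.875, 1.9375]
print(iterate(relax, 10.0, 5))  # a different starting state, same attractor
```

Both trajectories approach x = 2, the system's "homoeostatic equilibrium" in this toy picture.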

Time-delay embedding

Nonlinear time-series analysis is used to understand, characterize and predict dynamical information about human physiological systems. It is based on the concept of state-space reconstruction, which allows one to reconstruct the full dynamics of a nonlinear system from a single time series (a signal).

One standard method for state-space reconstruction is time-delay embedding (also known as delay-coordinate embedding). It aims to identify the state of a system at a particular point in time by searching the past history of observations for similar states and, by studying how these similar states evolve, predict the future course of the time series.

In conclusion, the purpose of time-delay embedding is to reconstruct the state and dynamics of an unknown dynamical system from measurements or observations of that system taken over time. In this gif here, you can see how the phase space is constructed by plotting delayed signals against the original signal (where each time series is an embedded, i.e., delayed, version of the original). Each point in the 3D reconstruction can be thought of as a time segment, with different points capturing different segments of the history of variable X. Credits go to this short illustration.
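The idea of time-delay embedding can be sketched in a few lines of plain Python (a bare-bones illustration with toy parameter values, not the method NeuroKit uses internally):

```python
def delay_embed(signal, delay, dimension):
    """Build delay vectors [x(t), x(t+delay), ..., x(t+(dimension-1)*delay)]."""
    n = len(signal) - (dimension - 1) * delay
    return [[signal[i + j * delay] for j in range(dimension)] for i in range(n)]

x = list(range(10))  # a toy "signal"
vectors = delay_embed(x, delay=2, dimension=3)
print(vectors[0])  # -> [0, 2, 4]
```

Each delay vector is one reconstructed state; plotting the vectors as points gives the reconstructed phase space described above.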


Embedding Parameters

For the reconstructed dynamics to be identical to the full dynamics of the system, some basic parameters need to be optimally determined for time-delay embedding:

• Time delay, tau
  – A measure of time that sets the basic delay
  – Generates the respective axes of the reconstruction: x(t), x(t-tau), x(t-2tau), ...
  – E.g., if tau=1, the state x(t) would be plotted against its prior self x(t-1)
  – If tau is too small, the constructed signals are too much alike; if it is too large, the reconstructed trajectory will show connections between states very far in the past and far in the future (no relationship), which might make the reconstruction extremely complex

• Embedding dimension, m
  – The number of vectors to be compared (i.e., the number of additional signals of time-delayed values of tau)
  – Dictates how many axes are shown in the reconstruction space, i.e., how much of the system’s history is shown
  – The dimensionality must be sufficiently high to generate relevant information and create a rich history of states over time, but also low enough to be easily understandable

• Tolerance threshold, r
  – The tolerance for accepting similar patterns

Visualize Embedding

This is how a typical sinusoidal signal looks when embedded in 2D and 3D respectively.


Using NeuroKit

There are different methods to guide the choice of parameters. In NeuroKit, you can use nk.complexity_optimize() to estimate the optimal parameters, including the time delay, embedding dimension and tolerance threshold.

import neurokit2 as nk

signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)
parameters = nk.complexity_optimize(signal, show=True)
parameters
>>> {'delay': 13, 'dimension': 6, 'r': 0.014}


In the above example, the optimal time delay is estimated using the Mutual Information method (Fraser & Swinney, 1986), the optimal embedding dimension using the Average False Nearest Neighbour method (Cao, 1997), and the optimal r using the Maximum Approximate Entropy method (Lu et al., 2008). These are the default methods in nk.complexity_optimize(). Nevertheless, you can specify your preferred method via the method arguments. You can read more about these methods in this chapter.

5.17.2 Entropy as measures of Complexity

The complexity of physiological signals can be represented by the entropy of these non-linear, dynamic physiological systems. Entropy can be defined as the measure of disorder in a signal.

Shannon Entropy (ShEn)

• call nk.entropy_shannon()
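As a concrete illustration of what nk.entropy_shannon() measures, Shannon entropy can be computed from the distribution of values in a signal (a plain-Python sketch, not the NeuroKit implementation):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits: sum(p * log2(1/p)) over the value distribution."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy([0, 1, 0, 1, 0, 1, 0, 1]))  # -> 1.0 (maximally disordered binary signal)
print(shannon_entropy([1, 1, 1, 1, 1, 1, 1, 1]))  # -> 0.0 (perfectly regular signal)
```

A signal that spends all its time at one value carries no information (0 bits), while a fair 50/50 alternation carries the maximum 1 bit per sample.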

Approximate Entropy (ApEn)

• Quantifies the amount of regularity and the unpredictability of fluctuations over time-series data.
• Advantages of ApEn: lower computational demand (it can be designed to work for small data samples, i.e., fewer than 50 data points, and can be applied in real time) and less sensitivity to noise.
• Smaller values indicate that the data is more regular and predictable, while larger values correspond to more complexity or irregularity in the data.
• call nk.entropy_approximate()

Examples of use


Reference | Signal | Parameters | Findings
--- | --- | --- | ---
Caldirola et al. (2004) | 17 min breath-by-breath recordings of respiration parameters | m=1, r=0.2 | Panic disorder patients showed higher ApEn indexes in baseline RSP patterns (all parameters) than healthy subjects
Burioka et al. (2003) | 30 min recordings of respiration, 20 s recordings of EEG | m=2, r=0.2, tau=1.1 s for respiration, 0.09 s for EEG | Lower ApEn of respiratory movement and EEG in stage IV sleep than in other stages of consciousness
Boettger et al. (2009) | 64 s recordings of QT and RR intervals | m=2, r=0.2 | Higher ratio of ApEn(QT) to ApEn(RR) for higher intensities of exercise, reflecting sympathetic activity
Taghavi et al. (2011) | 2 min recordings of EEG | m=2, r=0.1 | Higher ApEn in normal subjects than in schizophrenic patients, particularly in limbic areas of the brain

Sample Entropy (SampEn)

• A modification of approximate entropy.
• Advantages over ApEn: data-length independence and a relatively trouble-free implementation.
• Large values indicate high complexity, whereas smaller values characterize more self-similar and regular signals.
• call nk.entropy_sample()

Examples of use

Reference | Signal | Parameters | Findings
--- | --- | --- | ---
Lake et al. (2002) | 25 min recordings of RR intervals | m=3, r=0.2 | SampEn is lower in the course of neonatal sepsis and sepsis-like illness
Lake et al. (2011) | 24 h recordings of RR intervals | m=1, r varied | In patients over 40 years old, SampEn has high degrees of accuracy in distinguishing atrial fibrillation from normal sinus rhythm in 12-beat calculations performed hourly
Estrada et al. (2015) | EMG diaphragm signal | m=1, r=0.3 | fSampEn (fixed SampEn) method to extract RSP rate from respiratory EMG signal
Kapidzic et al. (2014) | RR intervals and the corresponding RSP signal | m=2, r=0.2 | During paced breathing, significant reduction of SampEn(Resp) and SampEn(RR) with age in male subjects, compared to a smaller and non-significant SampEn decrease in females
Abásolo et al. (2006) | 5 min recordings of EEG in 5-second epochs | m=1, r=0.25 | Alzheimer’s disease patients had lower SampEn than controls in parietal and occipital regions
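For intuition, the core of the SampEn computation can be sketched with a brute-force plain-Python implementation (an illustration only; here r is treated as an absolute tolerance, whereas toolboxes typically scale it by the signal's standard deviation):

```python
import math
import random

def sample_entropy(signal, m=2, r=0.2):
    """SampEn = -ln(A / B): B counts template pairs of length m that match
    within tolerance r (Chebyshev distance), A counts pairs of length m + 1.
    Brute-force O(n^2) version for illustration."""
    def matches(length):
        templates = [signal[i:i + length] for i in range(len(signal) - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )
    return -math.log(matches(m + 1) / matches(m))

random.seed(0)
regular = [0, 1] * 50                              # perfectly alternating signal
irregular = [random.random() for _ in range(100)]  # noisy signal of equal length
print(sample_entropy(regular), sample_entropy(irregular))  # the regular signal scores much lower
```

This matches the interpretation in the bullet points above: self-similar signals yield small values, irregular signals yield large ones.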


Fuzzy Entropy (FuzzyEn)

• Similar to ApEn and SampEn • call nk.entropy_fuzzy()

Multiscale Entropy (MSE)

• Expresses different levels of either ApEn or SampEn by means of multiple factors for generating multiple time series • Captures more useful information than using a scalar value produced by ApEn and SampEn • call nk.entropy_multiscale()
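The coarse-graining step at the heart of multiscale entropy can be sketched as follows (a plain-Python illustration of the idea, not the NeuroKit implementation):

```python
def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` into one coarser series."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

x = [1, 2, 3, 4, 5, 6]
print(coarse_grain(x, 2))  # -> [1.5, 3.5, 5.5]
```

An entropy measure (e.g., SampEn) is then computed on each coarse-grained series, giving one entropy value per time scale rather than a single scalar.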

5.17.3 Detrended Fluctuation Analysis (DFA)

5.18 Analyze Electrooculography EOG data (eye blinks, saccades, etc.)

[26]: # Load packages
      import numpy as np
      import pandas as pd
      import matplotlib.pyplot as plt
      import neurokit2 as nk

plt.rcParams['figure.figsize'] = [14, 8]  # Increase plot size

5.18.1 Explore the EOG signal

Let’s load the example dataset corresponding to a vertical EOG signal.

[47]: eog_signal = pd.read_csv("data/eog_100hz.csv")
      eog_signal = nk.as_vector(eog_signal)  # Extract the only column as a vector
      nk.signal_plot(eog_signal)


Let’s zoom in to some areas where clear blinks are present.

[48]: nk.signal_plot(eog_signal[100:1700])


5.18.2 Clean the signal

We can now filter the signal to remove some noise and trends.

[49]: eog_cleaned = nk.eog_clean(eog_signal, sampling_rate=100, method='neurokit')

Let’s visualize the same chunk and compare the clean version with the original signal.

[50]: nk.signal_plot([eog_signal[100:1700], eog_cleaned[100:1700]])

5.18.3 Detect and visualize eye blinks

We will now run a peak detection algorithm to locate the blinks.

[54]: blinks = nk.eog_findpeaks(eog_cleaned, sampling_rate=100, method="mne")
blinks
[54]: array([  277,   430,   562,   688,   952,  1378,  1520,  1752,  3353,
        3783,  3928,  4031,  4168,  4350,  4723,  4878,  5213,  5365,
        5699,  5904,  6312,  6539,  6714,  7224,  7382,  7553,  7827,
        8014,  8154,  8312,  8626,  8702,  9140,  9425,  9741,  9948,
       10142, 10230, 10368, 10708, 10965, 11256, 11683, 11775], dtype=int64)

[77]: events = nk.epochs_create(eog_cleaned, blinks, sampling_rate=100, epochs_start=-0.3, epochs_end=0.7)

[79]: data = nk.epochs_to_array(events)  # Convert to 2D array
data = nk.standardize(data)  # Rescale so that all the blinks are on the same scale

# Plot with the median (used here as a robust average)
plt.plot(data, linewidth=0.4)
plt.plot(np.median(data, axis=1), linewidth=3, linestyle='--', color="black")
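As a side note (not part of the original example), the peak indices returned by eog_findpeaks() can also be reduced to a simple summary statistic, such as the blink rate per minute. A minimal sketch, using a hypothetical array of peak positions:

```python
import numpy as np

def blink_rate(peaks, n_samples, sampling_rate=100):
    """Number of detected blinks per minute of recording."""
    duration_minutes = n_samples / sampling_rate / 60
    return len(peaks) / duration_minutes

# Hypothetical example: 44 peaks detected in a 2-minute recording at 100 Hz
peaks = np.linspace(100, 11900, num=44).astype(int)
blink_rate(peaks, n_samples=12000, sampling_rate=100)  # → 22.0
```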


5.19 Fit a function to a signal

This small tutorial will show you how to use Python to estimate the best-fitting line for some data. This can be used to find the optimal line passing through a signal.

[4]: # Load packages
import numpy as np
import pandas as pd
import scipy.optimize
import matplotlib.pyplot as plt
import neurokit2 as nk

plt.rcParams['figure.figsize'] = [14, 8]  # Increase plot size


5.19.1 Fit a linear function

We will start by generating some random data on a scale from 0 to 10 (the x-axis), and then passing them through a function to create the corresponding y values.

[7]: x = np.random.uniform(0., 10., size=100)
y = 3. * x + 2. + np.random.normal(0., 10., 100)
plt.plot(x, y, '.')

Now, in this case, we know that the best-fitting line will be a linear function (i.e., a straight line), and we want to find its parameters. A linear function has two parameters, the intercept and the slope. First, we need to create this function, which takes some x values and the parameters, and returns the y values.

[9]: def function_linear(x, intercept, slope):
    y = intercept + slope * x
    return y

Now, using the power of scipy, we can optimize this function based on our data to find the parameters that minimize the least-squares error. It just needs the function and the data's x and y values.

[16]: params, covariance = scipy.optimize.curve_fit(function_linear, x, y)
params
[16]: array([5.16622964, 2.46102004])

So the optimal parameters (in our case, the intercept and the slope) are returned in the params object. We can unpack these parameters (using the star symbol *) into our linear function to use them, and create the predicted y values for our x values.

[17]: fit = function_linear(x, *params)

plt.plot(x, y, '.')
plt.plot(x, fit, '-')
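To quantify how good such a fit is (an extra check, not part of the original tutorial), one can compute the coefficient of determination R² from the residuals:

```python
import numpy as np

def r_squared(y, fit):
    """Proportion of variance in y explained by the fitted values."""
    ss_res = np.sum((y - fit) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot

# Small worked example: a perfect fit and a slightly noisy one
y = np.array([1.0, 2.0, 3.0, 4.0])
r_squared(y, y)                               # → 1.0
r_squared(y, np.array([1.1, 1.9, 3.2, 3.8]))  # → 0.98
```

Values close to 1 indicate that the chosen function captures most of the structure in the data.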

5.19.2 Non-linear curves

We can also use this approach to approximate non-linear curves.

[22]: signal = nk.eda_simulate(sampling_rate=50)
nk.signal_plot(signal)


In this example, we will try to approximate this Skin Conductance Response (SCR) using a gamma distribution, which is quite a flexible distribution defined by 3 parameters (a, loc and scale). On top of these 3 parameters, we will add 2 more, an intercept and a size parameter, to give it more flexibility.

[24]: import scipy.stats  # used below; scipy.optimize alone does not load scipy.stats

def function_gamma(x, intercept, size, a, loc, scale):
    gamma = scipy.stats.gamma.pdf(x, a=a, loc=loc, scale=scale)
    y = intercept + (size * gamma)
    return y

We can start by visualizing the function with arbitrary parameters:

[34]: x = np.linspace(0, 20, num=500)
y = function_gamma(x, intercept=1, size=1, a=3, loc=0.5, scale=0.33)

nk.signal_plot(y)


Since these values are already a good start, we will use them as a starting point (through the p0 argument) to help the estimation algorithm converge (otherwise it might never find the right combination of parameters).

[35]: params, covariance = scipy.optimize.curve_fit(function_gamma, x, signal, p0=[1, 1, 3, 0.5, 0.33])
params
[35]: array([0.9518406 , 2.21035698, 1.05863983, 2.16432609, 1.97046573])

[36]: fit = function_gamma(x, *params)

plt.plot(x, signal, '.')
plt.plot(x, fit, '-')


CHAPTER SIX

RESOURCES

Contents:

6.1 Recording good quality signals

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see this tutorial).

This tutorial is a work in progress.

6.1.1 Recording

• Electrocardiogram (ECG) electrodes placement
• Facial EMG electrodes placement
• RSP belt placement
• Best Practices for Collecting Physiological Data

6.1.2 Signal quality

• Improving Data Quality of ECG
• Improving Data Quality of EDA

6.1.3 Artifacts and Anomalies

• Identifying and Handling Cardiac Arrhythmia


6.2 What software for physiological signal processing

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see this tutorial).

If you’re here, it’s probably because you have (or plan to have) some physiological data (aka biosignals, e.g., ECG for cardiac activity, RSP for respiration, EDA for electrodermal activity, EMG for muscle activity, etc.), that you plan to process and analyze these signals, and that you don’t know where to start. Whether you are an undergrad, a master’s or PhD student, a postdoc or a full professor, you’re in the right place. So let’s discuss a few things to consider to best decide on your options.

6.2.1 Software vs. programming language (packages)

In this context, a software would be a program that you download, install (through some sort of .exe file), and start, similarly to most of the other programs installed on our computers. They are appealing because of their (apparent) simplicity and familiarity of usage: you can click on icons and menus and you can see all the available options, which makes them easy to explore. In a way, it also feels safe, because you can always close the software, press “do not save the changes”, and start again. Unfortunately, when it comes to science, this comes with a set of limitations: they are in general quite expensive, are limited to the set of included features (it’s not easy to use one feature from one software, and then another from another one), have a slow pace of updates (and thus often don’t include the most recent and cutting-edge algorithms, but rather well-established ones), and are not open-source (and thus prevent running fully reproducible analyses).
• Software for biosignals processing
– AcqKnowledge: General physiological analysis software (ECG, PPG, RSP, EDA, EMG, . . . ).
– Kubios: Heart-rate variability (HRV).
Unfortunately, this is the preferred option for many researchers. Why? For PIs, it’s usually because these are established tools backed by some sort of company, with experts, advertisers and sellers that do their job well. The companies also offer some guarantee in terms of training, updates, issue troubleshooting, etc. For younger researchers starting with physiological data analysis, it’s usually because they don’t have much (or any) experience with programming languages. They feel like there is already a lot to learn on the theoretical side of physiological signal processing, so they don’t want to add learning a programming language on top of that. However, it is important to understand that you don’t necessarily have to know how to code to use some of the packages. Moreover, some of them include a GUI (see below), which makes them very easy to use and a great alternative to the software mentioned above.

Note: TLDR; Closed proprietary software, even though seemingly appealing, might not be a good investment of time or money.


6.2.2 GUI vs. code

TODO.
• Packages with a GUI
– Ledalab: EDA (Matlab).
– PsPM: Primarily EDA (Matlab).
– biopeaks: ECG, PPG (Python).
– mnelab: EEG (Python).

Note: TLDR; While GUIs can be good alternatives and a first step to dive into programming language-based tools, coding will provide you with more freedom, incredible power and the best fit possible for your data and issues.

6.2.3 Matlab vs. Python vs. R vs. Julia

What is the best programming language for physiological data analysis? Matlab is the historical main contender. However. . . TODO.
• Python-based packages
– NeuroKit2: ECG, PPG, RSP, EDA, EMG.
– BioSPPy: ECG, RSP, EDA, EMG.
– PySiology: ECG, EDA, EMG.
– pyphysio: ECG, PPG, EDA.
– HeartPy: ECG.
– hrv: ECG.
– hrv-analysis: ECG.
– pyhrv: ECG.
– py-ecg-detectors: ECG.
– Systole: PPG.
– eda-explorer: EDA.
– Pypsy: EDA.
– MNE: EEG.
– tensorpac: EEG.
– PyGaze: Eye-tracking.
– PyTrack: Eye-tracking.
• Matlab-based packages
– BreatheEasyEDA: EDA.
– EDA: EDA.
– unfold: EEG.


6.3 Additional Resources

Hint: Would like to add something? Join us by contributing (see this tutorial).

6.3.1 General Neuroimaging

• Seven quick tips for analysis scripts in neuroimaging (van Vliet, 2020).
• Neuroimaging tutorials and resources

6.3.2 ECG

• Improving Data Quality: ECG
• Understanding ECG waves
• Analysing a Noisy ECG Signal
• All About HRV Part 4: Respiratory Sinus Arrhythmia

6.3.3 EDA

• Improving Data Quality: EDA
• All About EDA Part 1: Introduction to Electrodermal Activity
• All About EDA Part 2: Components of Skin Conductance
• Skin Conductance Response – What it is and How to Measure it

6.3.4 EEG

• Electroencephalogram (EEG) Recording Protocol (Farrens et al., 2019)
• Compute the average bandpower of an EEG signal

CHAPTER SEVEN

FUNCTIONS

7.1 ECG

Submodule for NeuroKit.

ecg_analyze(data, sampling_rate=1000, method='auto')

Performs ECG analysis on either epochs (event-related analysis) or on longer periods of data such as resting-state data.

Parameters
• data (Union[dict, pd.DataFrame]) – A dictionary of epochs, containing one DataFrame per epoch, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df(). Can also take a DataFrame of processed signals from a longer period of data, typically generated by ecg_process() or bio_process(). Can also take a dict containing sets of separate periods of data.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second). Defaults to 1000Hz.
• method (str) – Can be one of ‘event-related’ for event-related analysis on epochs, or ‘interval-related’ for analysis on longer periods of data. Defaults to ‘auto’, where the right method is chosen based on the mean duration of the data (‘event-related’ for durations under 10s).

Returns DataFrame – A dataframe containing the analyzed ECG features. If event-related analysis is conducted, each epoch is indicated by the Label column. See ecg_eventrelated() and ecg_intervalrelated() docstrings for details.

See also: bio_process(), ecg_process(), epochs_create(), ecg_eventrelated(), ecg_intervalrelated()

Examples

>>> import neurokit2 as nk
>>>
>>> # Example 1: Download the data for event-related analysis
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data for event-related analysis
>>> df, info = nk.bio_process(ecg=data["ECG"], sampling_rate=100)
>>> events = nk.events_find(data["Photosensor"], threshold_keep='below',
...                         event_conditions=["Negative", "Neutral",
...                                           "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-0.1, epochs_end=1.9)
>>>
>>> # Analyze
>>> nk.ecg_analyze(epochs, sampling_rate=100)
  Label Condition ... ECG_Phase_Completion_Ventricular ECG_Quality_Mean
1     1  Negative ...                              ...              ...
2     2   Neutral ...                              ...              ...
3     3   Neutral ...                              ...              ...
4     4  Negative ...                              ...              ...

[4 rows x 17 columns]
>>>
>>> # Example 2: Download the resting-state data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Process the data
>>> df, info = nk.ecg_process(data["ECG"], sampling_rate=100)
>>>
>>> # Analyze
>>> nk.ecg_analyze(df, sampling_rate=100) #doctest: +ELLIPSIS
  ECG_Rate_Mean HRV_RMSSD ...
0           ...       ...

[1 rows x 37 columns]

ecg_clean(ecg_signal, sampling_rate=1000, method='neurokit')

Clean an ECG signal. Prepare a raw ECG signal for R-peak detection with the specified method.

Parameters
• ecg_signal (Union[list, np.array, pd.Series]) – The raw ECG channel.
• sampling_rate (int) – The sampling frequency of ecg_signal (in Hz, i.e., samples/second). Defaults to 1000.
• method (str) – The processing pipeline to apply. Can be one of ‘neurokit’ (default), ‘biosppy’, ‘pantompkins1985’, ‘hamilton2002’, ‘elgendi2010’, ‘engzeemod2012’.

Returns array – Vector containing the cleaned ECG signal.

See also: ecg_findpeaks(), signal_rate(), ecg_process(), ecg_plot()

Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>> import matplotlib.pyplot as plt
>>>
>>> ecg = nk.ecg_simulate(duration=10, sampling_rate=1000)
>>> signals = pd.DataFrame({"ECG_Raw": ecg,
...                         "ECG_NeuroKit": nk.ecg_clean(ecg, sampling_rate=1000, method="neurokit"),
...                         "ECG_BioSPPy": nk.ecg_clean(ecg, sampling_rate=1000, method="biosppy"),
...                         "ECG_PanTompkins": nk.ecg_clean(ecg, sampling_rate=1000, method="pantompkins1985"),
...                         "ECG_Hamilton": nk.ecg_clean(ecg, sampling_rate=1000, method="hamilton2002"),
...                         "ECG_Elgendi": nk.ecg_clean(ecg, sampling_rate=1000, method="elgendi2010"),
...                         "ECG_EngZeeMod": nk.ecg_clean(ecg, sampling_rate=1000, method="engzeemod2012")})
>>> signals.plot()

References

• Jiapu Pan and Willis J. Tompkins. A Real-Time QRS Detection Algorithm. IEEE Transactions on Biomedical Engineering, BME-32(3) (1985), pp. 230–236.
• Hamilton, Open Source ECG Analysis Software Documentation, E.P. Limited, 2002.

ecg_delineate(ecg_cleaned, rpeaks=None, sampling_rate=1000, method='peak', show=False, show_type='peaks', check=False)

Delineate the QRS complex.

• Cardiac Cycle: A typical ECG heartbeat consists of a P wave, a QRS complex and a T wave. The P wave represents the wave of depolarization that spreads from the SA-node throughout the atria. The QRS complex reflects the rapid depolarization of the right and left ventricles. Since the ventricles are the largest part of the heart, in terms of mass, the QRS complex usually has a much larger amplitude than the P wave. The T wave represents the repolarization of the ventricles. On rare occasions, a U wave can be seen following the T wave. The U wave is believed to be related to the last remnants of ventricular repolarization.

Parameters
• ecg_cleaned (Union[list, np.array, pd.Series]) – The cleaned ECG channel as returned by ecg_clean().
• rpeaks (Union[list, np.array, pd.Series]) – The samples at which R-peaks occur. Accessible with the key “ECG_R_Peaks” in the info dictionary returned by ecg_findpeaks().
• sampling_rate (int) – The sampling frequency of ecg_signal (in Hz, i.e., samples/second). Defaults to 500.
• method (str) – Can be one of ‘peak’ (default) for a peak-based method, ‘cwt’ for continuous wavelet transform, or ‘dwt’ for discrete wavelet transform.
• show (bool) – If True, will return a plot visualizing the delineated waves information.
• show_type (str) – The type of delineated waves information shown in the plot.
• check (bool) – Defaults to False.

Returns
• waves (dict) – A dictionary containing additional information. For the derivative method, the dictionary contains the samples at which P-peaks, Q-peaks, S-peaks, T-peaks, P-onsets and T-offsets occur, accessible with the keys “ECG_P_Peaks”, “ECG_Q_Peaks”, “ECG_S_Peaks”, “ECG_T_Peaks”, “ECG_P_Onsets”, “ECG_T_Offsets” respectively. For wavelet methods, the dictionary contains the samples at which P-peaks, T-peaks, P-onsets, P-offsets, T-onsets, T-offsets, QRS-onsets and QRS-offsets occur, accessible with the keys “ECG_P_Peaks”, “ECG_T_Peaks”, “ECG_P_Onsets”, “ECG_P_Offsets”,


“ECG_T_Onsets”, “ECG_T_Offsets”, “ECG_R_Onsets”, “ECG_R_Offsets” respectively.
• signals (DataFrame) – A DataFrame of same length as the input signal in which occurrences of peaks, onsets and offsets are marked as “1” in a list of zeros.

See also: ecg_clean(), signal_fixpeaks(), ecg_peaks(), signal_rate(), ecg_process(), ecg_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=10, sampling_rate=1000)
>>> cleaned = nk.ecg_clean(ecg, sampling_rate=1000)
>>> _, rpeaks = nk.ecg_peaks(cleaned)
>>> signals, waves = nk.ecg_delineate(cleaned, rpeaks, sampling_rate=1000, method="peak")
>>> nk.events_plot(waves["ECG_P_Peaks"], cleaned)

>>> nk.events_plot(waves["ECG_T_Peaks"], cleaned)

References

• Martínez, J. P., Almeida, R., Olmos, S., Rocha, A. P., & Laguna, P. (2004). A wavelet-based ECG delineator: evaluation on standard databases. IEEE Transactions on Biomedical Engineering, 51(4), 570-581.

ecg_eventrelated(epochs, silent=False)

Performs event-related ECG analysis on epochs.

Parameters
• epochs (Union[dict, pd.DataFrame]) – A dict containing one DataFrame per event/trial, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df().
• silent (bool) – If True, silence possible warnings.

Returns DataFrame – A dataframe containing the analyzed ECG features for each epoch, with each epoch indicated by the Label column (if not present, by the Index column). The analyzed features consist of the following:
• ”ECG_Rate_Max”: the maximum heart rate after stimulus onset.
• ”ECG_Rate_Min”: the minimum heart rate after stimulus onset.
• ”ECG_Rate_Mean”: the mean heart rate after stimulus onset.
• ”ECG_Rate_Max_Time”: the time at which maximum heart rate occurs.
• ”ECG_Rate_Min_Time”: the time at which minimum heart rate occurs.


• ”ECG_Phase_Atrial”: indication of whether the onset of the event concurs with cardiac systole (1) or diastole (0).
• ”ECG_Phase_Ventricular”: indication of whether the onset of the event concurs with cardiac systole (1) or diastole (0).
• ”ECG_Phase_Atrial_Completion”: indication of the stage of the current cardiac (atrial) phase (0 to 1) at the onset of the event.
• ”ECG_Phase_Ventricular_Completion”: indication of the stage of the current cardiac (ventricular) phase (0 to 1) at the onset of the event.

We also include the following experimental features related to the parameters of a quadratic model:
• ”ECG_Rate_Trend_Linear”: the parameter corresponding to the linear trend.
• ”ECG_Rate_Trend_Quadratic”: the parameter corresponding to the curvature.
• ”ECG_Rate_Trend_R2”: the quality of the quadratic model. If too low, the parameters might not be reliable or meaningful.

See also: events_find(), epochs_create(), bio_process()

Examples

>>> import neurokit2 as nk
>>>
>>> # Example with simulated data
>>> ecg, info = nk.ecg_process(nk.ecg_simulate(duration=20))
>>>
>>> # Process the data
>>> epochs = nk.epochs_create(ecg, events=[5000, 10000, 15000],
...                           epochs_start=-0.1, epochs_end=1.9)
>>> nk.ecg_eventrelated(epochs)
  Label Event_Onset ... ECG_Phase_Completion_Ventricular ECG_Quality_Mean
1     1         ... ...                              ...              ...
2     2         ... ...                              ...              ...
3     3         ... ...                              ...              ...

[3 rows x 16 columns]
>>>
>>> # Example with real data
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data
>>> df, info = nk.bio_process(ecg=data["ECG"], sampling_rate=100)
>>> events = nk.events_find(data["Photosensor"],
...                         threshold_keep='below',
...                         event_conditions=["Negative", "Neutral",
...                                           "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100,
...                           epochs_start=-0.1, epochs_end=1.9)
>>> nk.ecg_eventrelated(epochs) #doctest: +ELLIPSIS
  Label Condition ... ECG_Phase_Completion_Ventricular ECG_Quality_Mean
1     1  Negative ...                              ...              ...
2     2   Neutral ...                              ...              ...
3     3   Neutral ...                              ...              ...
4     4  Negative ...                              ...              ...

[4 rows x 17 columns]

ecg_findpeaks(ecg_cleaned, sampling_rate=1000, method='neurokit', show=False)

Find R-peaks in an ECG signal. Low-level function used by ecg_peaks() to identify R-peaks in an ECG signal using one of several algorithms. See ecg_peaks() for details.

Parameters


• ecg_cleaned (Union[list, np.array, pd.Series]) – The cleaned ECG channel as returned by ecg_clean().
• sampling_rate (int) – The sampling frequency of ecg_signal (in Hz, i.e., samples/second). Defaults to 1000.
• method (string) – The algorithm to be used for R-peak detection. Can be one of ‘neurokit’ (default), ‘pantompkins1985’, ‘hamilton2002’, ‘christov2004’, ‘gamboa2008’, ‘elgendi2010’, ‘engzeemod2012’, ‘kalidas2017’, ‘martinez2003’, ‘rodrigues2020’ or ‘promac’.
• show (bool) – If True, will return a plot visualizing the thresholds used in the algorithm. Useful for debugging.

Returns info (dict) – A dictionary containing additional information, in this case the samples at which R-peaks occur, accessible with the key “ECG_R_Peaks”.

See also: ecg_clean(), signal_fixpeaks(), ecg_peaks(), ecg_rate(), ecg_process(), ecg_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=10, sampling_rate=1000)
>>> cleaned = nk.ecg_clean(ecg, sampling_rate=1000)
>>> info = nk.ecg_findpeaks(cleaned)
>>> nk.events_plot(info["ECG_R_Peaks"], cleaned)

>>>
>>> # Different methods
>>> neurokit = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="neurokit"), method="neurokit")
>>> pantompkins1985 = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="pantompkins1985"), method="pantompkins1985")
>>> hamilton2002 = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="hamilton2002"), method="hamilton2002")
>>> martinez2003 = nk.ecg_findpeaks(cleaned, method="martinez2003")
>>> christov2004 = nk.ecg_findpeaks(cleaned, method="christov2004")
>>> gamboa2008 = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="gamboa2008"), method="gamboa2008")
>>> elgendi2010 = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="elgendi2010"), method="elgendi2010")
>>> engzeemod2012 = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="engzeemod2012"), method="engzeemod2012")
>>> kalidas2017 = nk.ecg_findpeaks(nk.ecg_clean(ecg, method="kalidas2017"), method="kalidas2017")
>>> rodrigues2020 = nk.ecg_findpeaks(cleaned, method="rodrigues2020")
>>>
>>> # Visualize
>>> nk.events_plot([neurokit["ECG_R_Peaks"],
...                 pantompkins1985["ECG_R_Peaks"],
...                 hamilton2002["ECG_R_Peaks"],
...                 christov2004["ECG_R_Peaks"],
...                 gamboa2008["ECG_R_Peaks"],
...                 elgendi2010["ECG_R_Peaks"],
...                 engzeemod2012["ECG_R_Peaks"],
...                 kalidas2017["ECG_R_Peaks"],
...                 martinez2003["ECG_R_Peaks"],
...                 rodrigues2020["ECG_R_Peaks"]], cleaned)

>>>
>>> # Method-agreement
>>> ecg = nk.ecg_simulate(duration=10, sampling_rate=500)
>>> ecg = nk.signal_distort(ecg,
...                         sampling_rate=500,
...                         noise_amplitude=0.2, noise_frequency=[25, 50],
...                         artifacts_amplitude=0.2, artifacts_frequency=50)
>>> nk.ecg_findpeaks(ecg, sampling_rate=1000, method="promac", show=True)
{'ECG_R_Peaks': array(...)}

References

• Gamboa, H. (2008). Multi-modal behavioral biometrics based on HCI and electrophysiology. PhD Thesis, Universidade.
• Zong, W., Heldt, T., Moody, G. B., & Mark, R. G. (2003, September). An open-source algorithm to detect onset of arterial blood pressure pulses. In Computers in Cardiology, 2003 (pp. 259-262). IEEE.
• Hamilton, Open Source ECG Analysis Software Documentation, E.P. Limited, 2002.
• Pan, J., & Tompkins, W. J. (1985). A real-time QRS detection algorithm. IEEE Transactions on Biomedical Engineering, (3), 230-236.
• Engelse, W. A. H., & Zeelenberg, C. (1979). A single scan algorithm for QRS detection and feature extraction. IEEE Comput Cardiol. Long Beach: IEEE Computer Society.
• Lourenço, A., Silva, H., Leite, P., Lourenço, R., & Fred, A. L. (2012, February). Real Time Electrocardiogram Segmentation for Finger based ECG Biometrics. In Biosignals (pp. 49-54).

ecg_intervalrelated(data, sampling_rate=1000)

Performs ECG analysis on longer periods of data (typically > 10 seconds), such as resting-state data.

Parameters
• data (Union[dict, pd.DataFrame]) – A DataFrame containing the different processed signal(s) as different columns, typically generated by ecg_process() or bio_process(). Can also take a dict containing sets of separately processed DataFrames.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).

Returns DataFrame – A dataframe containing the analyzed ECG features. The analyzed features consist of the following:
• ”ECG_Rate_Mean”: the mean heart rate.
• ”ECG_HRV”: the different heart rate variability metrics. See hrv_summary() docstrings for details.

See also: bio_process(), ecg_eventrelated()


Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Process the data
>>> df, info = nk.ecg_process(data["ECG"], sampling_rate=100)
>>>
>>> # Single dataframe is passed
>>> nk.ecg_intervalrelated(df, sampling_rate=100)
  ECG_Rate_Mean HRV_RMSSD ...
0           ...       ...

[1 rows x 55 columns]
>>>
>>> epochs = nk.epochs_create(df, events=[0, 15000], sampling_rate=100,
...                           epochs_end=150)
>>> nk.ecg_intervalrelated(epochs) #doctest: +ELLIPSIS
  ECG_Rate_Mean HRV_RMSSD ...
1           ...       ...

[2 rows x 55 columns]

ecg_peaks(ecg_cleaned, sampling_rate=1000, method='neurokit', correct_artifacts=False)

Find R-peaks in an ECG signal using the specified method.

Parameters
• ecg_cleaned (Union[list, np.array, pd.Series]) – The cleaned ECG channel as returned by ecg_clean().
• sampling_rate (int) – The sampling frequency of ecg_signal (in Hz, i.e., samples/second). Defaults to 1000.
• method (string) – The algorithm to be used for R-peak detection. Can be one of ‘neurokit’ (default), ‘pantompkins1985’, ‘hamilton2002’, ‘christov2004’, ‘gamboa2008’, ‘elgendi2010’, ‘engzeemod2012’ or ‘kalidas2017’.
• correct_artifacts (bool) – Whether or not to identify artifacts as defined by Jukka A. Lipponen & Mika P. Tarvainen (2019): A robust algorithm for heart rate variability time series artefact correction using novel beat classification, Journal of Medical Engineering & Technology, DOI: 10.1080/03091902.2019.1640306.

Returns
• signals (DataFrame) – A DataFrame of same length as the input signal in which occurrences of R-peaks are marked as “1” in a list of zeros with the same length as ecg_cleaned. Accessible with the key “ECG_R_Peaks”.
• info (dict) – A dictionary containing additional information, in this case the samples at which R-peaks occur, accessible with the key “ECG_R_Peaks”.

See also: ecg_clean(), ecg_findpeaks(), ecg_process(), ecg_plot(), signal_rate(), signal_fixpeaks()


Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=10, sampling_rate=1000)
>>> cleaned = nk.ecg_clean(ecg, sampling_rate=1000)
>>> signals, info = nk.ecg_peaks(cleaned, correct_artifacts=True)
>>> nk.events_plot(info["ECG_R_Peaks"], cleaned)

References

• Gamboa, H. (2008). Multi-modal behavioral biometrics based on HCI and electrophysiology. PhD Thesis, Universidade.
• W. Zong, T. Heldt, G.B. Moody, and R.G. Mark. An open-source algorithm to detect onset of arterial blood pressure pulses. In Computers in Cardiology, 2003, pages 259–262, 2003.
• Hamilton, Open Source ECG Analysis Software Documentation, E.P. Limited, 2002.
• Jiapu Pan and Willis J. Tompkins. A Real-Time QRS Detection Algorithm. IEEE Transactions on Biomedical Engineering, BME-32(3) (1985), pp. 230–236.
• C. Zeelenberg, A single scan algorithm for QRS detection and feature extraction, IEEE Comp. in Cardiology, vol. 6, pp. 37-42, 1979.
• A. Lourenco, H. Silva, P. Leite, R. Lourenco and A. Fred, “Real Time Electrocardiogram Segmentation for Finger Based ECG Biometrics”, BIOSIGNALS 2012, pp. 49-54, 2012.

ecg_phase(ecg_cleaned, rpeaks=None, delineate_info=None, sampling_rate=None)

Compute cardiac phase (for both atrial and ventricular), labelled as 1 for systole and 0 for diastole.

Parameters
• ecg_cleaned (Union[list, np.array, pd.Series]) – The cleaned ECG channel as returned by ecg_clean().
• rpeaks (list or array or DataFrame or Series or dict) – The samples at which the different ECG peaks occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with ecg_findpeaks() or ecg_peaks().
• delineate_info (dict) – A dictionary containing additional information of ECG delineation, which can be obtained with ecg_delineate().
• sampling_rate (int) – The sampling frequency of ecg_signal (in Hz, i.e., samples/second). Defaults to None.

Returns signals (DataFrame) – A DataFrame of same length as ecg_signal containing the following columns:
• ”ECG_Phase_Atrial”: cardiac phase, marked by “1” for systole and “0” for diastole.
• ”ECG_Phase_Completion_Atrial”: cardiac phase (atrial) completion, expressed in percentage (from 0 to 1), representing the stage of the current cardiac phase.
• ”ECG_Phase_Ventricular”: cardiac phase, marked by “1” for systole and “0” for diastole.


• ”ECG_Phase_Completion_Ventricular”: cardiac phase (ventricular) completion, ex- pressed in percentage (from 0 to 1), representing the stage of the current cardiac phase. See also: ecg_clean(), ecg_peaks(), ecg_process(), ecg_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=10, sampling_rate=1000)
>>> cleaned = nk.ecg_clean(ecg, sampling_rate=1000)
>>> _, rpeaks = nk.ecg_peaks(cleaned)
>>> signals, waves = nk.ecg_delineate(cleaned, rpeaks, sampling_rate=1000)
>>>
>>> cardiac_phase = nk.ecg_phase(ecg_cleaned=cleaned, rpeaks=rpeaks,
...                              delineate_info=waves, sampling_rate=1000)
>>> nk.signal_plot([cleaned, cardiac_phase], standardize=True)

ecg_plot(ecg_signals, rpeaks=None, sampling_rate=None, show_type='default')

Visualize ECG data.

Parameters
• ecg_signals (DataFrame) – DataFrame obtained from ecg_process().
• rpeaks (dict) – The samples at which the R-peaks occur. Dict returned by ecg_process(). Defaults to None.
• sampling_rate (int) – The sampling frequency of the ECG (in Hz, i.e., samples/second). Needs to be supplied if the data should be plotted over time in seconds. Otherwise the data is plotted over samples. Defaults to None. Must be specified to plot artifacts.
• show_type (str) – Visualize the ECG data with ‘default’, or visualize artifacts thresholds with ‘artifacts’ produced by ecg_fixpeaks(), or ‘full’ to visualize both.

Returns fig – Figure representing a plot of the processed ECG signals (and peak artifacts).

Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=15, sampling_rate=1000, heart_rate=80)
>>> signals, info = nk.ecg_process(ecg, sampling_rate=1000)
>>> nk.ecg_plot(signals, sampling_rate=1000, show_type='default')

See also: ecg_process() ecg_process(ecg_signal, sampling_rate=1000, method='neurokit') Process an ECG signal. Convenience function that automatically processes an ECG signal. Parameters • ecg_signal (Union[list, np.array, pd.Series]) – The raw ECG channel.


• sampling_rate (int) – The sampling frequency of ecg_signal (in Hz, i.e., samples/second). Defaults to 1000.
• method (str) – The processing pipeline to apply. Defaults to "neurokit".
Returns
• signals (DataFrame) – A DataFrame of the same length as the ecg_signal containing the following columns:
  – "ECG_Raw": the raw signal.
  – "ECG_Clean": the cleaned signal.
  – "ECG_R_Peaks": the R-peaks marked as "1" in a list of zeros.
  – "ECG_Rate": heart rate interpolated between R-peaks.
  – "ECG_P_Peaks": the P-peaks marked as "1" in a list of zeros.
  – "ECG_Q_Peaks": the Q-peaks marked as "1" in a list of zeros.
  – "ECG_S_Peaks": the S-peaks marked as "1" in a list of zeros.
  – "ECG_T_Peaks": the T-peaks marked as "1" in a list of zeros.
  – "ECG_P_Onsets": the P-onsets marked as "1" in a list of zeros.
  – "ECG_P_Offsets": the P-offsets marked as "1" in a list of zeros (only when the method in ecg_delineate is wavelet).
  – "ECG_T_Onsets": the T-onsets marked as "1" in a list of zeros (only when the method in ecg_delineate is wavelet).
  – "ECG_T_Offsets": the T-offsets marked as "1" in a list of zeros.
  – "ECG_R_Onsets": the R-onsets marked as "1" in a list of zeros (only when the method in ecg_delineate is wavelet).
  – "ECG_R_Offsets": the R-offsets marked as "1" in a list of zeros (only when the method in ecg_delineate is wavelet).
  – "ECG_Phase_Atrial": cardiac phase, marked by "1" for systole and "0" for diastole.
  – "ECG_Phase_Ventricular": cardiac phase, marked by "1" for systole and "0" for diastole.
  – "ECG_Atrial_PhaseCompletion": cardiac phase (atrial) completion, expressed as a proportion (from 0 to 1), representing the stage of the current cardiac phase.
  – "ECG_Ventricular_PhaseCompletion": cardiac phase (ventricular) completion, expressed as a proportion (from 0 to 1), representing the stage of the current cardiac phase.
• info (dict) – A dictionary containing the samples at which the R-peaks occur, accessible with the key "ECG_Peaks".
See also: ecg_clean(), ecg_findpeaks(), ecg_plot(), signal_rate(), signal_fixpeaks()
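Since the peak columns above are binary markers ("1" in a vector of zeros), they can be converted back to sample indices in one line. A minimal NumPy sketch — the `marker` vector is a hypothetical stand-in for a column such as `signals["ECG_R_Peaks"]`:

```python
import numpy as np

# Hypothetical marker column, standing in for e.g. signals["ECG_R_Peaks"]:
# events are marked as 1 in a vector of zeros.
marker = np.zeros(20, dtype=int)
marker[[3, 9, 15]] = 1

# Recover the sample indices of the marked events.
rpeak_indices = np.where(marker == 1)[0]
print(rpeak_indices)  # [ 3  9 15]
```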


Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=15, sampling_rate=1000, heart_rate=80)
>>> signals, info = nk.ecg_process(ecg, sampling_rate=1000)
>>> nk.ecg_plot(signals)

ecg_quality(ecg_cleaned, rpeaks=None, sampling_rate=1000)
Quality of ECG Signal.

Compute a continuous index of quality of the ECG signal, by interpolating the distance of each QRS segment from the average QRS segment present in the data. This index is therefore relative: 1 corresponds to heartbeats that are the closest to the average sample, and 0 corresponds to the most distant heartbeat from that average sample.

Returns array – Vector containing the quality index ranging from 0 to 1.
See also: ecg_segment(), ecg_delineate()
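The relative index described above can be illustrated with a toy reconstruction. This is a sketch of the idea only, not the library's implementation; `beats` is a hypothetical stack of segmented QRS complexes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of segmented heartbeats (rows = beats, columns = samples);
# the third beat is much noisier than the others.
template = np.sin(np.linspace(0, np.pi, 50))
beats = template + rng.normal(0.0, [[0.05], [0.05], [0.8]], (3, 50))

# Distance of each beat from the average beat ...
average = beats.mean(axis=0)
distances = np.linalg.norm(beats - average, axis=1)

# ... rescaled so that 1 = closest to the average, 0 = most distant.
quality = 1 - (distances - distances.min()) / (distances.max() - distances.min())
```

The noisy beat receives the lowest score; ecg_quality() additionally interpolates such per-beat values into a continuous signal.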

Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=30, sampling_rate=300, noise=0.2)
>>> ecg_cleaned = nk.ecg_clean(ecg, sampling_rate=300)
>>> quality = nk.ecg_quality(ecg_cleaned, sampling_rate=300)
>>>
>>> nk.signal_plot([ecg_cleaned, quality], standardize=True)

ecg_rate(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')
Calculate signal rate from a series of peaks.

This function can also be called via ecg_rate(), ppg_rate() or rsp_rate() (aliases provided for consistency).

Parameters
• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with signal_findpeaks(). If a DataFrame is passed in, it is assumed it is of the same length as the input signal in which occurrences of R-peaks are marked as "1", with such containers obtained with e.g., ecg_findpeaks() or rsp_findpeaks().
• sampling_rate (int) – The sampling frequency of the signal that contains peaks (in Hz, i.e., samples/second). Defaults to 1000.
• desired_length (int) – If left at the default None, the returned rate will have the same number of elements as peaks. If set to a value larger than the sample at which the last peak occurs in the signal (i.e., peaks[-1]), the returned rate will be interpolated between peaks over desired_length samples. To interpolate the rate over the entire duration of the signal, set desired_length to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs in the signal. Defaults to None.
• interpolation_method (str) – Method used to interpolate the rate between peaks. See signal_interpolate(). 'monotone_cubic' is chosen as the default interpolation method


since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible "overshoots" or "undershoots" in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.
Returns array – A vector containing the rate.
See also: signal_findpeaks(), signal_fixpeaks(), signal_plot()
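The rate computation and monotone interpolation described above can be sketched with SciPy's PchipInterpolator. This is a simplified reconstruction under hypothetical peak locations, not the function's internals:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

sampling_rate = 100
# Hypothetical peak locations, in samples.
peaks = np.array([0, 100, 200, 310, 420])

# Instantaneous rate between successive peaks, in beats per minute.
periods = np.diff(peaks) / sampling_rate  # seconds per beat
rate_at_peaks = 60 / periods              # bpm

# Monotone cubic (PCHIP) interpolation between the known rate values
# prevents physiologically implausible over- and undershoots.
interpolator = PchipInterpolator(peaks[1:], rate_at_peaks, extrapolate=True)
rate = interpolator(np.arange(peaks[-1] + 1))  # one value per sample
```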

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)
>>> info = nk.signal_findpeaks(signal)
>>>
>>> rate = nk.signal_rate(peaks=info["Peaks"], desired_length=len(signal))
>>> fig = nk.signal_plot(rate)
>>> fig

ecg_rsa(ecg_signals, rsp_signals=None, rpeaks=None, sampling_rate=1000, continuous=False)
Respiratory Sinus Arrhythmia (RSA)

Respiratory sinus arrhythmia (RSA), also referred to as 'cardiac coherence', is the naturally occurring variation in heart rate during the breathing cycle. Metrics to quantify it are often used as a measure of parasympathetic nervous system activity. Neurophysiology informs us that the functional output of the myelinated vagus originating from the nucleus ambiguus has a respiratory rhythm. Thus, there would be a temporal relation between the respiratory rhythm being expressed in the firing of these efferent pathways and the functional effect on the heart rate rhythm manifested as RSA. Importantly, several methods exist to quantify RSA:

• The Peak-to-trough (P2T) algorithm measures the statistical range in milliseconds of the heart period oscillation associated with synchronous respiration. Operationally, subtracting the shortest heart period during inspiration from the longest heart period during a breath cycle produces an estimate of RSA during each breath. The peak-to-trough method makes no statistical assumption or correction (e.g., adaptive filtering) regarding other sources of variance in the heart period time series that may confound, distort, or interact with the metric, such as slower periodicities and baseline trend. Although it has been proposed that the P2T method "acts as a time-domain filter dynamically centered at the exact ongoing respiratory frequency" (Grossman, 1992), the method does not transform the time series in any way, as a filtering process would.
Instead, the method uses knowledge of the ongoing respiratory cycle to associate segments of the heart period time series with either inhalation or exhalation (Lewis, 2012).

• The Porges-Bohrer (PB) algorithm assumes that heart period time series reflect the sum of several component time series. Each of these component time series may be mediated by different neural mechanisms and may have different statistical features. The Porges-Bohrer method applies an algorithm that selectively extracts RSA, even when the periodic process representing RSA is superimposed on a complex baseline that may include aperiodic and slow periodic processes. Since the method is designed to remove sources of variance in the heart period time series other than the variance within the frequency band of spontaneous breathing, the method is capable of accurately quantifying RSA even when the signal-to-noise ratio is low.

Parameters
• ecg_signals (DataFrame) – DataFrame obtained from ecg_process(). Should contain columns ECG_Rate and ECG_R_Peaks. Can also take a DataFrame comprising both ECG and RSP signals, generated by bio_process().


• rsp_signals (DataFrame) – DataFrame obtained from rsp_process(). Should contain columns RSP_Phase and RSP_PhaseCompletion. Has no impact when a DataFrame comprising both the ECG and RSP signals is passed as ecg_signals. Defaults to None.
• rpeaks (dict) – The samples at which the R-peaks of the ECG signal occur. Dict returned by ecg_peaks(), ecg_process(), or bio_process(). Defaults to None.
• sampling_rate (int) – The sampling frequency of the signals (in Hz, i.e., samples/second).
• continuous (bool) – If False, will return RSA properties computed from the data (one value per index). If True, will return continuous estimations of RSA of the same length as the signal. See below for more details.
Returns rsa (dict) – A dictionary containing the RSA features, which includes:
• "RSA_P2T_Values": the estimate of RSA during each breath cycle, produced by subtracting the shortest heart period (or RR interval) from the longest heart period, in ms.
• "RSA_P2T_Mean": the mean peak-to-trough across all cycles, in ms.
• "RSA_P2T_Mean_log": the logarithm of the mean of RSA estimates.
• "RSA_P2T_SD": the standard deviation of all RSA estimates.
• "RSA_P2T_NoRSA": the number of breath cycles from which RSA could not be calculated.
• "RSA_PorgesBohrer": the Porges-Bohrer estimate of RSA, optimal when the signal-to-noise ratio is low, in ln(ms^2).
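For illustration, the P2T estimate can be computed by hand on toy numbers. The heart periods below are hypothetical; the actual function derives the inspiration/expiration segments from the RSP signal:

```python
# Hypothetical heart periods (RR intervals, in ms) for two breath cycles,
# split into their inspiration and expiration segments.
breaths = [
    {"inspiration": [820, 790, 770], "expiration": [800, 850, 880]},
    {"inspiration": [810, 780], "expiration": [840, 870]},
]

# Peak-to-trough RSA per breath: the longest heart period in the breath
# cycle minus the shortest heart period during inspiration.
rsa_p2t = [
    max(breath["inspiration"] + breath["expiration"]) - min(breath["inspiration"])
    for breath in breaths
]
print(rsa_p2t)  # [110, 90]
```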

Example

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data
>>> ecg_signals, info = nk.ecg_process(data["ECG"], sampling_rate=100)
>>> rsp_signals, _ = nk.rsp_process(data["RSP"], sampling_rate=100)
>>>
>>> # Get RSA features
>>> nk.ecg_rsa(ecg_signals, rsp_signals, info, sampling_rate=100, continuous=False)
{'RSA_P2T_Mean': ..., 'RSA_P2T_Mean_log': ..., 'RSA_P2T_SD': ..., 'RSA_P2T_NoRSA': ..., 'RSA_PorgesBohrer': ...}
>>>
>>> # Get RSA as a continuous signal
>>> rsa = nk.ecg_rsa(ecg_signals, rsp_signals, info, sampling_rate=100, continuous=True)
>>> rsa
       RSA_P2T
0         0.09
1         0.09
2         0.09
...        ...

[15000 rows x 1 columns]
>>> nk.signal_plot([ecg_signals["ECG_Rate"], rsp_signals["RSP_Rate"], rsa], standardize=True)

References

• Servant, D., Logier, R., Mouster, Y., & Goudemand, M. (2009). La variabilité de la fréquence cardiaque. Intérêts en psychiatrie. L'Encéphale, 35(5), 423-428. doi:10.1016/j.encep.2008.06.016
• Lewis, G. F., Furman, S. A., McCool, M. F., & Porges, S. W. (2012). Statistical strategies to quantify respiratory sinus arrhythmia: Are commonly used metrics equivalent? Biological Psychology, 89(2), 349-364.
• Zohar, A. H., Cloninger, C. R., & McCraty, R. (2013). Personality and heart rate variability: exploring pathways from personality to cardiac coherence and health. Open Journal of Social Sciences, 1(06), 32.

ecg_rsp(ecg_rate, sampling_rate=1000, method='vangent2019')
Extract ECG-Derived Respiration (EDR).

This implementation is far from complete, as the information in the related papers prevents a full understanding of the procedure. Help is required!

Parameters
• ecg_rate (array) – The heart rate signal as obtained via ecg_rate().
• sampling_rate (int) – The sampling frequency of the signal that contains the R-peaks (in Hz, i.e., samples/second). Defaults to 1000.
• method (str) – Can be one of 'vangent2019' (default), 'soni2019', 'charlton2016' or 'sarkar2015'.
Returns array – A NumPy array containing the derived respiratory signal.
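A common ingredient of these EDR methods is band-pass filtering the heart rate within a plausible respiratory band. The sketch below assumes a 0.1-0.4 Hz band; the published methods differ in their exact cutoffs and pre-processing:

```python
import numpy as np
from scipy.signal import butter, filtfilt

sampling_rate = 100
t = np.arange(0, 60, 1 / sampling_rate)

# Hypothetical heart-rate signal: a slow drift plus a 0.25 Hz respiratory
# modulation (the component an EDR method tries to recover).
ecg_rate = 70 + 2 * t / 60 + 1.5 * np.sin(2 * np.pi * 0.25 * t)

# Band-pass the rate in an assumed respiratory band of 0.1-0.4 Hz.
b, a = butter(2, [0.1, 0.4], btype="bandpass", fs=sampling_rate)
edr = filtfilt(b, a, ecg_rate)

# The dominant frequency of the recovered signal matches the breathing rate.
freqs = np.fft.rfftfreq(edr.size, 1 / sampling_rate)
dominant = freqs[np.abs(np.fft.rfft(edr)).argmax()]
```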

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> # Get heart rate
>>> data = nk.data("bio_eventrelated_100hz")
>>> rpeaks, info = nk.ecg_peaks(data["ECG"], sampling_rate=100)
>>> ecg_rate = nk.signal_rate(rpeaks, sampling_rate=100, desired_length=len(rpeaks))
>>>
>>> # Get ECG Derived Respiration (EDR)
>>> edr = nk.ecg_rsp(ecg_rate, sampling_rate=100)
>>> nk.standardize(pd.DataFrame({"EDR": edr, "RSP": data["RSP"]})).plot()
>>>
>>> # Method comparison (the closer to 0 the better)
>>> nk.standardize(pd.DataFrame({"True RSP": data["RSP"],
...                              "vangent2019": nk.ecg_rsp(ecg_rate, sampling_rate=100, method="vangent2019"),
...                              "sarkar2015": nk.ecg_rsp(ecg_rate, sampling_rate=100, method="sarkar2015"),
...                              "charlton2016": nk.ecg_rsp(ecg_rate, sampling_rate=100, method="charlton2016"),
...                              "soni2019": nk.ecg_rsp(ecg_rate, sampling_rate=100, method="soni2019")})).plot()

References

• van Gent, P., Farah, H., van Nes, N., & van Arem, B. (2019). HeartPy: A novel heart rate algorithm for the analysis of noisy signals. Transportation Research Part F: Traffic Psychology and Behaviour, 66, 368-378.
• Sarkar, S., Bhattacherjee, S., & Pal, S. (2015). Extraction of respiration signal from ECG for respiratory rate estimation.
• Charlton, P. H., Bonnici, T., Tarassenko, L., Clifton, D. A., Beale, R., & Watkinson, P. J. (2016). An assessment of algorithms to estimate respiratory rate from the electrocardiogram and photoplethysmogram. Physiological Measurement, 37(4), 610.
• Soni, R., & Muniyandi, M. (2019). Breath rate variability: a novel measure to study the meditation effects. International Journal of Yoga, 12(1), 45.

ecg_segment(ecg_cleaned, rpeaks=None, sampling_rate=1000, show=False)
Segment an ECG signal into single heartbeats.

Parameters
• ecg_cleaned (Union[list, np.array, pd.Series]) – The cleaned ECG channel as returned by ecg_clean().
• rpeaks (dict) – The samples at which the R-peaks occur. Dict returned by ecg_peaks(). Defaults to None.
• sampling_rate (int) – The sampling frequency of ecg_cleaned (in Hz, i.e., samples/second). Defaults to 1000.
• show (bool) – If True, will return a plot of heartbeats. Defaults to False.
Returns dict – A dict containing DataFrames for all segmented heartbeats.
See also: ecg_clean(), ecg_plot()
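The segmentation can be sketched as cutting a fixed window around each R-peak. The peak locations and the window below are hypothetical; the library derives its window from the estimated heart rate:

```python
import numpy as np

sampling_rate = 1000
rng = np.random.default_rng(0)
ecg_cleaned = rng.normal(size=10 * sampling_rate)  # stand-in for a cleaned ECG
rpeaks = np.array([1200, 2100, 3050, 3900])        # hypothetical R-peak samples

# Cut a window from 400 ms before to 600 ms after each R-peak, and store
# the segments in a dict keyed by heartbeat number, as ecg_segment() does.
before, after = int(0.4 * sampling_rate), int(0.6 * sampling_rate)
heartbeats = {
    str(i + 1): ecg_cleaned[r - before:r + after]
    for i, r in enumerate(rpeaks)
}
```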

Examples

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=15, sampling_rate=1000, heart_rate=80)
>>> ecg_cleaned = nk.ecg_clean(ecg, sampling_rate=1000)
>>> nk.ecg_segment(ecg_cleaned, rpeaks=None, sampling_rate=1000, show=True)
{'1':     Signal  Index Label
...
 '2':     Signal  Index Label
...
 '19':     Signal  Index Label
...}

ecg_simulate(duration=10, length=None, sampling_rate=1000, noise=0.01, heart_rate=70, method='ecgsyn', random_state=None)
Simulate an ECG/EKG signal.

Generate an artificial (synthetic) ECG signal of a given duration and sampling rate using either the ECGSYN dynamical model (McSharry et al., 2003) or a simpler model based on Daubechies wavelets that roughly approximates cardiac cycles.

Parameters
• duration (int) – Desired recording length in seconds.
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).
• length (int) – The desired length of the signal (in samples).
• noise (float) – Noise level (amplitude of the Laplace noise).
• heart_rate (int) – Desired simulated heart rate (in beats per minute).
• method (str) – The model used to generate the signal. Can be 'simple' for a simulation based on Daubechies wavelets that roughly approximates a single cardiac cycle. If 'ecgsyn' (default), will use the advanced model described in McSharry et al. (2003).
• random_state (int) – Seed for the random number generator.
Returns array – Vector containing the ECG signal.

Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> ecg1 = nk.ecg_simulate(duration=10, method="simple")
>>> ecg2 = nk.ecg_simulate(duration=10, method="ecgsyn")
>>> pd.DataFrame({"ECG_Simple": ecg1,
...               "ECG_Complex": ecg2}).plot(subplots=True)

See also: rsp_simulate(), eda_simulate(), ppg_simulate(), emg_simulate()


References

• McSharry, P. E., Clifford, G. D., Tarassenko, L., & Smith, L. A. (2003). A dynamical model for generating synthetic electrocardiogram signals. IEEE Transactions on Biomedical Engineering, 50(3), 289-294.
• https://github.com/diarmaidocualain/ecg_simulation

7.2 PPG

Submodule for NeuroKit.

ppg_clean(ppg_signal, sampling_rate=1000, method='elgendi')
Clean a photoplethysmogram (PPG) signal.

Prepare a raw PPG signal for systolic peak detection.

Parameters
• ppg_signal (Union[list, np.array, pd.Series]) – The raw PPG channel.
• sampling_rate (int) – The sampling frequency of the PPG (in Hz, i.e., samples/second). The default is 1000.
• method (str) – The processing pipeline to apply. Can be one of "elgendi". The default is "elgendi".
Returns clean (array) – A vector containing the cleaned PPG.
See also: ppg_simulate(), ppg_findpeaks()

Examples

>>> import neurokit2 as nk
>>> import matplotlib.pyplot as plt
>>>
>>> ppg = nk.ppg_simulate(heart_rate=75, duration=30)
>>> ppg_clean = nk.ppg_clean(ppg)
>>>
>>> plt.plot(ppg, label="raw PPG")
>>> plt.plot(ppg_clean, label="clean PPG")
>>> plt.legend()

ppg_findpeaks(ppg_cleaned, sampling_rate=1000, method='elgendi', show=False)
Find systolic peaks in a photoplethysmogram (PPG) signal.

Parameters
• ppg_cleaned (Union[list, np.array, pd.Series]) – The cleaned PPG channel as returned by ppg_clean().
• sampling_rate (int) – The sampling frequency of the PPG (in Hz, i.e., samples/second). The default is 1000.
• method (str) – The processing pipeline to apply. Can be one of "elgendi". The default is "elgendi".


• show (bool) – If True, returns a plot of the thresholds used during peak detection. Useful for debugging. The default is False. Returns info (dict) – A dictionary containing additional information, in this case the samples at which systolic peaks occur, accessible with the key “PPG_Peaks”. See also: ppg_simulate(), ppg_clean()

Examples

>>> import neurokit2 as nk
>>> import matplotlib.pyplot as plt
>>>
>>> ppg = nk.ppg_simulate(heart_rate=75, duration=30)
>>> ppg_clean = nk.ppg_clean(ppg)
>>> info = nk.ppg_findpeaks(ppg_clean)
>>> peaks = info["PPG_Peaks"]
>>>
>>> plt.plot(ppg, label="raw PPG")
>>> plt.plot(ppg_clean, label="clean PPG")
>>> plt.scatter(peaks, ppg[peaks], c="r", label="systolic peaks")
>>> plt.legend()

References

• Elgendi, M., Norton, I., Brearley, M., Abbott, D., & Schuurmans, D. (2013). Systolic Peak Detection in Acceleration Photoplethysmograms Measured from Emergency Responders in Tropical Conditions. PLoS ONE, 8(10), e76585. doi:10.1371/journal.pone.0076585

ppg_plot(ppg_signals, sampling_rate=None)
Visualize photoplethysmogram (PPG) data.

Parameters
• ppg_signals (DataFrame) – DataFrame obtained from ppg_process().
• sampling_rate (int) – The sampling frequency of the PPG (in Hz, i.e., samples/second). Needs to be supplied if the data should be plotted over time in seconds; otherwise the data is plotted over samples. Defaults to None.
Returns fig – Figure representing a plot of the processed PPG signals.

Examples

>>> import neurokit2 as nk
>>>
>>> # Simulate data
>>> ppg = nk.ppg_simulate(duration=10, sampling_rate=1000, heart_rate=70)
>>>
>>> # Process signal
>>> signals, info = nk.ppg_process(ppg, sampling_rate=1000)
>>>
>>> # Plot
>>> nk.ppg_plot(signals)

See also: ppg_process()

ppg_process(ppg_signal, sampling_rate=1000, **kwargs)
Process a photoplethysmogram (PPG) signal.

Convenience function that automatically processes a photoplethysmogram signal.

Parameters
• ppg_signal (Union[list, np.array, pd.Series]) – The raw PPG channel.
• sampling_rate (int) – The sampling frequency of ppg_signal (in Hz, i.e., samples/second).
Returns
• signals (DataFrame) – A DataFrame of the same length as ppg_signal containing the following columns:
  – "PPG_Raw": the raw signal.
  – "PPG_Clean": the cleaned signal.
  – "PPG_Rate": the heart rate as measured based on PPG peaks.
  – "PPG_Peaks": the PPG peaks marked as "1" in a list of zeros.
• info (dict) – A dictionary containing the information of peaks.
See also: ppg_clean(), ppg_findpeaks()

Examples

>>> import neurokit2 as nk
>>>
>>> ppg = nk.ppg_simulate(duration=10, sampling_rate=1000, heart_rate=70)
>>> signals, info = nk.ppg_process(ppg, sampling_rate=1000)
>>> fig = nk.ppg_plot(signals)
>>> fig

ppg_rate(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')
Calculate signal rate from a series of peaks.

This function can also be called via ecg_rate(), ppg_rate() or rsp_rate() (aliases provided for consistency).

Parameters
• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with signal_findpeaks(). If a DataFrame is passed in, it is assumed it is of the same length as the input signal in which occurrences of R-peaks are marked as "1", with such containers obtained with e.g., ecg_findpeaks() or rsp_findpeaks().
• sampling_rate (int) – The sampling frequency of the signal that contains peaks (in Hz, i.e., samples/second). Defaults to 1000.
• desired_length (int) – If left at the default None, the returned rate will have the same number of elements as peaks. If set to a value larger than the sample at which the last


peak occurs in the signal (i.e., peaks[-1]), the returned rate will be interpolated between peaks over desired_length samples. To interpolate the rate over the entire duration of the signal, set desired_length to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs in the signal. Defaults to None.
• interpolation_method (str) – Method used to interpolate the rate between peaks. See signal_interpolate(). 'monotone_cubic' is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible "overshoots" or "undershoots" in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.
Returns array – A vector containing the rate.
See also: signal_findpeaks(), signal_fixpeaks(), signal_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)
>>> info = nk.signal_findpeaks(signal)
>>>
>>> rate = nk.signal_rate(peaks=info["Peaks"], desired_length=len(signal))
>>> fig = nk.signal_plot(rate)
>>> fig

ppg_simulate(duration=120, sampling_rate=1000, heart_rate=70, frequency_modulation=0.3, ibi_randomness=0.1, drift=0, motion_amplitude=0.1, powerline_amplitude=0.01, burst_number=0, burst_amplitude=1, random_state=None, show=False)
Simulate a photoplethysmogram (PPG) signal.

Phenomenological approximation of PPG. The PPG wave is described with four landmarks: wave onset, location of the systolic peak, location of the dicrotic notch and location of the diastolic peaks. These landmarks are defined as x and y coordinates (in a time series). These coordinates are then interpolated at the desired sampling rate to obtain the PPG signal.

Parameters
• duration (int) – Desired recording length in seconds. The default is 120.
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second). The default is 1000.
• heart_rate (int) – Desired simulated heart rate (in beats per minute). The default is 70.
• frequency_modulation (float) – Float between 0 and 1. Determines how pronounced respiratory sinus arrhythmia (RSA) is (0 corresponds to absence of RSA). The default is 0.3.
• ibi_randomness (float) – Float between 0 and 1. Determines how much random noise there is in the duration of each PPG wave (0 corresponds to absence of variation). The default is 0.1.
• drift (float) – Float between 0 and 1. Determines how pronounced the baseline drift (.05 Hz) is (0 corresponds to absence of baseline drift). The default is 0.


• motion_amplitude (float) – Float between 0 and 1. Determines how pronounced the motion artifact (0.5 Hz) is (0 corresponds to absence of motion artifact). The default is 0.1.
• powerline_amplitude (float) – Float between 0 and 1. Determines how pronounced the powerline artifact (50 Hz) is (0 corresponds to absence of powerline artifact). Note that powerline_amplitude > 0 is only possible if sampling_rate is >= 500. The default is 0.01.
• burst_amplitude (float) – Float between 0 and 1. Determines how pronounced high-frequency burst artifacts are (0 corresponds to absence of bursts). The default is 1.
• burst_number (int) – Determines how many high-frequency burst artifacts occur. The default is 0.
• show (bool) – If True, returns a plot of the landmarks and interpolated PPG. Useful for debugging.
• random_state (int) – Seed for the random number generator. Keep it fixed for reproducible results.
Returns ppg (array) – A vector containing the PPG.
See also: ecg_simulate(), rsp_simulate(), eda_simulate(), emg_simulate()
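The landmark-and-interpolate idea described above can be sketched as follows, with hypothetical landmark coordinates for a single wave (the actual simulator jitters these landmarks per beat and adds the artifacts listed above):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

sampling_rate = 100
# Hypothetical x/y coordinates of one PPG wave: onset, systolic peak,
# dicrotic notch, diastolic peak, and the next wave onset (times in seconds).
x = np.array([0.0, 0.15, 0.40, 0.55, 0.80])
y = np.array([0.0, 1.00, 0.35, 0.46, 0.00])

# Interpolate the sparse landmarks at the desired sampling rate to obtain
# the continuous waveform.
t = np.arange(80) / sampling_rate  # one wave at 100 Hz
ppg_wave = PchipInterpolator(x, y)(t)
```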

Examples

>>> import neurokit2 as nk
>>>
>>> ppg = nk.ppg_simulate(duration=40, sampling_rate=500, heart_rate=75, random_state=42)

7.3 HRV

hrv(peaks, sampling_rate=1000, show=False)
Computes indices of Heart Rate Variability (HRV).

Computes HRV indices in the time-, frequency-, and nonlinear domain. Note that a minimum duration of the signal containing the peaks is recommended for some HRV indices to be meaningful. For instance, 1, 2 and 5 minutes of high-quality signal are the recommended minima for HF, LF and LF/HF, respectively. See references for details.

Parameters
• peaks (dict) – Samples at which cardiac extrema (i.e., R-peaks, systolic peaks) occur. Dictionary returned by ecg_findpeaks, ecg_peaks, ppg_findpeaks, or ppg_peaks.
• sampling_rate (int, optional) – Sampling rate (Hz) of the continuous cardiac signal in which the peaks occur. Should be at least twice as high as the highest frequency in vhf. By default 1000.
• show (bool, optional) – If True, returns the plots that are generated for each of the domains.
Returns DataFrame – Contains HRV metrics from three domains:
• frequency (see hrv_frequency)
• time (see hrv_time)
• non-linear (see hrv_nonlinear)


See also: ecg_peaks(), ppg_peaks(), hrv_time(), hrv_frequency(), hrv_nonlinear()

Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Find peaks
>>> peaks, info = nk.ecg_peaks(data["ECG"], sampling_rate=100)
>>>
>>> # Compute HRV indices
>>> hrv_indices = nk.hrv(peaks, sampling_rate=100, show=True)
>>> hrv_indices

References

• Stein, P. K. (2002). Assessing heart rate variability from real-world Holter reports. Cardiac Electrophysiology Review, 6(3), 239-244.
• Shaffer, F., & Ginsberg, J. P. (2017). An overview of heart rate variability metrics and norms. Frontiers in Public Health, 5, 258.

hrv_frequency(peaks, sampling_rate=1000, ulf=(0, 0.0033), vlf=(0.0033, 0.04), lf=(0.04, 0.15), hf=(0.15, 0.4), vhf=(0.4, 0.5), psd_method='welch', show=False, silent=True, **kwargs)
Computes frequency-domain indices of Heart Rate Variability (HRV).

Note that a minimum duration of the signal containing the peaks is recommended for some HRV indices to be meaningful. For instance, 1, 2 and 5 minutes of high-quality signal are the recommended minima for HF, LF and LF/HF, respectively. See references for details.

Parameters
• peaks (dict) – Samples at which cardiac extrema (i.e., R-peaks, systolic peaks) occur. Dictionary returned by ecg_findpeaks, ecg_peaks, ppg_findpeaks, or ppg_peaks.
• sampling_rate (int, optional) – Sampling rate (Hz) of the continuous cardiac signal in which the peaks occur. Should be at least twice as high as the highest frequency in vhf. By default 1000.
• ulf (tuple, optional) – Lower and upper limit of the ultra-low frequency band. By default (0, 0.0033).
• vlf (tuple, optional) – Lower and upper limit of the very-low frequency band. By default (0.0033, 0.04).
• lf (tuple, optional) – Lower and upper limit of the low frequency band. By default (0.04, 0.15).
• hf (tuple, optional) – Lower and upper limit of the high frequency band. By default (0.15, 0.4).
• vhf (tuple, optional) – Lower and upper limit of the very-high frequency band. By default (0.4, 0.5).


• psd_method (str) – Method used for spectral density estimation. For details see signal.signal_power. By default "welch".
• silent (bool) – If False, warnings will be printed. Defaults to True.
• show (bool) – If True, will plot the power in the different frequency bands.
• **kwargs (optional) – Other arguments.
Returns DataFrame – Contains frequency-domain HRV metrics:
• ULF: The spectral power density pertaining to the ultra-low frequency band, i.e., 0 to .0033 Hz by default.
• VLF: The spectral power density pertaining to the very-low frequency band, i.e., .0033 to .04 Hz by default.
• LF: The spectral power density pertaining to the low frequency band, i.e., .04 to .15 Hz by default.
• HF: The spectral power density pertaining to the high frequency band, i.e., .15 to .4 Hz by default.
• VHF: The variability, or signal power, in the very-high frequency band, i.e., .4 to .5 Hz by default.
• LFn: The normalized low frequency, obtained by dividing the low frequency power by the total power.
• HFn: The normalized high frequency, obtained by dividing the high frequency power by the total power.
• LnHF: The log-transformed HF.
See also: ecg_peaks(), ppg_peaks(), hrv_summary(), hrv_time(), hrv_nonlinear()
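The band powers and normalized indices listed above can be sketched on a toy RR series. The sketch assumes the RR intervals have already been evenly resampled (here at 4 Hz, a common choice); the library's own resampling and PSD settings may differ:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 4  # assumed resampling rate of the RR series, in Hz

# Hypothetical RR series (ms): a 0.1 Hz oscillation (inside the default
# LF band) plus white noise, over 5 minutes.
t = np.arange(0, 300, 1 / fs)
rr = 800 + 40 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 5, t.size)

# Welch power spectral density of the mean-centered series.
freqs, psd = welch(rr - rr.mean(), fs=fs, nperseg=512)
df = freqs[1] - freqs[0]

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum() * df

lf = band_power(0.04, 0.15)          # default LF band
hf = band_power(0.15, 0.40)          # default HF band
total = band_power(0.0, 0.50)
lf_n, hf_n = lf / total, hf / total  # normalized LF and HF
```

With the 0.1 Hz oscillation inside the LF band, lf dominates hf, and the normalized values sum to less than 1 because the total also includes the other bands.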

Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Find peaks
>>> peaks, info = nk.ecg_peaks(data["ECG"], sampling_rate=100)
>>>
>>> # Compute HRV indices
>>> hrv = nk.hrv_frequency(peaks, sampling_rate=100, show=True)

References

• Stein, P. K. (2002). Assessing heart rate variability from real-world Holter reports. Cardiac Electrophysiology Review, 6(3), 239-244.
• Shaffer, F., & Ginsberg, J. P. (2017). An overview of heart rate variability metrics and norms. Frontiers in Public Health, 5, 258.

hrv_nonlinear(peaks, sampling_rate=1000, show=False)
Computes nonlinear indices of Heart Rate Variability (HRV).

See references for details.

Parameters • peaks (dict) – Samples at which cardiac extrema (i.e., R-peaks, systolic peaks) occur. Dictionary returned by ecg_findpeaks, ecg_peaks, ppg_findpeaks, or ppg_peaks. • sampling_rate (int, optional) – Sampling rate (Hz) of the continuous cardiac signal in which the peaks occur. Should be at least twice as high as the highest frequency in vhf. By default 1000.


• show (bool, optional) – If True, will return a Poincaré plot, a scattergram that plots each RR interval against the next successive one. The ellipse centers around the average RR interval. By default False.
Returns DataFrame – Contains non-linear HRV metrics:
• Characteristics of the Poincaré Plot Geometry:
  – SD1: SD1 is a measure of the spread of RR intervals on the Poincaré plot perpendicular to the line of identity. It is an index of short-term RR interval fluctuations, i.e., beat-to-beat variability. It is equivalent (although on another scale) to RMSSD, and therefore it is redundant to report correlations with both (Ciccone, 2017).
  – SD2: SD2 is a measure of the spread of RR intervals on the Poincaré plot along the line of identity. It is an index of long-term RR interval fluctuations.
  – SD1SD2: the ratio between short- and long-term fluctuations of the RR intervals (SD1 divided by SD2).
  – S: Area of the ellipse described by SD1 and SD2 (pi * SD1 * SD2). It is proportional to SD1SD2.
  – CSI: The Cardiac Sympathetic Index (Toichi, 1997), calculated by dividing the longitudinal variability of the Poincaré plot (4*SD2) by its transverse variability (4*SD1).
  – CVI: The Cardiac Vagal Index (Toichi, 1997), equal to the logarithm of the product of longitudinal (4*SD2) and transverse variability (4*SD1).
  – CSI_Modified: The modified CSI (Jeppesen, 2014) obtained by dividing the square of the longitudinal variability by its transverse variability.
• Indices of Heart Rate Asymmetry (HRA), i.e., asymmetry of the Poincaré plot (Yan, 2017):
  – GI: Guzik's Index, defined as the distance of points above the line of identity (LI) to LI divided by the distance of all points in the Poincaré plot to LI, except those located on LI.
  – SI: Slope Index, defined as the phase angle of points above LI divided by the phase angle of all points in the Poincaré plot, except those located on LI.
  – AI: Area Index, defined as the cumulative area of the sectors corresponding to the points located above LI divided by the cumulative area of sectors corresponding to all points in the Poincaré plot, except those located on LI.

7.3. HRV 131 NeuroKit2, Release 0.0.39

–PI : Porta’s Index, defined as the number of points below LI divided by the total number of points in Poincaré plot except those that are located on LI. – SD1d and SD1a: short-term variance of contributions of decelerations (prolongations of RR intervals) and accelerations (shortenings of RR intervals), respectively (Piskorski, 2011). – C1d and C1a: the contributions of heart rate decelerations and accelerations to short-term HRV, respectively (Piskorski, 2011). – SD2d and SD2a: long-term variance of contributions of decelerations (prolongations of RR intervals) and accelerations (shortenings of RR intervals), respectively (Piskorski, 2011). – C2d and C2a: the contributions of heart rate decelerations and accelerations to long-term HRV, respectively (Piskorski, 2011). – SDNNd and SDNNa: total variance of contributions of decelerations (prolongations of RR intervals) and accelerations (shortenings of RR intervals), respectively (Piskorski, 2011). – Cd and Ca: the total contributions of heart rate decelerations and accelerations to HRV. • Indices of Heart Rate Fragmentation (Costa, 2017): – PIP: Percentage of inflection points of the RR intervals series. – IALS: Inverse of the average length of the acceleration/deceleration segments. – PSS: Percentage of short segments. – PAS: IPercentage of NN intervals in alternation segments. • Indices of Complexity: – ApEn: The approximate entropy measure of HRV, calculated by en- tropy_approximate(). – SampEn: The sample entropy measure of HRV, calculated by entropy_sample().
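The Poincaré descriptors listed above can be sketched directly from a series of RR intervals, using the variance identities that link SD1/SD2 to the successive differences. This is a hand-rolled illustration on hypothetical values, not the library's implementation; the `log10` base for CVI is an assumption.

```python
import numpy as np

# Hypothetical RR intervals in milliseconds (illustration only).
rr = np.array([800, 810, 790, 820, 805, 795, 815, 800], dtype=float)
diff_rr = np.diff(rr)

# SD1/SD2 via the identities linking the Poincaré ellipse to SDSD and SDNN.
sd1 = np.sqrt(0.5 * np.var(diff_rr, ddof=1))
sd2 = np.sqrt(2 * np.var(rr, ddof=1) - 0.5 * np.var(diff_rr, ddof=1))

s = np.pi * sd1 * sd2                  # area of the Poincaré ellipse
csi = (4 * sd2) / (4 * sd1)            # Cardiac Sympathetic Index (Toichi, 1997)
cvi = np.log10((4 * sd1) * (4 * sd2))  # Cardiac Vagal Index (log10 assumed)
```

Note how CSI reduces to SD2/SD1: the 4x scaling of the ellipse axes cancels in the ratio.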

See also: ecg_peaks(), ppg_peaks(), hrv_frequency(), hrv_time(), hrv_summary()

Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Find peaks
>>> peaks, info = nk.ecg_peaks(data["ECG"], sampling_rate=100)
>>>
>>> # Compute HRV indices
>>> hrv = nk.hrv_nonlinear(peaks, sampling_rate=100, show=True)
>>> hrv

References

• Yan, C., Li, P., Ji, L., Yao, L., Karmakar, C., & Liu, C. (2017). Area asymmetry of heart rate variability signal. Biomedical Engineering Online, 16(1), 112.
• Ciccone, A. B., Siedlik, J. A., Wecht, J. M., Deckert, J. A., Nguyen, N. D., & Weir, J. P. (2017). Reminder: RMSSD and SD1 are identical heart rate variability metrics. Muscle & Nerve, 56(4), 674-678.
• Shaffer, F., & Ginsberg, J. P. (2017). An overview of heart rate variability metrics and norms. Frontiers in Public Health, 5, 258.
• Costa, M. D., Davis, R. B., & Goldberger, A. L. (2017). Heart rate fragmentation: a new approach to the analysis of cardiac interbeat interval dynamics. Frontiers in Physiology, 8, 255.
• Jeppesen, J., Beniczky, S., Johansen, P., Sidenius, P., & Fuglsang-Frederiksen, A. (2014). Using Lorenz plot and Cardiac Sympathetic Index of heart rate variability for detecting seizures for patients with epilepsy. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 4563-4566). IEEE.
• Piskorski, J., & Guzik, P. (2011). Asymmetric properties of long-term and total heart rate variability. Medical & Biological Engineering & Computing, 49(11), 1289-1297.
• Stein, P. K. (2002). Assessing heart rate variability from real-world Holter reports. Cardiac Electrophysiology Review, 6(3), 239-244.
• Brennan, M., et al. (2001). Do existing measures of Poincaré plot geometry reflect nonlinear features of heart rate variability? IEEE Transactions on Biomedical Engineering, 48(11), 1342-1347.
• Toichi, M., Sugiura, T., Murai, T., & Sengoku, A. (1997). A new method of assessing cardiac autonomic function and its comparison with spectral analysis and coefficient of variation of R-R interval. Journal of the Autonomic Nervous System, 62(1-2), 79-84.

hrv_time(peaks, sampling_rate=1000, show=False)
Computes time-domain indices of Heart Rate Variability (HRV). See references for details.

Parameters • peaks (dict) – Samples at which cardiac extrema (i.e., R-peaks, systolic peaks) occur. Dictionary returned by ecg_findpeaks, ecg_peaks, ppg_findpeaks, or ppg_peaks. • sampling_rate (int, optional) – Sampling rate (Hz) of the continuous cardiac signal in which the peaks occur. Should be at least twice as high as the highest frequency in vhf. By default 1000. • show (bool) – If True, will plot the distribution of R-R intervals.


Returns DataFrame – Contains time-domain HRV metrics:
• RMSSD: the square root of the mean of the squared successive differences between adjacent RR intervals. It is equivalent (although on another scale) to SD1, and therefore it is redundant to report correlations with both (Ciccone, 2017).
• MeanNN: the mean of the RR intervals.
• SDNN: the standard deviation of the RR intervals.
• SDSD: the standard deviation of the successive differences between RR intervals.
• CVNN: the standard deviation of the RR intervals (SDNN) divided by the mean of the RR intervals (MeanNN).
• CVSD: the root mean square of successive differences (RMSSD) divided by the mean of the RR intervals (MeanNN).
• MedianNN: the median of the RR intervals.
• MadNN: the median absolute deviation of the RR intervals.
• HCVNN: the median absolute deviation of the RR intervals (MadNN) divided by the median of the RR intervals (MedianNN).
• IQRNN: the interquartile range (IQR) of the RR intervals.
• pNN50: the proportion of absolute differences between successive RR intervals greater than 50 ms, out of the total number of successive differences.
• pNN20: the proportion of absolute differences between successive RR intervals greater than 20 ms, out of the total number of successive differences.
• TINN: a geometrical parameter of HRV: the baseline width of the RR interval distribution obtained by triangular interpolation, where the error of least squares determines the triangle. It is an approximation of the RR interval distribution.
• HTI: the HRV triangular index, measuring the total number of RR intervals divided by the height of the RR interval histogram.
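Most of the time-domain indices above reduce to NumPy one-liners. A minimal sketch on hypothetical RR intervals (illustration only, not the library's code):

```python
import numpy as np

# Hypothetical RR intervals in milliseconds (illustration only).
rr = np.array([812, 786, 841, 795, 830, 808, 778, 852], dtype=float)
diff_rr = np.diff(rr)

mean_nn = np.mean(rr)                              # MeanNN
sdnn = np.std(rr, ddof=1)                          # SDNN
rmssd = np.sqrt(np.mean(diff_rr ** 2))             # RMSSD
sdsd = np.std(diff_rr, ddof=1)                     # SDSD
cvnn = sdnn / mean_nn                              # CVNN
pnn50 = 100 * np.sum(np.abs(diff_rr) > 50) / diff_rr.size  # pNN50 (%)
```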

See also: ecg_peaks(), ppg_peaks(), hrv_frequency(), hrv_summary(), hrv_nonlinear()

Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Find peaks
>>> peaks, info = nk.ecg_peaks(data["ECG"], sampling_rate=100)
>>>
>>> # Compute HRV indices
>>> hrv = nk.hrv_time(peaks, sampling_rate=100, show=True)

References

• Ciccone, A. B., Siedlik, J. A., Wecht, J. M., Deckert, J. A., Nguyen, N. D., & Weir, J. P. (2017). Reminder: RMSSD and SD1 are identical heart rate variability metrics. Muscle & Nerve, 56(4), 674-678.
• Stein, P. K. (2002). Assessing heart rate variability from real-world Holter reports. Cardiac Electrophysiology Review, 6(3), 239-244.
• Shaffer, F., & Ginsberg, J. P. (2017). An overview of heart rate variability metrics and norms. Frontiers in Public Health, 5, 258.


7.4 RSP

Submodule for NeuroKit.

rsp_amplitude(rsp_cleaned, peaks, troughs=None, interpolation_method='monotone_cubic')
Compute respiratory amplitude.

Compute respiratory amplitude given the raw respiration signal and its extrema.

Parameters
• rsp_cleaned (Union[list, np.array, pd.Series]) – The cleaned respiration channel as returned by rsp_clean().
• peaks (list or array or DataFrame or Series or dict) – The samples at which the inhalation peaks occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with rsp_findpeaks().
• troughs (list or array or DataFrame or Series or dict) – The samples at which the inhalation troughs occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with rsp_findpeaks().
• interpolation_method (str) – Method used to interpolate the amplitude between peaks. See signal_interpolate(). ‘monotone_cubic’ is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible “overshoots” or “undershoots” in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.

Returns array – A vector containing the respiratory amplitude.

See also: rsp_clean(), rsp_peaks(), signal_rate(), rsp_process(), rsp_plot()
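Monotone cubic interpolation as described above corresponds to PCHIP interpolation; a minimal sketch using SciPy's `PchipInterpolator` on hypothetical peak amplitudes (the library's exact call chain through signal_interpolate() may differ):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical peak samples and the amplitude measured at each peak.
peak_samples = np.array([100, 1100, 2150, 3100])
peak_amplitudes = np.array([0.8, 1.1, 0.9, 1.0])

# Interpolate the amplitude over the whole signal length. PCHIP is monotone
# between knots, so values never overshoot the neighbouring amplitudes.
x = np.arange(3101)
amplitude = PchipInterpolator(peak_samples, peak_amplitudes)(x)
```

With a cubic spline instead, the curve could dip below 0.8 or rise above 1.1 between the first two peaks, which is the “overshoot” the default avoids.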

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> rsp = nk.rsp_simulate(duration=90, respiratory_rate=15)
>>> cleaned = nk.rsp_clean(rsp, sampling_rate=1000)
>>> peak_signal, info = nk.rsp_peaks(cleaned)
>>>
>>> amplitude = nk.rsp_amplitude(cleaned, peaks=info)
>>> fig = nk.signal_plot(pd.DataFrame({"RSP": rsp, "Amplitude": amplitude}), subplots=True)
>>> fig

rsp_analyze(data, sampling_rate=1000, method='auto')
Performs RSP analysis on either epochs (event-related analysis) or on longer periods of data, such as resting-state data.

Parameters
• data (dict or DataFrame) – A dictionary of epochs, containing one DataFrame per epoch, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df(). Can also take a DataFrame of processed signals from a longer period of data, typically generated by rsp_process() or bio_process(). Can also take a dict containing sets of separate periods of data.


• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second). Defaults to 1000 Hz.
• method (str) – Can be one of ‘event-related’ for event-related analysis on epochs, or ‘interval-related’ for analysis on longer periods of data. Defaults to ‘auto’, where the right method is chosen based on the mean duration of the data (‘event-related’ for durations under 10s).

Returns DataFrame – A dataframe containing the analyzed RSP features. If event-related analysis is conducted, each epoch is indicated by the Label column. See the rsp_eventrelated() and rsp_intervalrelated() docstrings for details.

See also: bio_process(), rsp_process(), epochs_create(), rsp_eventrelated(), rsp_intervalrelated()
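The ‘auto’ dispatch rule described above can be sketched as a standalone helper (hypothetical code illustrating the documented behaviour, not the library's internals):

```python
# Epochs whose mean duration is under 10 s are analyzed as event-related;
# longer data (e.g., resting state) as interval-related.
def choose_method(mean_duration_s, method="auto"):
    """Pick the analysis method from the mean data duration (seconds)."""
    if method != "auto":
        return method  # explicit choice always wins
    return "event-related" if mean_duration_s < 10 else "interval-related"

print(choose_method(2.0))    # short epochs around stimuli
print(choose_method(300.0))  # a 5-minute resting-state recording
```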

Examples

>>> import neurokit2 as nk

>>> # Example 1: Download the data for event-related analysis
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data for event-related analysis
>>> df, info = nk.bio_process(rsp=data["RSP"], sampling_rate=100)
>>> events = nk.events_find(data["Photosensor"], threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-0.1, epochs_end=1.9)
>>>
>>> # Analyze
>>> nk.rsp_analyze(epochs, sampling_rate=100)
>>>
>>> # Example 2: Download the resting-state data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Process the data
>>> df, info = nk.rsp_process(data["RSP"], sampling_rate=100)
>>>
>>> # Analyze
>>> nk.rsp_analyze(df, sampling_rate=100)

rsp_clean(rsp_signal, sampling_rate=1000, method='khodadad2018')
Preprocess a respiration (RSP) signal.

Clean a respiration signal using different sets of parameters, such as ‘khodadad2018’ (linear detrending followed by a fifth-order 2 Hz low-pass IIR Butterworth filter) or ‘biosppy’ (second-order 0.1-0.35 Hz bandpass Butterworth filter followed by constant detrending).

Parameters
• rsp_signal (Union[list, np.array, pd.Series]) – The raw respiration channel (as measured, for instance, by a respiration belt).
• sampling_rate (int) – The sampling frequency of rsp_signal (in Hz, i.e., samples/second).


• method (str) – The processing pipeline to apply. Can be one of “khodadad2018” (default) or “biosppy”.

Returns array – Vector containing the cleaned respiratory signal.

See also: rsp_findpeaks(), signal_rate(), rsp_amplitude(), rsp_process(), rsp_plot()
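The ‘khodadad2018’ cleaning steps described above (linear detrending, then a fifth-order 2 Hz low-pass Butterworth filter) can be sketched with SciPy. This illustrates the described steps on synthetic data, not the library's exact code (e.g., zero-phase filtering via `filtfilt` is an assumption):

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

sampling_rate = 100
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / sampling_rate)
raw = np.sin(2 * np.pi * 0.25 * t) + 0.2 * rng.normal(size=t.size)  # ~15 breaths/min + noise

# Linear detrending followed by a 5th-order 2 Hz low-pass Butterworth filter.
detrended = detrend(raw, type="linear")
b, a = butter(5, 2, btype="lowpass", fs=sampling_rate)
cleaned = filtfilt(b, a, detrended)  # zero-phase, so peaks are not shifted in time
```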

Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=30, sampling_rate=50, noise=0.01)
>>> signals = pd.DataFrame({"RSP_Raw": rsp,
...                         "RSP_Khodadad2018": nk.rsp_clean(rsp, sampling_rate=50, method="khodadad2018"),
...                         "RSP_BioSPPy": nk.rsp_clean(rsp, sampling_rate=50, method="biosppy")})
>>> fig = signals.plot()
>>> fig

References

• Khodadad et al. (2018)

rsp_eventrelated(epochs, silent=False)
Performs event-related RSP analysis on epochs.

Parameters
• epochs (Union[dict, pd.DataFrame]) – A dict containing one DataFrame per event/trial, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df().
• silent (bool) – If True, silence possible warnings.

Returns DataFrame – A dataframe containing the analyzed RSP features for each epoch, with each epoch indicated by the Label column (if not present, by the Index column). The analyzed features consist of the following:
• “RSP_Rate_Max”: the maximum respiratory rate after stimulus onset.
• “RSP_Rate_Min”: the minimum respiratory rate after stimulus onset.
• “RSP_Rate_Mean”: the mean respiratory rate after stimulus onset.
• “RSP_Rate_Max_Time”: the time at which the maximum respiratory rate occurs.
• “RSP_Rate_Min_Time”: the time at which the minimum respiratory rate occurs.
• “RSP_Amplitude_Max”: the maximum respiratory amplitude after stimulus onset.
• “RSP_Amplitude_Min”: the minimum respiratory amplitude after stimulus onset.
• “RSP_Amplitude_Mean”: the mean respiratory amplitude after stimulus onset.
• “RSP_Phase”: indication of whether the onset of the event concurs with respiratory inspiration (1) or expiration (0).
• “RSP_PhaseCompletion”: indication of the stage of the current respiration phase (0 to 1) at the onset of the event.

See also: events_find(), epochs_create(), bio_process()


Examples

>>> import neurokit2 as nk
>>>
>>> # Example with simulated data
>>> rsp, info = nk.rsp_process(nk.rsp_simulate(duration=120))
>>> epochs = nk.epochs_create(rsp, events=[5000, 10000, 15000], epochs_start=-0.1, epochs_end=1.9)
>>>
>>> # Analyze
>>> rsp1 = nk.rsp_eventrelated(epochs)
>>> rsp1
>>>
>>> # Example with real data
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data
>>> df, info = nk.bio_process(rsp=data["RSP"], sampling_rate=100)
>>> events = nk.events_find(data["Photosensor"], threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-0.1, epochs_end=2.9)
>>>
>>> # Analyze
>>> rsp2 = nk.rsp_eventrelated(epochs)
>>> rsp2

rsp_findpeaks(rsp_cleaned, sampling_rate=1000, method='khodadad2018', amplitude_min=0.3)
Extract extrema in a respiration (RSP) signal.

Low-level function used by rsp_peaks() to identify inhalation peaks and exhalation troughs in a preprocessed respiration signal using different sets of parameters. See rsp_peaks() for details.

Parameters
• rsp_cleaned (Union[list, np.array, pd.Series]) – The cleaned respiration channel as returned by rsp_clean().
• sampling_rate (int) – The sampling frequency of rsp_cleaned (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of “khodadad2018” (default) or “biosppy”.
• amplitude_min (float) – Only applies if method is “khodadad2018”. Extrema that have a vertical distance smaller than (amplitude_min * average vertical distance) to any direct neighbour are removed as false-positive outliers. amplitude_min should be a positive float (the default is 0.3). Larger values correspond to more conservative thresholds (i.e., more extrema removed as outliers).

Returns info (dict) – A dictionary containing additional information, in this case the samples at which inhalation peaks and exhalation troughs occur, accessible with the keys “RSP_Peaks” and “RSP_Troughs”, respectively.

See also: rsp_clean(), rsp_fixpeaks(), rsp_peaks(), signal_rate(), rsp_amplitude(), rsp_process(), rsp_plot()
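The amplitude_min outlier rule described above can be sketched as follows: a simplified reading of the documented behaviour on hypothetical extrema values, not the library's exact code.

```python
import numpy as np

# Hypothetical alternating extrema amplitudes (peak, trough, peak, ...);
# the tiny 0.02 / -0.01 pair mimics a false-positive wiggle.
extrema_vals = np.array([1.0, -1.0, 0.95, -1.05, 0.02, -0.01, 1.1, -0.9])

# Vertical distances between direct neighbours.
dists = np.abs(np.diff(extrema_vals))
threshold = 0.3 * np.mean(dists)  # amplitude_min = 0.3 (the default)

# Drop extrema whose distance to a direct neighbour falls below the threshold.
keep = np.ones(extrema_vals.size, dtype=bool)
for i in range(1, extrema_vals.size - 1):
    if dists[i - 1] < threshold or dists[i] < threshold:
        keep[i] = False

kept_extrema = extrema_vals[keep]  # the small wiggle pair is removed
```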


Examples

>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=30, respiratory_rate=15)
>>> cleaned = nk.rsp_clean(rsp, sampling_rate=1000)
>>> info = nk.rsp_findpeaks(cleaned)
>>> fig = nk.events_plot([info["RSP_Peaks"], info["RSP_Troughs"]], cleaned)
>>> fig

rsp_fixpeaks(peaks, troughs=None)
Correct RSP peaks.

Low-level function used by rsp_peaks() to correct the peaks found by rsp_findpeaks(). Currently does nothing for RSP. See rsp_peaks() for details.

Parameters
• peaks (list or array or DataFrame or Series or dict) – The samples at which the inhalation peaks occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with rsp_findpeaks().
• troughs (list or array or DataFrame or Series or dict) – The samples at which the inhalation troughs occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with rsp_findpeaks().

Returns info (dict) – A dictionary containing additional information, in this case the samples at which inhalation peaks and exhalation troughs occur, accessible with the keys “RSP_Peaks” and “RSP_Troughs”, respectively.

See also: rsp_clean(), rsp_findpeaks(), rsp_peaks(), rsp_amplitude(), rsp_process(), rsp_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=30, respiratory_rate=15)
>>> cleaned = nk.rsp_clean(rsp, sampling_rate=1000)
>>> info = nk.rsp_findpeaks(cleaned)
>>> info = nk.rsp_fixpeaks(info)
>>> fig = nk.events_plot([info["RSP_Peaks"], info["RSP_Troughs"]], cleaned)
>>> fig

rsp_intervalrelated(data, sampling_rate=1000)
Performs RSP analysis on longer periods of data (typically > 10 seconds), such as resting-state data.

Parameters
• data (DataFrame or dict) – A DataFrame containing the different processed signal(s) as different columns, typically generated by rsp_process() or bio_process(). Can also take a dict containing sets of separately processed DataFrames.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).

Returns DataFrame – A dataframe containing the analyzed RSP features. The analyzed features consist of the following:
• “RSP_Rate_Mean”: the mean respiratory rate.
• “RSP_Amplitude_Mean”: the mean respiratory amplitude.
• “RSP_RRV”: the different respiratory rate variability metrics. See the rsp_rrv() docstring for details.

See also: bio_process(), rsp_eventrelated()

Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Process the data
>>> df, info = nk.rsp_process(data["RSP"], sampling_rate=100)
>>>
>>> # Single dataframe is passed
>>> nk.rsp_intervalrelated(df)
>>>
>>> epochs = nk.epochs_create(df, events=[0, 15000], sampling_rate=100, epochs_end=150)
>>> nk.rsp_intervalrelated(epochs)

rsp_peaks(rsp_cleaned, sampling_rate=1000, method='khodadad2018', amplitude_min=0.3)
Identify extrema in a respiration (RSP) signal.

This function calls rsp_findpeaks() and rsp_fixpeaks() to identify and process inhalation peaks and exhalation troughs in a preprocessed respiration signal, using different sets of parameters, such as:
• Khodadad et al. (2018)
• BioSPPy

Parameters
• rsp_cleaned (Union[list, np.array, pd.Series]) – The cleaned respiration channel as returned by rsp_clean().
• sampling_rate (int) – The sampling frequency of rsp_cleaned (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of “khodadad2018” (default) or “biosppy”.
• amplitude_min (float) – Only applies if method is “khodadad2018”. Extrema that have a vertical distance smaller than (amplitude_min * average vertical distance) to any direct neighbour are removed as false-positive outliers. amplitude_min should be a positive float (the default is 0.3). Larger values correspond to more conservative thresholds (i.e., more extrema removed as outliers).

Returns
• info (dict) – A dictionary containing additional information, in this case the samples at which inhalation peaks and exhalation troughs occur, accessible with the keys “RSP_Peaks” and “RSP_Troughs”, respectively.


• peak_signal (DataFrame) – A DataFrame of same length as the input signal, in which occurrences of inhalation peaks and exhalation troughs are marked as “1” in lists of zeros with the same length as rsp_cleaned. Accessible with the keys “RSP_Peaks” and “RSP_Troughs”, respectively.

See also: rsp_clean(), signal_rate(), rsp_findpeaks(), rsp_fixpeaks(), rsp_amplitude(), rsp_process(), rsp_plot()

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> rsp = nk.rsp_simulate(duration=30, respiratory_rate=15)
>>> cleaned = nk.rsp_clean(rsp, sampling_rate=1000)
>>> peak_signal, info = nk.rsp_peaks(cleaned, sampling_rate=1000)
>>>
>>> data = pd.concat([pd.DataFrame({"RSP": rsp}), peak_signal], axis=1)
>>> fig = nk.signal_plot(data)
>>> fig

rsp_phase(peaks, troughs=None, desired_length=None)
Compute respiratory phase (inspiration and expiration).

Finds the respiratory phase, labelled as 1 for inspiration and 0 for expiration.

Parameters
• peaks (list or array or DataFrame or Series or dict) – The samples at which the inhalation peaks occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with rsp_findpeaks().
• troughs (list or array or DataFrame or Series or dict) – The samples at which the inhalation troughs occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with rsp_findpeaks().
• desired_length (int) – By default, the returned phase has the same number of elements as peaks. If set to an integer, the returned phase will be interpolated between peaks over desired_length samples. Has no effect if a DataFrame is passed in as the peaks argument.

Returns signals (DataFrame) – A DataFrame of same length as rsp_signal, containing the following columns:
• “RSP_Inspiration”: breathing phase, marked by “1” for inspiration and “0” for expiration.
• “RSP_Phase_Completion”: breathing phase completion, expressed in percentage (from 0 to 1), representing the stage of the current respiratory phase.

See also: rsp_clean(), rsp_peaks(), rsp_amplitude(), rsp_process(), rsp_plot()
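The inspiration/expiration labelling described above can be sketched from trough and peak samples: inspiration runs from each trough to the following peak. This is a simplified illustration of the idea on hypothetical sample indices, not the library's implementation.

```python
import numpy as np

# Hypothetical trough/peak samples: each breath starts at a trough (inhalation
# onset) and reaches its peak ~60 samples later.
troughs = np.array([0, 100, 200])
peaks = np.array([60, 160, 260])

phase = np.zeros(261, dtype=int)
for trough, peak in zip(troughs, peaks):
    phase[trough:peak] = 1  # 1 = inspiration (trough -> peak), 0 = expiration
```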


Examples

>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=30, respiratory_rate=15)
>>> cleaned = nk.rsp_clean(rsp, sampling_rate=1000)
>>> peak_signal, info = nk.rsp_peaks(cleaned)
>>>
>>> phase = nk.rsp_phase(peak_signal, desired_length=len(cleaned))
>>> fig = nk.signal_plot([rsp, phase], standardize=True)
>>> fig

rsp_plot(rsp_signals, sampling_rate=None)
Visualize respiration (RSP) data.

Parameters
• rsp_signals (DataFrame) – DataFrame obtained from rsp_process().
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).

Examples

>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=90, respiratory_rate=15)
>>> rsp_signals, info = nk.rsp_process(rsp, sampling_rate=1000)
>>> fig = nk.rsp_plot(rsp_signals)
>>> fig

Returns fig – Figure representing a plot of the processed RSP signals.

See also: rsp_process()

rsp_process(rsp_signal, sampling_rate=1000, method='khodadad2018')
Process a respiration (RSP) signal.

Convenience function that automatically processes a respiration signal with one of the following methods:
• Khodadad et al. (2018)
• BioSPPy

Parameters
• rsp_signal (Union[list, np.array, pd.Series]) – The raw respiration channel (as measured, for instance, by a respiration belt).
• sampling_rate (int) – The sampling frequency of rsp_signal (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of “khodadad2018” (default) or “biosppy”.

Returns


• signals (DataFrame) – A DataFrame of same length as rsp_signal, containing the following columns:
  – “RSP_Raw”: the raw signal.
  – “RSP_Clean”: the cleaned signal.
  – “RSP_Peaks”: the inhalation peaks marked as “1” in a list of zeros.
  – “RSP_Troughs”: the exhalation troughs marked as “1” in a list of zeros.
  – “RSP_Rate”: breathing rate interpolated between inhalation peaks.
  – “RSP_Amplitude”: breathing amplitude interpolated between inhalation peaks.
  – “RSP_Phase”: breathing phase, marked by “1” for inspiration and “0” for expiration.
  – “RSP_PhaseCompletion”: breathing phase completion, expressed in percentage (from 0 to 1), representing the stage of the current respiratory phase.
• info (dict) – A dictionary containing the samples at which inhalation peaks and exhalation troughs occur, accessible with the keys “RSP_Peaks” and “RSP_Troughs”, respectively.

See also: rsp_clean(), rsp_findpeaks(), signal_rate(), rsp_amplitude(), rsp_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=90, respiratory_rate=15)
>>> signals, info = nk.rsp_process(rsp, sampling_rate=1000)
>>> fig = nk.rsp_plot(signals)
>>> fig

rsp_rate(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')
Calculate signal rate from a series of peaks.

This function can also be called via ecg_rate(), ppg_rate() or rsp_rate() (aliases provided for consistency).

Parameters
• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with signal_findpeaks(). If a DataFrame is passed in, it is assumed to be of the same length as the input signal, with occurrences of R-peaks marked as “1”; such containers are obtained with e.g., ecg_findpeaks() or rsp_findpeaks().
• sampling_rate (int) – The sampling frequency of the signal that contains the peaks (in Hz, i.e., samples/second). Defaults to 1000.
• desired_length (int) – If left at the default None, the returned rate will have the same number of elements as peaks. If set to a value larger than the sample at which the last peak occurs in the signal (i.e., peaks[-1]), the returned rate will be interpolated between peaks over desired_length samples. To interpolate the rate over the entire duration of the signal, set desired_length to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs. Defaults to None.
• interpolation_method (str) – Method used to interpolate the rate between peaks. See signal_interpolate(). ‘monotone_cubic’ is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible “overshoots” or “undershoots” in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.

Returns array – A vector containing the rate.
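Before any interpolation, the per-interval rate is simply 60 divided by the period between successive peaks. A minimal sketch with hypothetical peak samples (the library then interpolates these values up to desired_length):

```python
import numpy as np

# Hypothetical peak samples at 1000 Hz: one cycle every 4 s, i.e., 15 per minute.
peaks = np.array([0, 4000, 8000, 12000])
sampling_rate = 1000

periods = np.diff(peaks) / sampling_rate  # seconds between successive peaks
rate = 60.0 / periods                     # cycles per minute, one value per interval
```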


See also: signal_findpeaks(), signal_fixpeaks(), signal_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)
>>> info = nk.signal_findpeaks(signal)
>>>
>>> rate = nk.signal_rate(peaks=info["Peaks"], desired_length=len(signal))
>>> fig = nk.signal_plot(rate)
>>> fig

rsp_rrv(rsp_rate, peaks=None, sampling_rate=1000, show=False, silent=True)
Computes time-domain and frequency-domain features for Respiratory Rate Variability (RRV) analysis.

Parameters
• rsp_rate (array) – Array containing the respiratory rate, produced by signal_rate().
• peaks (dict) – The samples at which the inhalation peaks occur. Dict returned by rsp_peaks(). Defaults to None.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).
• show (bool) – If True, will return a Poincaré plot, a scattergram, which plots each breath-to-breath interval against the next successive one. The ellipse centers around the average breath-to-breath interval. Defaults to False.
• silent (bool) – If False, warnings will be printed. Defaults to True.

Returns DataFrame – DataFrame consisting of the computed RRV metrics, which include:
• “RRV_SDBB”: the standard deviation of the breath-to-breath intervals.
• “RRV_RMSSD”: the root mean square of successive differences of the breath-to-breath intervals.
• “RRV_SDSD”: the standard deviation of the successive differences between adjacent breath-to-breath intervals.
• “RRV_BBx”: the number of successive interval differences that are greater than x seconds.
• “RRV_pBBx”: the proportion of breath-to-breath intervals that are greater than x seconds, out of the total number of intervals.
• “RRV_VLF”: spectral power density pertaining to the very low frequency band, i.e., 0 to 0.04 Hz by default.
• “RRV_LF”: spectral power density pertaining to the low frequency band, i.e., 0.04 to 0.15 Hz by default.
• “RRV_HF”: spectral power density pertaining to the high frequency band, i.e., 0.15 to 0.4 Hz by default.
• “RRV_LFHF”: the ratio of low frequency power to high frequency power.
• “RRV_LFn”: the normalized low frequency, obtained by dividing the low frequency power by the total power.
• “RRV_HFn”: the normalized high frequency, obtained by dividing the high frequency power by the total power.
• “RRV_SD1”: a measure of the spread of breath-to-breath intervals on the Poincaré plot perpendicular to the line of identity. It is an index of short-term variability.
• “RRV_SD2”: a measure of the spread of breath-to-breath intervals on the Poincaré plot along the line of identity. It is an index of long-term variability.
• “RRV_SD2SD1”: the ratio between long- and short-term fluctuations of the breath-to-breath intervals (SD2 divided by SD1).
• “RRV_ApEn”: the approximate entropy of RRV, calculated by entropy_approximate().
• “RRV_SampEn”: the sample entropy of RRV, calculated by entropy_sample().
• “RRV_DFA_1”: the short-term fluctuation value generated by Detrended Fluctuation Analysis, i.e., the root mean square deviation from the fitted trend of the breath-to-breath intervals. Only computed if there are more than 160 breath cycles in the signal.
• “RRV_DFA_2”: the long-term fluctuation value. Only computed if there are more than 640 breath cycles in the signal.
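The spectral indices above are band-integrated power spectral densities. A sketch of the idea using Welch's method on synthetic stand-in data, with the default band edges listed above (not the library's exact estimator):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

# Hypothetical stand-in for an interpolated breath-to-breath series, 10 Hz.
rng = np.random.default_rng(42)
fs = 10
x = rng.normal(size=fs * 300)  # 5 minutes of white-noise data

freqs, psd = welch(x, fs=fs, nperseg=512)

def band_power(freqs, psd, low, high):
    """Integrate the PSD over one frequency band (trapezoid rule)."""
    mask = (freqs >= low) & (freqs < high)
    return trapezoid(psd[mask], freqs[mask])

vlf = band_power(freqs, psd, 0.00, 0.04)
lf = band_power(freqs, psd, 0.04, 0.15)
hf = band_power(freqs, psd, 0.15, 0.40)
lf_hf = lf / hf                 # RRV_LFHF
lf_n = lf / (vlf + lf + hf)     # RRV_LFn (normalized by total power)
```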


See also: signal_rate(), rsp_peaks(), signal_power(), entropy_sample(), entropy_approximate()

Examples

>>> import neurokit2 as nk
>>>
>>> rsp = nk.rsp_simulate(duration=90, respiratory_rate=15)
>>> rsp, info = nk.rsp_process(rsp)
>>> rrv = nk.rsp_rrv(rsp, show=True)

References

• Soni, R., & Muniyandi, M. (2019). Breath rate variability: a novel measure to study the meditation effects. International Journal of Yoga, 12(1), 45.

rsp_simulate(duration=10, length=None, sampling_rate=1000, noise=0.01, respiratory_rate=15, method='breathmetrics', random_state=None)
Simulate a respiratory signal.

Generate an artificial (synthetic) respiratory signal of a given duration and rate.

Parameters
• duration (int) – Desired length of duration (s).
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).
• length (int) – The desired length of the signal (in samples).
• noise (float) – Noise level (amplitude of the Laplace noise).
• respiratory_rate (float) – Desired number of breath cycles in one minute.
• method (str) – The model used to generate the signal. Can be ‘sinusoidal’ for a simulation based on a trigonometric sine wave that roughly approximates a single respiratory cycle. If ‘breathmetrics’ (default), will use the advanced model described in Noto et al. (2018).
• random_state (int) – Seed for the random number generator.

Returns array – Vector containing the respiratory signal.
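The ‘sinusoidal’ method described above can be sketched as a sine wave at the breathing frequency plus Laplace noise. This is a simplified illustration of the described model, not the library's exact code:

```python
import numpy as np

duration, sampling_rate = 10, 1000
respiratory_rate = 15  # breaths per minute -> 15/60 = 0.25 Hz

t = np.linspace(0, duration, duration * sampling_rate, endpoint=False)
rsp = np.sin(2 * np.pi * (respiratory_rate / 60) * t)
rsp = rsp + np.random.laplace(scale=0.01, size=t.size)  # noise level 0.01
```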

Examples

>>> import pandas as pd
>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> rsp1 = nk.rsp_simulate(duration=30, method="sinusoidal")
>>> rsp2 = nk.rsp_simulate(duration=30, method="breathmetrics")
>>> fig = pd.DataFrame({"RSP_Simple": rsp1, "RSP_Complex": rsp2}).plot(subplots=True)
>>> fig
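The idea behind the 'sinusoidal' method can be sketched with plain NumPy (a simplified approximation of one sine cycle per breath; the actual implementation may shape the wave differently):

```python
import numpy as np

duration = 10          # seconds
sampling_rate = 1000   # Hz
respiratory_rate = 15  # breaths per minute -> 0.25 Hz

t = np.arange(duration * sampling_rate) / sampling_rate
rsp = np.sin(2 * np.pi * (respiratory_rate / 60) * t)
```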


References

Noto, T., Zhou, G., Schuele, S., Templer, J., & Zelano, C. (2018). Automated analysis of breathing waveforms using BreathMetrics: A respiratory signal processing toolbox. Chemical Senses, 43(8), 583–597. https://doi.org/10.1093/chemse/bjy045

See also: rsp_clean(), rsp_findpeaks(), signal_rate(), rsp_process(), rsp_plot()

7.5 EDA

Submodule for NeuroKit.

eda_analyze(data, sampling_rate=1000, method='auto')
Performs EDA analysis on either epochs (event-related analysis) or on longer periods of data such as resting-state data.
Parameters
• data (Union[dict, pd.DataFrame]) – A dictionary of epochs, containing one DataFrame per epoch, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df(). Can also take a DataFrame of processed signals from a longer period of data, typically generated by eda_process() or bio_process(). Can also take a dict containing sets of separate periods of data.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second). Defaults to 1000Hz.
• method (str) – Can be one of 'event-related' for event-related analysis on epochs, or 'interval-related' for analysis on longer periods of data. Defaults to 'auto', where the right method is chosen based on the mean duration of the data ('event-related' for durations under 10s).
Returns DataFrame – A dataframe containing the analyzed EDA features. If event-related analysis is conducted, each epoch is indicated by the Label column. See eda_eventrelated() and eda_intervalrelated() docstrings for details.
See also: bio_process(), eda_process(), epochs_create(), eda_eventrelated(), eda_intervalrelated()

Examples

>>> import neurokit2 as nk

>>> # Example 1: Download the data for event-related analysis
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data for event-related analysis
>>> df, info = nk.bio_process(eda=data["EDA"], sampling_rate=100)
>>> events = nk.events_find(data["Photosensor"], threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-0.1,
...                           epochs_end=1.9)


>>> # Analyze
>>> nk.eda_analyze(epochs, sampling_rate=100)
>>>
>>> # Example 2: Download the resting-state data
>>> data = nk.data("bio_resting_8min_100hz")
>>>
>>> # Process the data
>>> df, info = nk.eda_process(data["EDA"], sampling_rate=100)
>>>
>>> # Analyze
>>> nk.eda_analyze(df, sampling_rate=100)

eda_autocor(eda_cleaned, sampling_rate=1000, lag=4)
Computes an autocorrelation measure of the EDA signal, i.e., the correlation between the time series and a specified time-lagged version of itself.
Parameters
• eda_cleaned (Union[list, np.array, pd.Series]) – The cleaned EDA signal.
• sampling_rate (int) – The sampling frequency of the raw EDA signal (in Hz, i.e., samples/second). Defaults to 1000Hz.
• lag (int) – Time lag in seconds. Defaults to 4 seconds to avoid autoregressive correlations approaching 1, as recommended by Halem et al. (2020).
Returns float – Autocorrelation index of the EDA signal.
See also: eda_simulate(), eda_clean()

Examples

>>> import neurokit2 as nk
>>>
>>> # Simulate EDA signal
>>> eda_signal = nk.eda_simulate(duration=5, scr_number=5, drift=0.1)
>>> eda_cleaned = nk.eda_clean(eda_signal)
>>> cor = nk.eda_autocor(eda_cleaned)
>>> cor
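The lag-based autocorrelation computed here can be illustrated as the Pearson correlation between a signal and a copy of itself shifted by the lag (a sketch on a toy sinusoid, not NeuroKit's exact code):

```python
import numpy as np

sampling_rate = 100   # Hz (assumed)
lag_seconds = 4
lag = lag_seconds * sampling_rate  # lag expressed in samples

# Toy slow-varying signal standing in for a cleaned EDA trace
t = np.arange(20 * sampling_rate) / sampling_rate
signal = np.sin(2 * np.pi * 0.1 * t)

# Pearson correlation between the signal and its time-lagged version
cor = np.corrcoef(signal[:-lag], signal[lag:])[0, 1]
```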

References

• Halem, S., van Roekel, E., Kroencke, L., Kuper, N., & Denissen, J. (2020). Moments That Matter? On the Complexity of Using Triggers Based on Skin Conductance to Sample Arousing Events Within an Experience Sampling Framework. European Journal of Personality.

eda_changepoints(eda_cleaned)
Calculate the number of change points of the skin conductance signal, in terms of mean and variance. Defaults to an algorithm penalty of 10000, as recommended by Halem et al. (2020).
Parameters eda_cleaned (Union[list, np.array, pd.Series]) – The cleaned EDA signal.
Returns float – Number of changepoints in the EDA signal.


See also: eda_simulate()

Examples

>>> import neurokit2 as nk
>>>
>>> # Simulate EDA signal
>>> eda_signal = nk.eda_simulate(duration=5, scr_number=5, drift=0.1)
>>> eda_cleaned = nk.eda_clean(eda_signal)
>>> changepoints = nk.eda_changepoints(eda_cleaned)
>>> changepoints

References

• Halem, S., van Roekel, E., Kroencke, L., Kuper, N., & Denissen, J. (2020). Moments That Matter? On the Complexity of Using Triggers Based on Skin Conductance to Sample Arousing Events Within an Experience Sampling Framework. European Journal of Personality.

eda_clean(eda_signal, sampling_rate=1000, method='neurokit')
Preprocess Electrodermal Activity (EDA) signal.
Parameters
• eda_signal (Union[list, np.array, pd.Series]) – The raw EDA signal.
• sampling_rate (int) – The sampling frequency of eda_signal (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of 'neurokit' (default) or 'biosppy'.
Returns array – Vector containing the cleaned EDA signal.
See also: eda_simulate(), eda_findpeaks(), eda_process(), eda_plot()

Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> eda = nk.eda_simulate(duration=30, sampling_rate=100, scr_number=10, noise=0.01, drift=0.02)
>>> signals = pd.DataFrame({"EDA_Raw": eda,
...                         "EDA_BioSPPy": nk.eda_clean(eda, sampling_rate=100, method='biosppy'),
...                         "EDA_NeuroKit": nk.eda_clean(eda, sampling_rate=100, method='neurokit')})
>>> fig = signals.plot()
>>> fig

eda_eventrelated(epochs, silent=False)
Performs event-related EDA analysis on epochs.


Parameters
• epochs (Union[dict, pd.DataFrame]) – A dict containing one DataFrame per event/trial, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df().
• silent (bool) – If True, silence possible warnings.
Returns DataFrame – A dataframe containing the analyzed EDA features for each epoch, with each epoch indicated by the Label column (if not present, by the Index column). The analyzed features consist of the following:
• ”EDA_SCR”: indication of whether a Skin Conductance Response (SCR) occurs following the event (1 if an SCR onset is present and 0 if absent) and, if so, its corresponding peak amplitude, time of peak, rise and recovery time. If there is no occurrence of SCR, NaNs are displayed for the below features.
• ”EDA_Peak_Amplitude”: the maximum amplitude of the phasic component of the signal.
• ”SCR_Peak_Amplitude”: the peak amplitude of the first SCR in each epoch.
• ”SCR_Peak_Amplitude_Time”: the timepoint of each first SCR peak amplitude.
• ”SCR_RiseTime”: the rise time of each first SCR, i.e., the time it takes for the SCR to reach peak amplitude from onset.
• ”SCR_RecoveryTime”: the half-recovery time of each first SCR, i.e., the time it takes for the SCR to decrease to half amplitude.
See also: events_find(), epochs_create(), bio_process()

Examples

>>> import neurokit2 as nk
>>>
>>> # Example with simulated data
>>> eda = nk.eda_simulate(duration=15, scr_number=3)
>>>
>>> # Process data
>>> eda_signals, info = nk.eda_process(eda, sampling_rate=1000)
>>> epochs = nk.epochs_create(eda_signals, events=[5000, 10000, 15000], sampling_rate=1000,
...                           epochs_start=-0.1, epochs_end=1.9)
>>>
>>> # Analyze
>>> nk.eda_eventrelated(epochs)
>>>
>>> # Example with real data
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data
>>> df, info = nk.bio_process(eda=data["EDA"], sampling_rate=100)
>>> events = nk.events_find(data["Photosensor"], threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-0.1,
...                           epochs_end=6.9)


>>> # Analyze
>>> nk.eda_eventrelated(epochs)

eda_findpeaks(eda_phasic, sampling_rate=1000, method='neurokit', amplitude_min=0.1)
Identify Skin Conductance Responses (SCR) in Electrodermal Activity (EDA).
Low-level function used by eda_peaks() to identify Skin Conductance Response (SCR) peaks in the phasic component of Electrodermal Activity (EDA) with different possible methods. See eda_peaks() for details.
Parameters
• eda_phasic (Union[list, np.array, pd.Series]) – The phasic component of the EDA signal (from eda_phasic()).
• sampling_rate (int) – The sampling frequency of the EDA signal (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of “neurokit” (default), “gamboa2008”, “kim2004” (the default in BioSPPy) or “vanhalem2020”.
• amplitude_min (float) – Only used if 'method' is 'neurokit' or 'kim2004'. Minimum threshold by which to exclude SCRs (peaks), relative to the largest amplitude in the signal.
Returns info (dict) – A dictionary containing additional information, in this case the amplitude of the SCR, and the samples at which the SCR onsets and the SCR peaks occur. Accessible with the keys “SCR_Amplitude”, “SCR_Onsets”, and “SCR_Peaks” respectively.
See also: eda_simulate(), eda_clean(), eda_phasic(), eda_fixpeaks(), eda_peaks(), eda_process(), eda_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> # Get phasic component
>>> eda_signal = nk.eda_simulate(duration=30, scr_number=5, drift=0.1, noise=0)
>>> eda_cleaned = nk.eda_clean(eda_signal)
>>> eda = nk.eda_phasic(eda_cleaned)
>>> eda_phasic = eda["EDA_Phasic"].values
>>>
>>> # Find peaks
>>> gamboa2008 = nk.eda_findpeaks(eda_phasic, method="gamboa2008")
>>> kim2004 = nk.eda_findpeaks(eda_phasic, method="kim2004")
>>> neurokit = nk.eda_findpeaks(eda_phasic, method="neurokit")
>>> vanhalem2020 = nk.eda_findpeaks(eda_phasic, method="vanhalem2020")
>>> fig = nk.events_plot([gamboa2008["SCR_Peaks"], kim2004["SCR_Peaks"], vanhalem2020["SCR_Peaks"],
...                       neurokit["SCR_Peaks"]], eda_phasic)
>>> fig


References

• Gamboa, H. (2008). Multi-modal behavioral biometrics based on HCI and electrophysiology. PhD Thesis, Universidade.
• Kim, K. H., Bang, S. W., & Kim, S. R. (2004). Emotion recognition system using short-term monitoring of physiological signals. Medical and biological engineering and computing, 42(3), 419-427.
• van Halem, S., Van Roekel, E., Kroencke, L., Kuper, N., & Denissen, J. (2020). Moments That Matter? On the Complexity of Using Triggers Based on Skin Conductance to Sample Arousing Events Within an Experience Sampling Framework. European Journal of Personality.

eda_fixpeaks(peaks, onsets=None, height=None)
Correct Skin Conductance Responses (SCR) peaks.
Low-level function used by eda_peaks() to correct the peaks found by eda_findpeaks(). Doesn't do anything for now for EDA. See eda_peaks() for details.
Parameters
• peaks (list or array or DataFrame or Series or dict) – The samples at which the SCR peaks occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with eda_findpeaks().
• onsets (list or array or DataFrame or Series or dict) – The samples at which the SCR onsets occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with eda_findpeaks(). Defaults to None.
• height (list or array or DataFrame or Series or dict) – The samples at which the amplitude of the SCR peaks occur. If a dict or a DataFrame is passed, it is assumed that these containers were obtained with eda_findpeaks(). Defaults to None.
Returns info (dict) – A dictionary containing additional information, in this case the amplitude of the SCR, and the samples at which the SCR onsets and the SCR peaks occur. Accessible with the keys “SCR_Amplitude”, “SCR_Onsets”, and “SCR_Peaks” respectively.
See also: eda_simulate(), eda_clean(), eda_phasic(), eda_findpeaks(), eda_peaks(), eda_process(), eda_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> # Get phasic component
>>> eda_signal = nk.eda_simulate(duration=30, scr_number=5, drift=0.1, noise=0)
>>> eda_cleaned = nk.eda_clean(eda_signal)
>>> eda = nk.eda_phasic(eda_cleaned)
>>> eda_phasic = eda["EDA_Phasic"].values
>>>
>>> # Find and fix peaks
>>> info = nk.eda_findpeaks(eda_phasic)
>>> info = nk.eda_fixpeaks(info)
>>>
>>> fig = nk.events_plot(info["SCR_Peaks"], eda_phasic)
>>> fig

eda_intervalrelated(data)
Performs EDA analysis on longer periods of data (typically > 10 seconds), such as resting-state data.
Parameters data (Union[dict, pd.DataFrame]) – A DataFrame containing the different processed signal(s) as different columns, typically generated by eda_process() or bio_process(). Can also take a dict containing sets of separately processed DataFrames.
Returns DataFrame – A dataframe containing the analyzed EDA features. The analyzed features consist of the following:
• ”SCR_Peaks_N”: the number of occurrences of Skin Conductance Responses (SCR).
• ”SCR_Peaks_Amplitude_Mean”: the mean amplitude of the SCR peak occurrences.
See also: bio_process(), eda_eventrelated()

Examples

>>> import neurokit2 as nk
>>>
>>> # Download data
>>> data = nk.data("bio_resting_8min_100hz")
>>>
>>> # Process the data
>>> df, info = nk.eda_process(data["EDA"], sampling_rate=100)
>>>
>>> # Single dataframe is passed
>>> nk.eda_intervalrelated(df)
>>>
>>> epochs = nk.epochs_create(df, events=[0, 25300], sampling_rate=100, epochs_end=20)
>>> nk.eda_intervalrelated(epochs)

eda_peaks(eda_phasic, sampling_rate=1000, method='neurokit', amplitude_min=0.1)
Identify Skin Conductance Responses (SCR) in Electrodermal Activity (EDA).
Identify Skin Conductance Response (SCR) peaks in the phasic component of Electrodermal Activity (EDA) with different possible methods, such as:
• Gamboa, H. (2008)
• Kim et al. (2004)
Parameters
• eda_phasic (Union[list, np.array, pd.Series]) – The phasic component of the EDA signal (from eda_phasic()).
• sampling_rate (int) – The sampling frequency of the EDA signal (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of “neurokit” (default), “gamboa2008” or “kim2004” (the default in BioSPPy).
• amplitude_min (float) – Only used if 'method' is 'neurokit' or 'kim2004'. Minimum threshold by which to exclude SCRs (peaks), relative to the largest amplitude in the signal.
Returns


• info (dict) – A dictionary containing additional information, in this case the amplitude of the SCR, and the samples at which the SCR onsets and the SCR peaks occur. Accessible with the keys “SCR_Amplitude”, “SCR_Onsets”, and “SCR_Peaks” respectively.
• signals (DataFrame) – A DataFrame of same length as the input signal in which occurrences of SCR peaks are marked as “1” in lists of zeros with the same length as eda_cleaned. Accessible with the key “SCR_Peaks”.
See also: eda_simulate(), eda_clean(), eda_phasic(), eda_process(), eda_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> # Get phasic component
>>> eda_signal = nk.eda_simulate(duration=30, scr_number=5, drift=0.1, noise=0, sampling_rate=100)
>>> eda_cleaned = nk.eda_clean(eda_signal, sampling_rate=100)
>>> eda = nk.eda_phasic(eda_cleaned, sampling_rate=100)
>>> eda_phasic = eda["EDA_Phasic"].values
>>>
>>> # Find peaks
>>> _, gamboa2008 = nk.eda_peaks(eda_phasic, method="gamboa2008")
>>> _, kim2004 = nk.eda_peaks(eda_phasic, method="kim2004")
>>> _, neurokit = nk.eda_peaks(eda_phasic, method="neurokit")
>>> nk.events_plot([gamboa2008["SCR_Peaks"], kim2004["SCR_Peaks"], neurokit["SCR_Peaks"]], eda_phasic)

References

• Gamboa, H. (2008). Multi-modal behavioral biometrics based on HCI and electrophysiology. PhD Thesis, Universidade.
• Kim, K. H., Bang, S. W., & Kim, S. R. (2004). Emotion recognition system using short-term monitoring of physiological signals. Medical and biological engineering and computing, 42(3), 419-427.

eda_phasic(eda_signal, sampling_rate=1000, method='highpass')
Decompose Electrodermal Activity (EDA) into Phasic and Tonic components.
Decompose the Electrodermal Activity (EDA) into two components, namely Phasic and Tonic, using different methods including cvxEDA (Greco, 2016) or Biopac's Acqknowledge algorithms.
Parameters
• eda_signal (Union[list, np.array, pd.Series]) – The raw EDA signal.
• sampling_rate (int) – The sampling frequency of the raw EDA signal (in Hz, i.e., samples/second). Defaults to 1000Hz.
• method (str) – The processing pipeline to apply. Can be one of “cvxEDA”, “median”, “smoothmedian”, “highpass”, “biopac”, or “acqknowledge”.
Returns DataFrame – DataFrame containing the 'Tonic' and the 'Phasic' components as columns.


See also: eda_simulate(), eda_clean(), eda_peaks(), eda_process(), eda_plot()

Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> # Simulate EDA signal
>>> eda_signal = nk.eda_simulate(duration=30, scr_number=5, drift=0.1)
>>>
>>> # Decompose using different algorithms
>>> cvxEDA = nk.eda_phasic(nk.standardize(eda_signal), method='cvxeda')
>>> smoothMedian = nk.eda_phasic(nk.standardize(eda_signal), method='smoothmedian')
>>> highpass = nk.eda_phasic(nk.standardize(eda_signal), method='highpass')
>>>
>>> data = pd.concat([cvxEDA.add_suffix('_cvxEDA'), smoothMedian.add_suffix('_SmoothMedian'),
...                   highpass.add_suffix('_Highpass')], axis=1)
>>> data["EDA_Raw"] = eda_signal
>>> fig = data.plot()
>>> fig
>>>
>>> eda_signal = nk.data("bio_eventrelated_100hz")["EDA"]
>>> data = nk.eda_phasic(nk.standardize(eda_signal), sampling_rate=100)
>>> data["EDA_Raw"] = eda_signal
>>> fig = nk.signal_plot(data, standardize=True)
>>> fig
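To build intuition for the tonic/phasic split, here is a deliberately simplified decomposition where the tonic level is estimated as a slow moving average and the phasic component is the residual (an illustration only; NeuroKit's 'highpass', 'smoothmedian' and cvxEDA methods are more sophisticated):

```python
import numpy as np

sampling_rate = 100  # Hz (assumed)
t = np.arange(30 * sampling_rate) / sampling_rate

# Toy EDA-like signal: slow drifting level plus faster fluctuations
eda = 2 + 0.01 * t + 0.3 * np.sin(2 * np.pi * 0.5 * t)

window = 4 * sampling_rate  # 4-second smoothing window (assumed)
tonic = np.convolve(eda, np.ones(window) / window, mode="same")
phasic = eda - tonic
```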

References

• cvxEDA: https://github.com/lciti/cvxEDA
• Greco, A., Valenza, G., & Scilingo, E. P. (2016). Evaluation of CDA and CvxEDA Models. In Advances in Electrodermal Activity Processing with Applications for Mental Health (pp. 35-43). Springer International Publishing.
• Greco, A., Valenza, G., Lanata, A., Scilingo, E. P., & Citi, L. (2016). cvxEDA: A convex optimization approach to electrodermal activity processing. IEEE Transactions on Biomedical Engineering, 63(4), 797-804.

eda_plot(eda_signals, sampling_rate=None)
Visualize electrodermal activity (EDA) data.
Parameters
• eda_signals (DataFrame) – DataFrame obtained from eda_process().
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second). Defaults to None.
Returns fig – Figure representing a plot of the processed EDA signals.


Examples

>>> import neurokit2 as nk
>>>
>>> eda_signal = nk.eda_simulate(duration=30, scr_number=5, drift=0.1, noise=0, sampling_rate=250)
>>> eda_signals, info = nk.eda_process(eda_signal, sampling_rate=250)
>>> fig = nk.eda_plot(eda_signals)
>>> fig

See also: eda_process()

eda_process(eda_signal, sampling_rate=1000, method='neurokit')
Process Electrodermal Activity (EDA).
Convenience function that automatically processes an electrodermal activity (EDA) signal.
Parameters
• eda_signal (Union[list, np.array, pd.Series]) – The raw EDA signal.
• sampling_rate (int) – The sampling frequency of eda_signal (in Hz, i.e., samples/second).
• method (str) – The processing pipeline to apply. Can be one of “biosppy” or “neurokit” (default).
Returns
• signals (DataFrame) – A DataFrame of same length as eda_signal containing the following columns:
– ”EDA_Raw”: the raw signal.
– ”EDA_Clean”: the cleaned signal.
– ”EDA_Tonic”: the tonic component of the signal, or the Tonic Skin Conductance Level (SCL).
– ”EDA_Phasic”: the phasic component of the signal, or the Phasic Skin Conductance Response (SCR).
– ”SCR_Onsets”: the samples at which the onsets of the peaks occur, marked as “1” in a list of zeros.
– ”SCR_Peaks”: the samples at which the peaks occur, marked as “1” in a list of zeros.
– ”SCR_Height”: the SCR amplitude of the signal including the Tonic component. Note that cumulative effects of close-occurring SCRs might lead to an underestimation of the amplitude.
– ”SCR_Amplitude”: the SCR amplitude of the signal excluding the Tonic component.
– ”SCR_RiseTime”: the time taken for the SCR onset to reach peak amplitude within the SCR.
– ”SCR_Recovery”: the samples at which SCR peaks recover (decline) to half amplitude, marked as “1” in a list of zeros.
• info (dict) – A dictionary containing the information of each SCR peak (see eda_findpeaks()).


See also: eda_simulate(), eda_clean(), eda_phasic(), eda_findpeaks(), eda_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> eda_signal = nk.eda_simulate(duration=30, scr_number=5, drift=0.1, noise=0)
>>> signals, info = nk.eda_process(eda_signal, sampling_rate=1000)
>>> fig = nk.eda_plot(signals)
>>> fig

eda_simulate(duration=10, length=None, sampling_rate=1000, noise=0.01, scr_number=1, drift=-0.01, random_state=None)
Simulate Electrodermal Activity (EDA) signal.
Generate an artificial (synthetic) EDA signal of a given duration and sampling rate.
Parameters
• duration (int) – Desired recording length in seconds.
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second). Defaults to 1000Hz.
• length (int) – The desired length of the signal (in samples). Defaults to None.
• noise (float) – Noise level (amplitude of the laplace noise). Defaults to 0.01.
• scr_number (int) – Desired number of skin conductance responses (SCRs), i.e., peaks. Defaults to 1.
• drift (float or list) – The slope of a linear drift of the signal. Defaults to -0.01.
• random_state (int) – Seed for the random number generator. Defaults to None.
Returns array – Vector containing the EDA signal.

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> eda = nk.eda_simulate(duration=10, scr_number=3)
>>> fig = nk.signal_plot(eda)
>>> fig

See also: ecg_simulate(), rsp_simulate(), emg_simulate(), ppg_simulate()


References

• Bach, D. R., Flandin, G., Friston, K. J., & Dolan, R. J. (2010). Modelling event-related skin conductance responses. International Journal of Psychophysiology, 75(3), 349-356.

7.6 EMG

Submodule for NeuroKit.

emg_activation(emg_amplitude=None, emg_cleaned=None, sampling_rate=1000, method='threshold', threshold='default', duration_min='default', **kwargs)
Detects onsets in an EMG signal based on the amplitude threshold.
Parameters
• emg_amplitude (array) – At least one EMG-related signal. Either the amplitude of the EMG signal, obtained from emg_amplitude() (for methods like 'threshold' or 'mixture'), and/or the cleaned EMG signal (for methods like 'pelt').
• emg_cleaned (array) – At least one EMG-related signal. Either the amplitude of the EMG signal, obtained from emg_amplitude() (for methods like 'threshold' or 'mixture'), and/or the cleaned EMG signal (for methods like 'pelt').
• sampling_rate (int) – The sampling frequency of emg_signal (in Hz, i.e., samples/second).
• method (str) – The algorithm used to discriminate between activity and baseline. Can be one of 'threshold' (default) or 'mixture'. If 'mixture', will use a Gaussian Mixture Model to categorize between the two states. If 'threshold', will consider as activated all points whose amplitude is superior to the threshold.
• threshold (float) – If method is 'mixture', it corresponds to the minimum probability required to be considered as activated (defaults to 0.33). If method is 'threshold', it corresponds to the minimum amplitude to detect as onset. Defaults to one tenth of the standard deviation of emg_amplitude.
• duration_min (float) – The minimum duration of a period of activity or non-activity in seconds. If 'default', will be set to 0.05 (50 ms).
• kwargs (optional) – Other arguments.
Returns
• info (dict) – A dictionary containing additional information, in this case the samples at which the onsets, offsets, and periods of activation of the EMG signal occur, accessible with the keys “EMG_Onsets”, “EMG_Offsets”, and “EMG_Activity” respectively.
• activity_signal (DataFrame) – A DataFrame of same length as the input signal in which occurrences of onsets, offsets, and activity (above the threshold) of the EMG signal are marked as “1” in lists of zeros with the same length as emg_amplitude. Accessible with the keys “EMG_Onsets”, “EMG_Offsets”, and “EMG_Activity” respectively.
See also: emg_simulate(), emg_clean(), emg_amplitude(), emg_process(), emg_plot()
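The 'threshold' rule described above can be sketched in a few lines (a toy amplitude vector and an assumed default threshold of one tenth of its standard deviation; NeuroKit additionally enforces duration_min):

```python
import numpy as np

# Toy amplitude vector with one hypothetical burst of muscle activity
rng = np.random.default_rng(42)
emg_amplitude = np.abs(rng.normal(0, 0.05, 1000))
emg_amplitude[300:400] += 1.0  # burst

# Default threshold: one tenth of the standard deviation of the amplitude
threshold = 0.1 * np.std(emg_amplitude)
activity = (emg_amplitude > threshold).astype(int)

# Onsets/offsets are where the binary activity vector changes state
onsets = np.where(np.diff(activity) == 1)[0] + 1
offsets = np.where(np.diff(activity) == -1)[0] + 1
```

Note how, without a minimum-duration constraint, baseline noise can also cross the threshold, which is why duration_min exists.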


Examples

>>> import neurokit2 as nk
>>>
>>> # Simulate signal and obtain amplitude
>>> emg = nk.emg_simulate(duration=10, burst_number=3)
>>> emg_cleaned = nk.emg_clean(emg)
>>> emg_amplitude = nk.emg_amplitude(emg_cleaned)
>>>
>>> # Threshold method
>>> activity, info = nk.emg_activation(emg_amplitude=emg_amplitude, method="threshold")
>>> fig = nk.events_plot([info["EMG_Offsets"], info["EMG_Onsets"]], emg_cleaned)
>>> fig
>>>
>>> # Pelt method
>>> activity, info = nk.emg_activation(emg_cleaned=emg_cleaned, method="pelt")
>>> fig = nk.signal_plot([emg_cleaned, activity["EMG_Activity"]])
>>> fig

References

• BioSPPy: https://github.com/PIA-Group/BioSPPy/blob/master/biosppy/signals/emg.py

emg_amplitude(emg_cleaned)
Compute electromyography (EMG) amplitude.
Compute the electromyography amplitude given the cleaned EMG signal, done by calculating the linear envelope of the signal.
Parameters emg_cleaned (Union[list, np.array, pd.Series]) – The cleaned electromyography channel as returned by emg_clean().
Returns array – A vector containing the electromyography amplitude.
See also: emg_clean(), emg_rate(), emg_process(), emg_plot()
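The linear envelope mentioned above can be approximated by full-wave rectification followed by smoothing (a simplification with an assumed 100 ms window; NeuroKit's actual envelope computation may differ):

```python
import numpy as np

sampling_rate = 1000
rng = np.random.default_rng(1)

# Toy EMG: low-level noise with one hypothetical burst
emg = rng.normal(0, 0.05, 5 * sampling_rate)
emg[1000:2000] += rng.normal(0, 1.0, 1000)

rectified = np.abs(emg)  # full-wave rectification
window = int(0.1 * sampling_rate)  # 100 ms smoothing window (assumed)
amplitude = np.convolve(rectified, np.ones(window) / window, mode="same")
```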

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> emg = nk.emg_simulate(duration=10, sampling_rate=1000, burst_number=3)
>>> cleaned = nk.emg_clean(emg, sampling_rate=1000)
>>>
>>> amplitude = nk.emg_amplitude(cleaned)
>>> fig = pd.DataFrame({"EMG": emg, "Amplitude": amplitude}).plot(subplots=True)
>>> fig

emg_analyze(data, sampling_rate=1000, method='auto')
Performs EMG analysis on either epochs (event-related analysis) or on longer periods of data such as resting-state data.
Parameters


• data (Union[dict, pd.DataFrame]) – A dictionary of epochs, containing one DataFrame per epoch, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df(). Can also take a DataFrame of processed signals from a longer period of data, typically generated by emg_process() or bio_process(). Can also take a dict containing sets of separate periods of data.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second). Defaults to 1000Hz.
• method (str) – Can be one of 'event-related' for event-related analysis on epochs, or 'interval-related' for analysis on longer periods of data. Defaults to 'auto', where the right method is chosen based on the mean duration of the data ('event-related' for durations under 10s).
Returns DataFrame – A dataframe containing the analyzed EMG features. If event-related analysis is conducted, each epoch is indicated by the Label column. See emg_eventrelated() and emg_intervalrelated() docstrings for details.
See also: bio_process(), emg_process(), epochs_create(), emg_eventrelated(), emg_intervalrelated()

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> # Example with simulated data
>>> emg = nk.emg_simulate(duration=20, sampling_rate=1000, burst_number=3)
>>> emg_signals, info = nk.emg_process(emg, sampling_rate=1000)
>>> epochs = nk.epochs_create(emg_signals, events=[3000, 6000, 9000], sampling_rate=1000,
...                           epochs_start=-0.1, epochs_end=1.9)
>>>
>>> # Event-related analysis
>>> nk.emg_analyze(epochs, method="event-related")
>>>
>>> # Interval-related analysis
>>> nk.emg_analyze(emg_signals, method="interval-related")

emg_clean(emg_signal, sampling_rate=1000)
Preprocess an electromyography (EMG) signal.
Clean an EMG signal using a set of parameters such as the one used in BioSPPy: a fourth order, 100 Hz highpass Butterworth filter followed by constant detrending.
Parameters
• emg_signal (Union[list, np.array, pd.Series]) – The raw EMG channel.
• sampling_rate (int) – The sampling frequency of emg_signal (in Hz, i.e., samples/second). Defaults to 1000.
Returns array – Vector containing the cleaned EMG signal.
See also: emg_amplitude(), emg_process(), emg_plot()


Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> emg = nk.emg_simulate(duration=10, sampling_rate=1000)
>>> signals = pd.DataFrame({"EMG_Raw": emg, "EMG_Cleaned": nk.emg_clean(emg, sampling_rate=1000)})
>>> fig = signals.plot()
>>> fig

emg_eventrelated(epochs, silent=False)
Performs event-related EMG analysis on epochs.
Parameters
• epochs (Union[dict, pd.DataFrame]) – A dict containing one DataFrame per event/trial, usually obtained via epochs_create(), or a DataFrame containing all epochs, usually obtained via epochs_to_df().
• silent (bool) – If True, silence possible warnings.
Returns DataFrame – A dataframe containing the analyzed EMG features for each epoch, with each epoch indicated by the Label column (if not present, by the Index column). The analyzed features consist of the following:
• ”EMG_Activation”: indication of whether there is muscular activation following the onset of the event (1 if present, 0 if absent) and, if so, its corresponding amplitude features and the number of activations in each epoch. If there is no activation, NaNs are displayed for the below features.
• ”EMG_Amplitude_Mean”: the mean amplitude of the activity.
• ”EMG_Amplitude_Max”: the maximum amplitude of the activity.
• ”EMG_Amplitude_Max_Time”: the time of maximum amplitude.
• ”EMG_Bursts”: the number of activations, or bursts of activity, within each epoch.
See also: emg_simulate(), emg_process(), events_find(), epochs_create()

Examples

>>> import neurokit2 as nk
>>>
>>> # Example with simulated data
>>> emg = nk.emg_simulate(duration=20, sampling_rate=1000, burst_number=3)
>>> emg_signals, info = nk.emg_process(emg, sampling_rate=1000)
>>> epochs = nk.epochs_create(emg_signals, events=[3000, 6000, 9000], sampling_rate=1000,
...                           epochs_start=-0.1, epochs_end=1.9)
>>> nk.emg_eventrelated(epochs)

emg_intervalrelated(data)
Performs EMG analysis on longer periods of data (typically > 10 seconds), such as resting-state data.


Parameters data (Union[dict, pd.DataFrame]) – A DataFrame containing the different processed signal(s) as different columns, typically generated by emg_process() or bio_process(). Can also take a dict containing sets of separately processed DataFrames.
Returns DataFrame – A dataframe containing the analyzed EMG features. The analyzed features consist of the following:
• ”EMG_Activation_N”: the number of bursts of muscular activity.
• ”EMG_Amplitude_Mean”: the mean amplitude of the muscular activity.
See also: bio_process(), emg_eventrelated()

Examples

>>> import neurokit2 as nk
>>>
>>> # Example with simulated data
>>> emg = nk.emg_simulate(duration=40, sampling_rate=1000, burst_number=3)
>>> emg_signals, info = nk.emg_process(emg, sampling_rate=1000)
>>>
>>> # Single dataframe is passed
>>> nk.emg_intervalrelated(emg_signals)
>>>
>>> epochs = nk.epochs_create(emg_signals, events=[0, 20000], sampling_rate=1000, epochs_end=20)
>>> nk.emg_intervalrelated(epochs)

emg_plot(emg_signals, sampling_rate=None)
Visualize electromyography (EMG) data.
Parameters
• emg_signals (DataFrame) – DataFrame obtained from emg_process().
• sampling_rate (int) – The sampling frequency of the EMG (in Hz, i.e., samples/second). Needs to be supplied if the data should be plotted over time in seconds. Otherwise the data is plotted over samples. Defaults to None.
Returns fig – Figure representing a plot of the processed EMG signals.

Examples

>>> import neurokit2 as nk
>>>
>>> emg = nk.emg_simulate(duration=10, sampling_rate=1000, burst_number=3)
>>> emg_signals, _ = nk.emg_process(emg, sampling_rate=1000)
>>> fig = nk.emg_plot(emg_signals)
>>> fig

See also: ecg_process()

emg_process(emg_signal, sampling_rate=1000)
Process an electromyography (EMG) signal.

Convenience function that automatically processes an electromyography signal.

Parameters


• emg_signal (Union[list, np.array, pd.Series]) – The raw electromyography channel.
• sampling_rate (int) – The sampling frequency of emg_signal (in Hz, i.e., samples/second).

Returns
• signals (DataFrame) – A DataFrame of same length as emg_signal containing the following columns:
- "EMG_Raw": the raw signal.
- "EMG_Clean": the cleaned signal.
- "EMG_Amplitude": the signal amplitude, or the activation level of the signal.
- "EMG_Activity": the activity of the signal for which amplitude exceeds the threshold specified, marked as "1" in a list of zeros.
- "EMG_Onsets": the onsets of the amplitude, marked as "1" in a list of zeros.
- "EMG_Offsets": the offsets of the amplitude, marked as "1" in a list of zeros.
• info (dict) – A dictionary containing the information of each amplitude onset, offset, and peak activity (see emg_activation()).

See also: emg_clean(), emg_amplitude(), emg_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> emg = nk.emg_simulate(duration=10, sampling_rate=1000, burst_number=3)
>>> signals, info = nk.emg_process(emg, sampling_rate=1000)
>>> fig = nk.emg_plot(signals)
>>> fig

emg_simulate(duration=10, length=None, sampling_rate=1000, noise=0.01, burst_number=1, burst_duration=1.0, random_state=42)
Simulate an EMG signal.

Generate an artificial (synthetic) EMG signal of a given duration and sampling rate.

Parameters
• duration (int) – Desired recording length in seconds.
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).
• length (int) – The desired length of the signal (in samples).
• noise (float) – Noise level (Gaussian noise).
• burst_number (int) – Desired number of bursts of activity (active muscle periods).
• burst_duration (float or list) – Duration of the bursts. Can be a float (each burst will have the same duration) or a list of durations for each burst.
• random_state (int) – Seed for the random number generator.

Returns array – Vector containing the EMG signal.


Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> emg = nk.emg_simulate(duration=10, burst_number=3)
>>> fig = nk.signal_plot(emg)
>>> fig

See also: ecg_simulate(), rsp_simulate(), eda_simulate(), ppg_simulate()

References

This function is based on this script.
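The burst-based construction can be pictured with plain numpy: quiet segments are low-amplitude Gaussian noise, and bursts are evenly spaced high-amplitude segments. This is a minimal sketch of the idea, not NeuroKit2's actual implementation; the helper name `simulate_emg_sketch` is ours.

```python
import numpy as np

def simulate_emg_sketch(duration=10, sampling_rate=1000, burst_number=1,
                        burst_duration=1.0, noise=0.01, random_state=42):
    """Toy EMG: baseline noise with evenly spaced high-amplitude bursts."""
    rng = np.random.default_rng(random_state)
    n = duration * sampling_rate
    emg = rng.normal(0, noise, n)  # resting baseline
    burst_samples = int(burst_duration * sampling_rate)
    # Space the bursts evenly across the recording
    gap = (n - burst_number * burst_samples) // (burst_number + 1)
    for i in range(burst_number):
        start = gap + i * (gap + burst_samples)
        emg[start:start + burst_samples] += rng.uniform(-1, 1, burst_samples)
    return emg

emg = simulate_emg_sketch(duration=10, burst_number=3)
```

The real function additionally lets each burst have its own duration when a list is passed.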

7.7 EEG

Submodule for NeuroKit.

mne_channel_add(raw, channel, channel_type=None, channel_name=None, sync_index_raw=0, sync_index_channel=0)
Add channel as array to MNE.

Add a channel to an MNE Raw M/EEG object. The channel is synchronized with the EEG data according to the given indices before being added.

Parameters
• raw (mne.io.Raw) – Raw EEG data from MNE.
• channel (list or array) – The signal to be added.
• channel_type (str) – Channel type. Currently supported types are 'ecg', 'bio', 'stim', 'eog', 'misc', 'seeg', 'ecog', 'mag', 'eeg', 'ref_meg', 'grad', 'emg', 'hbr' or 'hbo'.
• channel_name (str) – Desired channel name.
• sync_index_raw (int or list) – An index (e.g., the onset of the same event marked in both signals), in the raw data, by which to align the two inputs. This can be used in case the EEG data and the channel to add do not have the same onsets and must be aligned through some common event.
• sync_index_channel (int or list) – The corresponding index (e.g., the onset of the same event), in the channel to add, by which to align the two inputs.

Returns mne.io.Raw – Raw data in FIF format.
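The synchronization step can be illustrated without MNE: shift the channel so that its sync index lands on the raw data's sync index, then pad with NaN and crop to the raw length. This is a hypothetical helper sketching the alignment idea, not the actual mne_channel_add code.

```python
import numpy as np

def align_channel_sketch(raw_length, channel, sync_index_raw, sync_index_channel):
    """Shift `channel` so that channel[sync_index_channel] lands at index
    sync_index_raw, padding with NaN and cropping to raw_length."""
    aligned = np.full(raw_length, np.nan)
    offset = sync_index_raw - sync_index_channel  # how far to shift the channel
    src_start = max(0, -offset)
    dst_start = max(0, offset)
    n = min(len(channel) - src_start, raw_length - dst_start)
    aligned[dst_start:dst_start + n] = channel[src_start:src_start + n]
    return aligned

channel = np.arange(100.0)
aligned = align_channel_sketch(raw_length=50, channel=channel,
                               sync_index_raw=10, sync_index_channel=30)
# channel[30] now sits at index 10 of the aligned array
```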


Example

>>> import neurokit2 as nk
>>> import mne
>>>
>>> # Let's say that the 42nd sample point in the EEG corresponds to the 333rd point in the ECG
>>> event_index_in_eeg = 42
>>> event_index_in_ecg = 333
>>>
>>> raw = mne.io.read_raw_fif(mne.datasets.sample.data_path() + '/MEG/sample/sample_audvis_raw.fif',
...                           preload=True)
>>> ecg = nk.ecg_simulate(length=170000)
>>>
>>> raw = nk.mne_channel_add(raw, ecg, sync_index_raw=event_index_in_eeg,
...                          sync_index_channel=event_index_in_ecg, channel_type="ecg")

mne_channel_extract(raw, what, name=None)
Channel array extraction from MNE.

Select one or several channels by name and return them in a dataframe.

Parameters
• raw (mne.io.Raw) – Raw EEG data.
• what (str or list) – Can be 'MEG', which will extract all MEG channels, 'EEG', which will extract all EEG channels, or 'EOG', which will extract all EOG channels (that is, if channel names are prefixed with their type, e.g., 'EEG 001' or 'EOG 061'). Otherwise, provide the exact name of a single channel or a list of channel names (e.g., ['124', '125']).
• name (str or list) – Useful only when extracting one channel. Can also take a list of names for renaming multiple channels. Otherwise, defaults to None.

Returns DataFrame – A DataFrame or Series containing the channel(s).

Example

>>> import neurokit2 as nk
>>> import mne
>>>
>>> raw = mne.io.read_raw_fif(mne.datasets.sample.data_path() +
...                           '/MEG/sample/sample_audvis_raw.fif', preload=True)
>>>
>>> raw_channel = nk.mne_channel_extract(raw, what=["EEG 060", "EEG 055"], name=['060', '055'])
>>> eeg_channels = nk.mne_channel_extract(raw, "EEG")
>>> eog_channels = nk.mne_channel_extract(raw, what='EOG', name='EOG')


7.8 Signal Processing

Submodule for NeuroKit.

signal_autocor(signal, lag=None, normalize=True)
Auto-correlation of a 1-dimensional sequence.

Parameters
• signal (Union[list, np.array, pd.Series]) – Vector of values.
• normalize (bool) – Normalize the autocorrelation output.
• lag (int) – Time lag. If specified, a single autocorrelation value, between the signal and its lagged self, will be returned.

Returns r – The cross-correlation of the signal with itself at different time lags. The minimum time lag is 0; the maximum time lag is the length of the signal. Or a correlation value at a specific lag if lag is not None.
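For intuition, the full autocorrelation can be computed with numpy alone. This sketch assumes the values are demeaned and normalized by the zero-lag term; NeuroKit2's implementation may differ in details.

```python
import numpy as np

def autocor_sketch(signal, normalize=True):
    """Autocorrelation at all non-negative lags via np.correlate."""
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. len-1
    if normalize:
        r = r / r[0]  # divide by the zero-lag (variance) term
    return r

r = autocor_sketch([1, 2, 3, 4, 5])
# r[0] is 1.0 by construction after normalization
```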

Examples

>>> import neurokit2 as nk
>>>
>>> x = [1, 2, 3, 4, 5]
>>> autocor = nk.signal_autocor(x)
>>> autocor

signal_binarize(signal, method='threshold', threshold='auto')
Binarize a continuous signal.

Convert a continuous signal into zeros and ones depending on a given threshold.

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• method (str) – The algorithm used to discriminate between the two states. Can be one of 'threshold' (default) or 'mixture'. If 'mixture', will use a Gaussian Mixture Model to categorize between the two states. If 'threshold', will consider as activated all points whose value is above the threshold.
• threshold (float) – If method is 'mixture', then it corresponds to the minimum probability required to be considered as activated (if 'auto', then 0.5). If method is 'threshold', then it corresponds to the minimum amplitude to detect as onset. If 'auto', takes the value between the max and the min.

Returns list – A list or array depending on the type passed.
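The 'threshold' branch can be sketched in a few lines of numpy. One assumption here: we take 'auto' to be the midpoint between the minimum and maximum; the helper name is ours, not NeuroKit2's.

```python
import numpy as np

def binarize_sketch(signal, threshold="auto"):
    """Thresholding branch only: 1 where the signal exceeds the threshold.
    'auto' is taken here as the midpoint between min and max (an assumption)."""
    signal = np.asarray(signal, dtype=float)
    if threshold == "auto":
        threshold = (np.max(signal) + np.min(signal)) / 2
    return (signal > threshold).astype(int)

binary = binarize_sketch(np.cos(np.linspace(0, 20, 1000)))
```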


Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = np.cos(np.linspace(start=0, stop=20, num=1000))
>>> binary = nk.signal_binarize(signal)
>>> fig = pd.DataFrame({"Raw": signal, "Binary": binary}).plot()
>>> fig

signal_changepoints(signal, change='meanvar', penalty=None, show=False)
Change Point Detection.

Only the PELT method is implemented for now.

Parameters
• signal (Union[list, np.array, pd.Series]) – Vector of values.
• change (str) – Can be one of "meanvar" (default), "mean" or "var".
• penalty (float) – The algorithm penalty. Defaults to np.log(len(signal)).
• show (bool) – Defaults to False.

Returns
• Array – Values indicating the samples at which the changepoints occur.
• Fig – Figure of plot of signal with markers of changepoints.
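The core idea of mean-change detection can be shown for the single-changepoint case: pick the split that minimizes the summed within-segment squared error. PELT generalizes this to many changepoints (with a penalty per changepoint and pruning for linear cost); this toy version is our own sketch, not the implemented algorithm.

```python
import numpy as np

def single_changepoint_sketch(signal):
    """Best single split minimizing within-segment sum of squared errors."""
    x = np.asarray(signal, dtype=float)
    best_cost, best_split = np.inf, None
    for split in range(1, len(x)):
        left, right = x[:split], x[split:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_split = cost, split
    return best_split

signal = np.concatenate([np.zeros(50), np.ones(50)])
split = single_changepoint_sketch(signal)  # the mean changes at sample 50
```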

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.emg_simulate(burst_number=3)
>>> fig = nk.signal_changepoints(signal, change="var", show=True)
>>> fig

References

• Killick, R., Fearnhead, P., & Eckley, I. A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500), 1590-1598.

signal_decompose(signal, method='emd', n_components=None, **kwargs)
Decompose a signal.

Signal decomposition into different sources using different methods, such as Empirical Mode Decomposition (EMD) or the Singular Spectrum Analysis (SSA)-based signal separation method. The extracted components can then be recombined into meaningful sources using signal_recompose().

Parameters
• signal (Union[list, np.array, pd.Series]) – Vector of values.
• method (str) – The decomposition method. Can be one of 'emd' or 'ssa'.


• n_components (int) – Number of components to extract. Only used for the 'ssa' method. If None, will default to 50.
• **kwargs – Other arguments passed to other functions.

Returns Array – Components of the decomposed signal.

See also: signal_recompose()

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> # Create complex signal
>>> signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)  # High freq
>>> signal += 3 * nk.signal_simulate(duration=10, frequency=3, noise=0.01)  # Higher freq
>>> signal += 3 * np.linspace(0, 2, len(signal))  # Add baseline and trend
>>> signal += 2 * nk.signal_simulate(duration=10, frequency=0.1, noise=0)
>>>
>>> nk.signal_plot(signal)
>>>
>>> # EMD method
>>> components = nk.signal_decompose(signal, method="emd")
>>> fig = nk.signal_plot(components)  # Visualize components
>>> fig
>>>
>>> # SSA method
>>> components = nk.signal_decompose(signal, method="ssa")
>>> fig = nk.signal_plot(components)  # Visualize components
>>> fig

signal_detrend(signal, method='polynomial', order=1, regularization=500, alpha=0.75, window=1.5, stepsize=0.02)
Polynomial detrending of signal.

Apply a baseline (order = 0), linear (order = 1), or polynomial (order > 1) detrending to the signal (i.e., removing a general trend). One can also use other methods, such as the smoothness priors approach described by Tarvainen (2002) or LOESS regression, but these scale badly for long signals.

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• method (str) – Can be one of 'polynomial' (default; traditional detrending of a given order), 'tarvainen2002' to use the smoothness priors approach described by Tarvainen (2002) (mostly used in HRV analyses as a lowpass filter to remove complex trends), 'loess' for LOESS smoothing trend removal, or 'locreg' for local linear regression (the 'runline' algorithm from chronux).
• order (int) – Only used if method is 'polynomial'. The order of the polynomial: 0, 1 or > 1 for a baseline ('constant detrend', i.e., remove only the mean), linear (remove the linear trend) or polynomial detrending, respectively. Can also be 'auto', in which case it will attempt to find the order that minimizes the RMSE.
• regularization (int) – Only used if method='tarvainen2002'. The regularization parameter (defaults to 500).


• alpha (float) – Only used if method is 'loess'. The parameter which controls the degree of smoothing.
• window (float) – Only used if method is 'locreg'. The detrending 'window' should correspond to the desired low frequency band to remove, multiplied by the sampling rate (for instance, 1.5*1000 will remove frequencies below 1.5 Hz for a signal sampled at 1000 Hz).
• stepsize (float) – Only used if method is 'locreg'. Similarly to 'window', 'stepsize' should also be multiplied by the sampling rate.

Returns array – Vector containing the detrended signal.

See also: signal_filter(), fit_loess()
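Polynomial detrending of a given order amounts to fitting a polynomial over the sample index and subtracting it. A numpy-only sketch of that core step (NeuroKit2's version adds more options, such as 'auto' order selection):

```python
import numpy as np

def detrend_poly_sketch(signal, order=1):
    """Fit a polynomial of `order` over the sample index and subtract it."""
    x = np.arange(len(signal))
    coefs = np.polyfit(x, signal, deg=order)
    trend = np.polyval(coefs, x)
    return np.asarray(signal, dtype=float) - trend

# A sine riding on a baseline and a linear trend
signal = np.sin(np.linspace(0, 20, 1000)) + np.linspace(0, 6, 1000) + 3
detrended = detrend_poly_sketch(signal, order=1)  # baseline and trend removed
```

Because the least-squares fit includes the constant term, the residual is zero-mean by construction.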

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>> import matplotlib.pyplot as plt
>>>
>>> # Simulate signal with low and high frequency
>>> signal = nk.signal_simulate(frequency=[0.1, 2], amplitude=[2, 0.5], sampling_rate=100)
>>> signal = signal + (3 + np.linspace(0, 6, num=len(signal)))  # Add baseline and linear trend
>>>
>>> # Apply detrending algorithms
>>> baseline = nk.signal_detrend(signal, order=0)  # Constant detrend (removes the mean)
>>> linear = nk.signal_detrend(signal, order=1)  # Linear detrend
>>> quadratic = nk.signal_detrend(signal, order=2)  # Quadratic detrend
>>> cubic = nk.signal_detrend(signal, order=3)  # Cubic detrend
>>> poly10 = nk.signal_detrend(signal, order=10)  # Polynomial detrend (10th order)
>>> tarvainen = nk.signal_detrend(signal, method='tarvainen2002')  # Tarvainen (2002) method
>>> loess = nk.signal_detrend(signal, method='loess')  # LOESS detrend (smooth removal)
>>> locreg = nk.signal_detrend(signal, method='locreg',
...                            window=1.5*100, stepsize=0.02*100)  # Local regression (100 Hz)
>>>
>>> # Visualize different methods
>>> axes = pd.DataFrame({"Original signal": signal,
...                      "Baseline": baseline,
...                      "Linear": linear,
...                      "Quadratic": quadratic,
...                      "Cubic": cubic,
...                      "Polynomial (10th)": poly10,
...                      "Tarvainen": tarvainen,
...                      "LOESS": loess,
...                      "Local Regression": locreg}).plot(subplots=True)
>>> # Plot horizontal lines to better visualize the detrending
>>> for subplot in axes:
...     subplot.axhline(y=0, color='k', linestyle='--')


References

• Tarvainen, M. P., Ranta-Aho, P. O., & Karjalainen, P. A. (2002). An advanced detrending method with application to HRV analysis. IEEE Transactions on Biomedical Engineering, 49(2), 172-175.

signal_distort(signal, sampling_rate=1000, noise_shape='laplace', noise_amplitude=0, noise_frequency=100, powerline_amplitude=0, powerline_frequency=50, artifacts_amplitude=0, artifacts_frequency=100, artifacts_number=5, linear_drift=False, random_state=None, silent=False)
Signal distortion.

Add noise of a given frequency, amplitude and shape to a signal.

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).
• noise_shape (str) – The shape of the noise. Can be one of 'laplace' (default) or 'gaussian'.
• noise_amplitude (float) – The amplitude of the noise (the scale of the random function, relative to the standard deviation of the signal).
• noise_frequency (float) – The frequency of the noise (in Hz, i.e., samples/second).
• powerline_amplitude (float) – The amplitude of the powerline noise (relative to the standard deviation of the signal).
• powerline_frequency (float) – The frequency of the powerline noise (in Hz, i.e., samples/second).
• artifacts_amplitude (float) – The amplitude of the artifacts (relative to the standard deviation of the signal).
• artifacts_frequency (int) – The frequency of the artifacts (in Hz, i.e., samples/second).
• artifacts_number (int) – The number of artifact bursts. The bursts have a random duration between 1 and 10% of the signal duration.
• linear_drift (bool) – Whether or not to add linear drift to the signal.
• random_state (int) – Seed for the random number generator. Keep it fixed for reproducible results.
• silent (bool) – Whether or not to display warning messages.

Returns array – Vector containing the distorted signal.
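The amplitude parameters being relative to the signal's standard deviation can be sketched with numpy: draw noise at the noise frequency, stretch it to the signal length, and scale it by `noise_amplitude * std(signal)`. This is our own rough approximation of the idea (sample-and-hold stretching), not NeuroKit2's implementation.

```python
import numpy as np

def add_noise_sketch(signal, sampling_rate=1000, noise_amplitude=0.1,
                     noise_frequency=100, random_state=0):
    """Add frequency-limited noise: sample at `noise_frequency` Hz, then
    hold each value so the noise cannot vary faster than that frequency."""
    rng = np.random.default_rng(random_state)
    duration = len(signal) / sampling_rate
    n_noise = max(1, int(noise_frequency * duration))
    coarse = rng.laplace(0, 1, n_noise)
    # Repeat each coarse sample to stretch the noise to the signal length
    noise = np.repeat(coarse, int(np.ceil(len(signal) / n_noise)))[:len(signal)]
    return signal + noise_amplitude * np.std(signal) * noise

signal = np.sin(np.linspace(0, 2 * np.pi, 1000))
distorted = add_noise_sketch(signal, noise_amplitude=0.1, noise_frequency=10)
```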


Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, frequency=0.5)
>>>
>>> # Noise
>>> noise = pd.DataFrame({"Freq100": nk.signal_distort(signal, noise_frequency=100),
...                       "Freq50": nk.signal_distort(signal, noise_frequency=50),
...                       "Freq10": nk.signal_distort(signal, noise_frequency=10),
...                       "Freq5": nk.signal_distort(signal, noise_frequency=5),
...                       "Raw": signal}).plot()
>>> noise
>>>
>>> # Artifacts
>>> artifacts = pd.DataFrame({"1Hz": nk.signal_distort(signal, noise_amplitude=0,
...                                                    artifacts_frequency=1, artifacts_amplitude=0.5),
...                           "5Hz": nk.signal_distort(signal, noise_amplitude=0,
...                                                    artifacts_frequency=5, artifacts_amplitude=0.2),
...                           "Raw": signal}).plot()
>>> artifacts

signal_filter(signal, sampling_rate=1000, lowcut=None, highcut=None, method='butterworth', order=2, window_size='default', powerline=50)
Filter a signal using 'butterworth', 'fir' or 'savgol' filters.

Apply a lowpass (if 'highcut' frequency is provided), highpass (if 'lowcut' frequency is provided) or bandpass (if both are provided) filter to the signal.

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).
• lowcut (float) – Lower cutoff frequency in Hz. The default is None.
• highcut (float) – Upper cutoff frequency in Hz. The default is None.
• method (str) – Can be one of 'butterworth', 'fir', 'bessel' or 'savgol'. Note that for Butterworth, the function uses the SOS method from scipy.signal.sosfiltfilt, recommended for general purpose filtering. One can also specify 'butterworth_ba' for a more traditional and legacy method (often implemented in other software).
• order (int) – Only used if method is 'butterworth' or 'savgol'. Order of the filter (default is 2).
• window_size (int) – Only used if method is 'savgol'. The length of the filter window (i.e. the number of coefficients). Must be an odd integer. If 'default', will be set to the sampling rate divided by 10 (101 if the sampling rate is 1000 Hz).
• powerline (int) – Only used if method is 'powerline'. The powerline frequency (normally 50 Hz or 60 Hz).

See also:


signal_detrend(), signal_psd()

Returns array – Vector containing the filtered signal.
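The SOS-based Butterworth path mentioned above can be reproduced directly with scipy. A sketch of a zero-phase lowpass using scipy's standard butter/sosfiltfilt API (not NeuroKit2's exact parameterization):

```python
import numpy as np
import scipy.signal

def butter_lowpass_sketch(signal, sampling_rate=1000, highcut=3, order=2):
    """Zero-phase Butterworth lowpass using second-order sections (SOS)."""
    sos = scipy.signal.butter(order, highcut, btype="lowpass",
                              fs=sampling_rate, output="sos")
    # Forward-backward filtering cancels the phase shift
    return scipy.signal.sosfiltfilt(sos, signal)

t = np.arange(0, 10, 1 / 1000)
signal = np.sin(2 * np.pi * 0.5 * t) + np.sin(2 * np.pi * 50 * t)  # low + high freq
filtered = butter_lowpass_sketch(signal, highcut=3)  # 50 Hz component suppressed
```

Filtering forward and backward (filtfilt) doubles the effective attenuation, which is why the SOS zero-phase variant is recommended for general-purpose offline filtering.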

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, frequency=0.5)  # Low freq
>>> signal += nk.signal_simulate(duration=10, frequency=5)  # High freq
>>>
>>> # Lowpass
>>> fig1 = pd.DataFrame({"Raw": signal,
...                      "Butter_2": nk.signal_filter(signal, highcut=3, method='butterworth', order=2),
...                      "Butter_2_BA": nk.signal_filter(signal, highcut=3, method='butterworth_ba', order=2),
...                      "Butter_5": nk.signal_filter(signal, highcut=3, method='butterworth', order=5),
...                      "Butter_5_BA": nk.signal_filter(signal, highcut=3, method='butterworth_ba', order=5),
...                      "Bessel_2": nk.signal_filter(signal, highcut=3, method='bessel', order=2),
...                      "Bessel_5": nk.signal_filter(signal, highcut=3, method='bessel', order=5),
...                      "FIR": nk.signal_filter(signal, highcut=3, method='fir')}).plot(subplots=True)
>>> fig1

>>> # Highpass
>>> fig2 = pd.DataFrame({"Raw": signal,
...                      "Butter_2": nk.signal_filter(signal, lowcut=2, method='butterworth', order=2),
...                      "Butter_2_BA": nk.signal_filter(signal, lowcut=2, method='butterworth_ba', order=2),
...                      "Butter_5": nk.signal_filter(signal, lowcut=2, method='butterworth', order=5),
...                      "Butter_5_BA": nk.signal_filter(signal, lowcut=2, method='butterworth_ba', order=5),
...                      "Bessel_2": nk.signal_filter(signal, lowcut=2, method='bessel', order=2),
...                      "Bessel_5": nk.signal_filter(signal, lowcut=2, method='bessel', order=5),
...                      "FIR": nk.signal_filter(signal, lowcut=2, method='fir')}).plot(subplots=True)
>>> fig2

>>> # Bandpass in real-life scenarios
>>> original = nk.rsp_simulate(duration=30, method="breathmetrics", noise=0)
>>> signal = nk.signal_distort(original, noise_frequency=[0.1, 2, 10, 100], noise_amplitude=1,
...                            powerline_amplitude=1)
>>>
>>> # Bandpass between 10 and 30 breaths per minute (respiratory rate range)


>>> fig3 = pd.DataFrame({"Raw": signal,
...                      "Butter_2": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                                   method='butterworth', order=2),
...                      "Butter_2_BA": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                                      method='butterworth_ba', order=2),
...                      "Butter_5": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                                   method='butterworth', order=5),
...                      "Butter_5_BA": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                                      method='butterworth_ba', order=5),
...                      "Bessel_2": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                                   method='bessel', order=2),
...                      "Bessel_5": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                                   method='bessel', order=5),
...                      "FIR": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,
...                                              method='fir'),
...                      "Savgol": nk.signal_filter(signal, method='savgol')}).plot(subplots=True)
>>> fig3

signal_findpeaks(signal, height_min=None, height_max=None, relative_height_min=None, relative_height_max=None, relative_mean=True, relative_median=False, relative_max=False)
Find peaks in a signal.

Locate peaks (local maxima) in a signal and their related characteristics, such as height (prominence), width and distance with other peaks.

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• height_min (float) – The minimum height (i.e., amplitude in terms of absolute values). For example, height_min=20 will remove all peaks whose height is smaller than or equal to 20 (in the provided signal's values).
• height_max (float) – The maximum height (i.e., amplitude in terms of absolute values).
• relative_height_min (float) – The minimum height (i.e., amplitude) relative to the sample (see below). For example, relative_height_min=-2.96 will remove all peaks whose height lies below 2.96 standard deviations from the mean of the heights.
• relative_height_max (float) – The maximum height (i.e., amplitude) relative to the sample (see below).
• relative_mean (bool) – If a relative threshold is specified, how should it be computed (i.e., relative to what?). relative_mean=True will use Z-scores.


• relative_median (bool) – If a relative threshold is specified, how should it be computed (i.e., relative to what?). Relative to median uses a more robust form of standardization (see standardize()).
• relative_max (bool) – If a relative threshold is specified, how should it be computed (i.e., relative to what?). Relative to max will consider the maximum height as the reference.

Returns dict – A dict containing the following arrays:
- 'Peaks': the peak indices (relative to the given signal). For instance, the value 3 means that the third data point of the signal is a peak.
- 'Distance': for each peak, the closest distance to another peak. Note that these values will be recomputed after filtering to match the selected peaks.
- 'Height': the prominence of each peak. See scipy.signal.peak_prominences().
- 'Width': the width of each peak. See scipy.signal.peak_widths().
- 'Onsets': the onset, start (or left trough), of each peak.
- 'Offsets': the offset, end (or right trough), of each peak.
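The relative-height filtering can be pictured with scipy's peak prominences and a Z-score cut. This is a sketch of the idea with our own helper name, not the exact NeuroKit2 routine.

```python
import numpy as np
import scipy.signal

def findpeaks_relative_sketch(signal, relative_height_min=None):
    """Find peaks, then keep only those whose prominence Z-score
    (relative to the mean of all prominences) exceeds the threshold."""
    peaks, _ = scipy.signal.find_peaks(signal)
    heights = scipy.signal.peak_prominences(signal, peaks)[0]
    if relative_height_min is not None:
        z = (heights - heights.mean()) / heights.std()
        peaks = peaks[z > relative_height_min]
    return peaks

# A slow sine with small fast ripples riding on it
t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
all_peaks = findpeaks_relative_sketch(signal)
big_peaks = findpeaks_relative_sketch(signal, relative_height_min=1)
```

Raising `relative_height_min` keeps only the prominent crests of the slow sine and drops the ripple peaks.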

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>> import scipy.misc
>>>
>>> signal = nk.signal_simulate(duration=5)
>>> info = nk.signal_findpeaks(signal)
>>> fig1 = nk.events_plot([info["Onsets"], info["Peaks"]], signal)
>>> fig1
>>>
>>> signal = nk.signal_distort(signal)
>>> info = nk.signal_findpeaks(signal, height_min=1)
>>> fig2 = nk.events_plot(info["Peaks"], signal)
>>> fig2
>>>
>>> # Filter peaks
>>> ecg = scipy.misc.electrocardiogram()
>>> signal = ecg[0:1000]
>>> info1 = nk.signal_findpeaks(signal, relative_height_min=0)
>>> info2 = nk.signal_findpeaks(signal, relative_height_min=1)
>>> fig3 = nk.events_plot([info1["Peaks"], info2["Peaks"]], signal)
>>> fig3

See also: scipy.signal.find_peaks(), scipy.signal.peak_widths(), scipy.signal.peak_prominences(), eda_findpeaks(), ecg_findpeaks(), rsp_findpeaks(), signal_fixpeaks()

signal_fixpeaks(peaks, sampling_rate=1000, iterative=True, show=False, interval_min=None, interval_max=None, relative_interval_min=None, relative_interval_max=None, robust=False, method='Kubios')
Correct erroneous peak placements.

Identify and correct erroneous peak placements based on outliers in peak-to-peak differences (period).

Parameters
• peaks (list or array or DataFrame or Series or dict) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with signal_findpeaks().


If a DataFrame is passed in, it is assumed to be obtained with ecg_findpeaks() or ppg_findpeaks() and to be of the same length as the input signal.
• sampling_rate (int) – The sampling frequency of the signal that contains the peaks (in Hz, i.e., samples/second).
• iterative (bool) – Whether or not to apply the artifact correction repeatedly (results in superior artifact correction).
• show (bool) – Whether or not to visualize artifacts and artifact thresholds.
• interval_min (float) – The minimum interval between the peaks.
• interval_max (float) – The maximum interval between the peaks.
• relative_interval_min (float) – The minimum interval between the peaks as relative to the sample (expressed in standard deviations from the mean).
• relative_interval_max (float) – The maximum interval between the peaks as relative to the sample (expressed in standard deviations from the mean).
• robust (bool) – Use a robust method of standardization (see standardize()) for the relative thresholds.
• method (str) – Either "Kubios" or "neurokit". "Kubios" uses the artifact detection and correction described in Lipponen, J. A., & Tarvainen, M. P. (2019). Note that "Kubios" is only meant for peaks in ECG or PPG, while "neurokit" can be used with peaks in ECG, PPG, or respiratory data.

Returns
• peaks_clean (array) – The corrected peak locations.
• artifacts (dict) – Only if method="Kubios". A dictionary containing the indices of artifacts, accessible with the keys "ectopic", "missed", "extra", and "longshort".

See also: signal_findpeaks(), ecg_findpeaks(), ecg_peaks(), ppg_findpeaks(), ppg_peaks()
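The interval_min logic amounts to flagging peak-to-peak intervals that fall outside the given bound. A simplified sketch that simply drops offending extra peaks (the real method can also interpolate replacements for missed peaks); the helper name is ours.

```python
import numpy as np

def drop_short_intervals_sketch(peaks, sampling_rate=1000, interval_min=0.5):
    """Remove peaks that follow the previously kept peak too closely."""
    peaks = np.sort(np.asarray(peaks))
    kept = [peaks[0]]
    for p in peaks[1:]:
        # Keep a peak only if its interval (in seconds) is long enough
        if (p - kept[-1]) / sampling_rate >= interval_min:
            kept.append(p)
    return np.array(kept)

peaks = np.array([0, 1000, 1350, 2000, 3000])  # 1350 is an extra, too-close peak
clean = drop_short_intervals_sketch(peaks, interval_min=0.5)
```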

Examples

>>> import neurokit2 as nk
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>>
>>> # Kubios
>>> ecg = nk.ecg_simulate(duration=240, noise=0.25, heart_rate=70, random_state=42)
>>> rpeaks_uncorrected = nk.ecg_findpeaks(ecg)
>>> artifacts, rpeaks_corrected = nk.signal_fixpeaks(rpeaks_uncorrected, iterative=True,
...                                                  show=True, method="Kubios")
>>> rate_corrected = nk.signal_rate(rpeaks_corrected, desired_length=len(ecg))
>>> rate_uncorrected = nk.signal_rate(rpeaks_uncorrected, desired_length=len(ecg))
>>>
>>> fig, ax = plt.subplots()
>>> ax.plot(rate_uncorrected, label="heart rate without artifact correction")
>>> ax.plot(rate_corrected, label="heart rate with artifact correction")
>>> ax.legend(loc="upper right")


>>> # NeuroKit
>>> signal = nk.signal_simulate(duration=4, sampling_rate=1000, frequency=1)
>>> peaks_true = nk.signal_findpeaks(signal)["Peaks"]
>>> peaks = np.delete(peaks_true, [1])  # Create gaps
>>>
>>> signal = nk.signal_simulate(duration=20, sampling_rate=1000, frequency=1)
>>> peaks_true = nk.signal_findpeaks(signal)["Peaks"]
>>> peaks = np.delete(peaks_true, [5, 15])  # Create gaps
>>> peaks = np.sort(np.append(peaks, [1350, 11350, 18350]))  # Add artifacts
>>>
>>> peaks_corrected = nk.signal_fixpeaks(peaks=peaks, interval_min=0.5, interval_max=1.5,
...                                      method="neurokit")
>>> # Plot and shift original peaks to the right to see the difference
>>> fig = nk.events_plot([peaks + 50, peaks_corrected], signal)
>>> fig

References

• Lipponen, J. A., & Tarvainen, M. P. (2019). A robust algorithm for heart rate variability time series artefact correction using novel beat classification. Journal of Medical Engineering & Technology, 43(3), 173-181. 10.1080/03091902.2019.1640306

signal_formatpeaks(info, desired_length, peak_indices=None)
Transforms a peak-info dict into a signal of given length.

signal_interpolate(x_values, y_values, x_new=None, method='quadratic')
Interpolate a signal.

Interpolate a signal using different methods.

Parameters
• x_values (Union[list, np.array, pd.Series]) – The samples corresponding to the values to be interpolated.
• y_values (Union[list, np.array, pd.Series]) – The values to be interpolated.
• x_new (Union[list, np.array, pd.Series] or int) – The samples at which to interpolate the y_values. Samples before the first value in x_values or after the last value in x_values will be extrapolated. If an integer is passed, x_new will be considered as the desired length of the interpolated signal between the first and the last values of x_values. In that case, no extrapolation will be done for values before or after the first and the last values of x_values.
• method (str) – Method of interpolation. Can be 'linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'previous', 'next' or 'monotone_cubic'. 'zero', 'slinear', 'quadratic' and 'cubic' refer to a spline interpolation of zeroth, first, second or third order; 'previous' and 'next' simply return the previous or next value of the point. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.PchipInterpolator.html for details on the 'monotone_cubic' method.

Returns array – Vector of interpolated samples.
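These methods map onto scipy's interpolators; a sketch of the 'quadratic' and 'monotone_cubic' cases using scipy.interpolate directly (an assumption about the mapping, based on the scipy documentation linked above).

```python
import numpy as np
from scipy import interpolate

x_values = np.linspace(0, 1, num=10)
y_values = np.sin(2 * np.pi * x_values)
x_new = np.linspace(0, 1, num=100)

# 'quadratic' -> second-order spline interpolation
quad = interpolate.interp1d(x_values, y_values, kind="quadratic")(x_new)

# 'monotone_cubic' -> PCHIP, which is monotone between data points
# and therefore cannot overshoot beyond the data values
pchip = interpolate.PchipInterpolator(x_values, y_values)(x_new)
```

The no-overshoot property of PCHIP is what makes 'monotone_cubic' attractive for interpolating physiological rates.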


Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>> import matplotlib.pyplot as plt
>>>
>>> # Generate points
>>> signal = nk.signal_simulate(duration=1, sampling_rate=10)
>>>
>>> # List all interpolation methods and interpolation parameters
>>> interpolation_methods = ["zero", "linear", "quadratic", "cubic", 5, "nearest", "monotone_cubic"]
>>> x_values = np.linspace(0, 1, num=10)
>>> x_new = np.linspace(0, 1, num=1000)
>>>
>>> # Visualize all interpolations
>>> fig, ax = plt.subplots()
>>> ax.scatter(x_values, signal, label="original datapoints", zorder=3)
>>> for im in interpolation_methods:
...     signal_interpolated = nk.signal_interpolate(x_values, signal, x_new=x_new, method=im)
...     ax.plot(x_new, signal_interpolated, label=im)
>>> ax.legend(loc="upper left")

signal_merge(signal1, signal2, time1=[0, 10], time2=[0, 10])
Arbitrary addition of two signals with different time ranges.

Parameters
• signal1 (Union[list, np.array, pd.Series]) – The first signal (i.e., a time series) in the form of a vector of values.
• signal2 (Union[list, np.array, pd.Series]) – The second signal (i.e., a time series) in the form of a vector of values.
• time1 (list) – List containing two numeric values corresponding to the beginning and end of signal1.
• time2 (list) – Same as above, but for signal2.

Returns array – Vector containing the sum of the two signals.
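Merging amounts to resampling both signals onto a common time axis and summing where they overlap. A numpy-only sketch of that idea with a hypothetical helper name and an assumed fixed output sampling rate:

```python
import numpy as np

def merge_sketch(signal1, signal2, time1=(0, 10), time2=(0, 10), sampling_rate=100):
    """Place both signals on a shared time grid (zero elsewhere) and add them."""
    start = min(time1[0], time2[0])
    stop = max(time1[1], time2[1])
    n = int((stop - start) * sampling_rate)
    merged = np.zeros(n)
    for sig, (t0, t1) in [(signal1, time1), (signal2, time2)]:
        i0 = int((t0 - start) * sampling_rate)
        i1 = int((t1 - start) * sampling_rate)
        # Resample the signal onto its slice of the common grid
        merged[i0:i1] += np.interp(np.linspace(0, 1, i1 - i0),
                                   np.linspace(0, 1, len(sig)), sig)
    return merged

signal1 = np.cos(np.linspace(0, 10, 100))
signal2 = np.cos(np.linspace(0, 20, 100))
merged = merge_sketch(signal1, signal2, time1=(0, 10), time2=(-5, 5))
```

With these time ranges the output spans -5 to 10 seconds, and the two signals are only summed over their 0-to-5-second overlap.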

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal1 = np.cos(np.linspace(start=0, stop=10, num=100))
>>> signal2 = np.cos(np.linspace(start=0, stop=20, num=100))
>>>
>>> signal = nk.signal_merge(signal1, signal2, time1=[0, 10], time2=[-5, 5])
>>> nk.signal_plot(signal)

signal_period(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')
Calculate signal period from a series of peaks.

Parameters


• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with signal_findpeaks(). If a DataFrame is passed in, it is assumed to be of the same length as the input signal, with occurrences of R-peaks marked as "1"; such containers are obtained with e.g., ecg_findpeaks() or rsp_findpeaks().
• sampling_rate (int) – The sampling frequency of the signal that contains the peaks (in Hz, i.e., samples/second). Defaults to 1000.
• desired_length (int) – If left at the default None, the returned period will have the same number of elements as peaks. If set to a value larger than the sample at which the last peak occurs in the signal (i.e., peaks[-1]), the returned period will be interpolated between peaks over desired_length samples. To interpolate the period over the entire duration of the signal, set desired_length to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs in the signal. Defaults to None.
• interpolation_method (str) – Method used to interpolate the rate between peaks. See signal_interpolate(). 'monotone_cubic' is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible "overshoots" or "undershoots" in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.

Returns array – A vector containing the period.

See also: signal_findpeaks(), signal_fixpeaks(), signal_plot()
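At its core, the period is the successive difference between peak samples divided by the sampling rate (with interpolation to desired_length layered on top). A sketch of the first part; the choice to duplicate the first period so the output matches the number of peaks is our assumption.

```python
import numpy as np

def period_sketch(peaks, sampling_rate=1000):
    """Peak-to-peak intervals in seconds; the first period is duplicated
    so the output has as many elements as `peaks`."""
    periods = np.diff(peaks) / sampling_rate
    return np.insert(periods, 0, periods[0])

peaks = np.array([0, 1000, 2100, 3050])
period = period_sketch(peaks)
```

The heart rate in beats per minute follows directly as `60 / period`.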

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)
>>> info = nk.signal_findpeaks(signal)
>>>
>>> period = nk.signal_period(peaks=info["Peaks"], desired_length=len(signal))
>>> nk.signal_plot(period)

signal_phase(signal, method='radians')
Compute the phase of the signal.
The real phase has the property of rotating uniformly, leading to a uniform distribution density. The prophase typically does not fulfill this property. This function applies a nonlinear transformation to the phase signal that makes its distribution exactly uniform. If a binary vector is provided (containing 2 unique values), the function will compute the phase of completion of each phase as denoted by each value.
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• method (str) – The values in which the phase is expressed. Can be 'radians' (default), 'degrees' (for values between 0 and 360) or 'percents' (for values between 0 and 1).
See also: signal_filter(), signal_zerocrossings(), signal_findpeaks()

Returns array – A vector containing the phase of the signal, between 0 and 2*pi.


Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10)
>>> phase = nk.signal_phase(signal)
>>> nk.signal_plot([signal, phase])
>>>
>>> rsp = nk.rsp_simulate(duration=30)
>>> phase = nk.signal_phase(rsp, method="degrees")
>>> nk.signal_plot([rsp, phase])
>>>
>>> # Percentage of completion of two phases
>>> signal = nk.signal_binarize(nk.signal_simulate(duration=10))
>>> phase = nk.signal_phase(signal, method="percents")
>>> nk.signal_plot([signal, phase])

signal_plot(signal, sampling_rate=None, subplots=False, standardize=False, labels=None, **kwargs)
Plot signal with events as vertical lines.
Parameters
• signal (array or DataFrame) – Signal array (can be a dataframe with many signals).
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second). Needs to be supplied if the data should be plotted over time in seconds. Otherwise the data is plotted over samples. Defaults to None.
• subplots (bool) – If True, each signal is plotted in a subplot.
• standardize (bool) – If True, all signals will have the same scale (useful for visualisation).
• labels (str or list) – Defaults to None.
• **kwargs (optional) – Arguments passed to matplotlib plotting.

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, sampling_rate=1000)
>>> nk.signal_plot(signal, labels='signal1', sampling_rate=1000, color="red")
>>>
>>> data = pd.DataFrame({"Signal2": np.cos(np.linspace(start=0, stop=20, num=1000)),
...                      "Signal3": np.sin(np.linspace(start=0, stop=20, num=1000)),
...                      "Signal4": nk.signal_binarize(np.cos(np.linspace(start=0, stop=40, num=1000)))})
>>> nk.signal_plot(data, labels=['signal_1', 'signal_2', 'signal_3'], subplots=False)
>>> nk.signal_plot([signal, data], standardize=True)

signal_power(signal, frequency_band, sampling_rate=1000, continuous=False, show=False, **kwargs)
Compute the power of a signal in a given frequency band.
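Conceptually, band power can be obtained by integrating a power spectral density over the band of interest. A minimal sketch using SciPy's Welch estimator (a simplified illustration, not NeuroKit's exact implementation):

```python
import numpy as np
from scipy.signal import welch

sampling_rate = 1000
t = np.arange(0, 10, 1 / sampling_rate)
# Strong 5 Hz component plus a weaker 20 Hz component
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# Welch PSD, then sum the density within each band (rectangle rule)
freqs, psd = welch(signal, fs=sampling_rate, nperseg=2048)

def band_power(freqs, psd, band):
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

power_5hz = band_power(freqs, psd, (4, 6))     # band containing the strong component
power_20hz = band_power(freqs, psd, (18, 22))  # band containing the weak component
```

Since the 5 Hz oscillation has twice the amplitude of the 20 Hz one, its band power comes out roughly four times larger.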


Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• frequency_band (tuple or list) – Tuple or list of tuples indicating the range of frequencies to compute the power in.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).
• continuous (bool) – Compute instant frequency, or continuous power.
• show (bool) – If True, will return a plot of the power in each frequency band. Defaults to False.
• **kwargs – Keyword arguments to be passed to signal_psd().
See also: signal_filter(), signal_psd()

Returns pd.DataFrame – A DataFrame containing the Power Spectrum values and a plot if show is True.

Examples

>>> import neurokit2 as nk
>>> import numpy as np
>>>
>>> # Instant power
>>> signal = nk.signal_simulate(frequency=5) + 0.5 * nk.signal_simulate(frequency=20)
>>> power_plot = nk.signal_power(signal, frequency_band=[(18, 22), (10, 14)], method="welch", show=True)
>>> power_plot
>>>
>>> # Continuous (simulated signal)
>>> signal = np.concatenate((nk.ecg_simulate(duration=30, heart_rate=75), nk.ecg_simulate(duration=30, heart_rate=85)))
>>> power = nk.signal_power(signal, frequency_band=[(72/60, 78/60), (82/60, 88/60)], continuous=True)
>>> processed, _ = nk.ecg_process(signal)
>>> power["ECG_Rate"] = processed["ECG_Rate"]
>>> nk.signal_plot(power, standardize=True)
>>>
>>> # Continuous (real signal)
>>> signal = nk.data("bio_eventrelated_100hz")["ECG"]
>>> power = nk.signal_power(signal, sampling_rate=100, frequency_band=[(0.12, 0.15), (0.15, 0.4)], continuous=True)
>>> processed, _ = nk.ecg_process(signal, sampling_rate=100)
>>> power["ECG_Rate"] = processed["ECG_Rate"]
>>> nk.signal_plot(power, standardize=True)

signal_psd(signal, sampling_rate=1000, method='welch', show=True, min_frequency=0, max_frequency=inf, window=None, ar_order=15, order_criteria='KIC', order_corrected=True, burg_norm=True)
Compute the Power Spectral Density (PSD).
Parameters


• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).
• method (str) – Either 'welch' (default; requires the 'scipy' package), 'multitapers' (requires the 'mne' package), or 'burg'.
• show (bool) – If True, will return a plot. If False, will return the density values that can be plotted externally.
• min_frequency (float) – The minimum frequency.
• max_frequency (float) – The maximum frequency.
• window (int) – Length of each window in seconds (for the Welch method). If None (default), the window is automatically calculated to capture at least 2 cycles of min_frequency. If the length of the recording does not allow for that, the window defaults to half the length of the recording.
• ar_order (int) – The order of autoregression (for AR methods, e.g., Burg).
• order_criteria (str) – The criteria used to automatically select the order in parametric PSD (for AR methods, e.g., Burg).
• order_corrected (bool) – Specify for AIC and KIC order_criteria. If unsure which method to use to choose the order, rely on the default of corrected KIC.
• burg_norm (bool) – Normalization for the Burg method.
See also: signal_filter(), mne.time_frequency.psd_array_multitaper(), scipy.signal.welch()

Returns pd.DataFrame – A DataFrame containing the Power Spectrum values and a plot if show is True.

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(frequency=5) + 0.5 * nk.signal_simulate(frequency=20)
>>>
>>> fig1 = nk.signal_psd(signal, method="multitapers")
>>> fig1
>>> fig2 = nk.signal_psd(signal, method="welch", min_frequency=1)
>>> fig2
>>> fig3 = nk.signal_psd(signal, method="burg", min_frequency=1)
>>>
>>> data = nk.signal_psd(signal, method="multitapers", max_frequency=30, show=False)
>>> fig4 = data.plot(x="Frequency", y="Power")
>>> fig4
>>> data = nk.signal_psd(signal, method="welch", max_frequency=30, show=False, min_frequency=1)
>>> fig5 = data.plot(x="Frequency", y="Power")
>>> fig5
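The automatic window sizing described for the window parameter — at least 2 cycles of min_frequency, capped at half the recording length — can be sketched as follows (a simplified illustration; the exact NeuroKit logic may differ):

```python
import numpy as np
from scipy.signal import welch

def welch_psd(signal, sampling_rate=1000, min_frequency=0.5, window=None):
    """Welch PSD with a window long enough to resolve min_frequency."""
    if window is None:
        # At least 2 full cycles of the slowest frequency of interest...
        window = 2 / min_frequency
        # ...but no longer than half the recording.
        window = min(window, len(signal) / sampling_rate / 2)
    nperseg = int(window * sampling_rate)
    freqs, psd = welch(signal, fs=sampling_rate, nperseg=nperseg)
    return freqs, psd

t = np.arange(0, 10, 1 / 1000)
signal = np.sin(2 * np.pi * 5 * t)
freqs, psd = welch_psd(signal, sampling_rate=1000, min_frequency=1)
peak_frequency = freqs[np.argmax(psd)]  # should land near 5 Hz
```

A longer window improves frequency resolution (bin width = 1/window Hz) at the cost of fewer averaged segments, which is why the heuristic ties the window to the lowest frequency of interest.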

signal_rate(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')
Calculate signal rate from a series of peaks. This function can also be called via ecg_rate(), ppg_rate() or rsp_rate() (aliases provided for consistency).
Parameters
• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with signal_findpeaks(). If a DataFrame is passed in, it is assumed to be of the same length as the input signal, with occurrences of R-peaks marked as "1"; such containers are obtained with e.g., ecg_findpeaks() or rsp_findpeaks().
• sampling_rate (int) – The sampling frequency of the signal that contains the peaks (in Hz, i.e., samples/second). Defaults to 1000.
• desired_length (int) – If left at the default None, the returned rate will have the same number of elements as peaks. If set to a value larger than the sample at which the last peak occurs in the signal (i.e., peaks[-1]), the returned rate will be interpolated between peaks over desired_length samples. To interpolate the rate over the entire duration of the signal, set desired_length to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs. Defaults to None.
• interpolation_method (str) – Method used to interpolate the rate between peaks. See signal_interpolate(). 'monotone_cubic' is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible "overshoots" or "undershoots" in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.
Returns array – A vector containing the rate.
See also: signal_findpeaks(), signal_fixpeaks(), signal_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)
>>> info = nk.signal_findpeaks(signal)
>>>
>>> rate = nk.signal_rate(peaks=info["Peaks"], desired_length=len(signal))
>>> fig = nk.signal_plot(rate)
>>> fig

signal_recompose(components, method='wcorr', threshold=0.5, keep_sd=None, **kwargs)
Combine signal sources after decomposition.
Combine and reconstruct meaningful signal sources after signal decomposition.
Parameters
• components (array) – Array of components obtained via signal_decompose().
• method (str) – The decomposition method. Can be one of 'wcorr'.
• threshold (float) – The threshold used to group components together.


• keep_sd (float) – If a float is specified, will only keep the reconstructed components that are greater than or equal to that percentage of the maximum standard deviation (SD) of the components. For instance, keep_sd=0.01 will remove all components whose SD is lower than 1% of the max SD. This can be used to filter out noise.
• **kwargs – Other arguments used to override, for instance metric='chebyshev'.
Returns array – The recomposed components.

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> # Create complex signal
>>> signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)  # High freq
>>> signal += 3 * nk.signal_simulate(duration=10, frequency=3, noise=0.01)  # Higher freq
>>> signal += 3 * np.linspace(0, 2, len(signal))  # Add baseline and trend
>>> signal += 2 * nk.signal_simulate(duration=10, frequency=0.1, noise=0)
>>>
>>> # Decompose signal
>>> components = nk.signal_decompose(signal, method='emd')
>>>
>>> # Recompose
>>> recomposed = nk.signal_recompose(components, method='wcorr', threshold=0.90)
>>> fig = nk.signal_plot(components)  # Visualize components
>>> fig

signal_resample(signal, desired_length=None, sampling_rate=None, desired_sampling_rate=None, method='interpolation')
Resample a continuous signal to a different length or sampling rate.
Up- or down-sample a signal. The user can specify either a desired length for the vector, or input the original sampling rate and the desired sampling rate. See https://github.com/neuropsychology/NeuroKit/scripts/resampling.ipynb for a comparison of the methods.
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• desired_length (int) – The desired length of the signal.
• sampling_rate (int) – The original sampling frequency (in Hz, i.e., samples/second).
• desired_sampling_rate (int) – The desired (output) sampling frequency (in Hz, i.e., samples/second).
• method (str) – Can be 'interpolation' (see scipy.ndimage.zoom()), 'numpy' for numpy's interpolation (see numpy.interp()), 'pandas' for Pandas' time series resampling, 'poly' (see scipy.signal.resample_poly()) or 'FFT' (see scipy.signal.resample()) for the Fourier method. FFT is the most accurate (if the signal is periodic), but becomes exponentially slower as the signal length increases. In contrast, 'interpolation' is the fastest, followed by 'numpy', 'poly' and 'pandas'.
Returns array – Vector containing resampled signal values.


Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = np.cos(np.linspace(start=0, stop=20, num=100))
>>>
>>> # Downsample
>>> downsampled_interpolation = nk.signal_resample(signal, method="interpolation", sampling_rate=1000, desired_sampling_rate=500)
>>> downsampled_fft = nk.signal_resample(signal, method="FFT", sampling_rate=1000, desired_sampling_rate=500)
>>> downsampled_poly = nk.signal_resample(signal, method="poly", sampling_rate=1000, desired_sampling_rate=500)
>>> downsampled_numpy = nk.signal_resample(signal, method="numpy", sampling_rate=1000, desired_sampling_rate=500)
>>> downsampled_pandas = nk.signal_resample(signal, method="pandas", sampling_rate=1000, desired_sampling_rate=500)
>>>
>>> # Upsample
>>> upsampled_interpolation = nk.signal_resample(downsampled_interpolation, method="interpolation", sampling_rate=500, desired_sampling_rate=1000)
>>> upsampled_fft = nk.signal_resample(downsampled_fft, method="FFT", sampling_rate=500, desired_sampling_rate=1000)
>>> upsampled_poly = nk.signal_resample(downsampled_poly, method="poly", sampling_rate=500, desired_sampling_rate=1000)
>>> upsampled_numpy = nk.signal_resample(downsampled_numpy, method="numpy", sampling_rate=500, desired_sampling_rate=1000)
>>> upsampled_pandas = nk.signal_resample(downsampled_pandas, method="pandas", sampling_rate=500, desired_sampling_rate=1000)
>>>
>>> # Compare with original
>>> fig = pd.DataFrame({"Original": signal,
...                     "Interpolation": upsampled_interpolation,
...                     "FFT": upsampled_fft,
...                     "Poly": upsampled_poly,
...                     "Numpy": upsampled_numpy,
...                     "Pandas": upsampled_pandas}).plot(style='.-')
>>> fig
>>>
>>> # Timing benchmarks (IPython)
>>> %timeit nk.signal_resample(signal, method="interpolation", sampling_rate=1000, desired_sampling_rate=500)
>>> %timeit nk.signal_resample(signal, method="FFT", sampling_rate=1000, desired_sampling_rate=500)
>>> %timeit nk.signal_resample(signal, method="poly", sampling_rate=1000, desired_sampling_rate=500)
>>> %timeit nk.signal_resample(signal, method="numpy", sampling_rate=1000, desired_sampling_rate=500)
>>> %timeit nk.signal_resample(signal, method="pandas", sampling_rate=1000, desired_sampling_rate=500)
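A minimal sketch of the 'numpy' strategy — linear interpolation onto a new, evenly spaced time grid (a simplified stand-in for signal_resample(), not its actual implementation):

```python
import numpy as np

def resample_linear(signal, sampling_rate, desired_sampling_rate):
    """Resample by linear interpolation onto a new, evenly spaced time grid."""
    signal = np.asarray(signal, dtype=float)
    duration = len(signal) / sampling_rate
    desired_length = int(round(duration * desired_sampling_rate))
    t_old = np.linspace(0, duration, len(signal), endpoint=False)
    t_new = np.linspace(0, duration, desired_length, endpoint=False)
    return np.interp(t_new, t_old, signal)

signal = np.cos(np.linspace(0, 20, 1000))
# 1000 samples at 1000 Hz (1 s of data) -> 500 samples at 500 Hz
downsampled = resample_linear(signal, sampling_rate=1000, desired_sampling_rate=500)
```

Linear interpolation is cheap and artifact-free for smooth signals, but unlike the FFT method it applies no anti-aliasing, so strong downsampling of high-frequency content can alias.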

See also: scipy.signal.resample_poly(), scipy.signal.resample(), scipy.ndimage.zoom()
signal_simulate(duration=10, sampling_rate=1000, frequency=1, amplitude=0.5, noise=0, silent=False)
Simulate a continuous signal.
Parameters
• duration (float) – Desired length of duration (s).
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).
• frequency (float or list) – Oscillatory frequency of the signal (in Hz, i.e., oscillations per second).
• amplitude (float or list) – Amplitude of the oscillations.
• noise (float) – Noise level (amplitude of the Laplace noise).
• silent (bool) – If False (default), might print warnings if impossible frequencies are queried.
Returns array – The simulated signal.

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> fig = pd.DataFrame({"1Hz": nk.signal_simulate(duration=5, frequency=1),
...                     "2Hz": nk.signal_simulate(duration=5, frequency=2),
...                     "Multi": nk.signal_simulate(duration=5, frequency=[0.5, 3], amplitude=[0.5, 0.2])}).plot()
>>> fig

signal_smooth(signal, method='convolution', kernel='boxzen', size=10, alpha=0.1)
Signal smoothing.
Signal smoothing can be achieved using either the convolution of a filter kernel with the input signal to compute the smoothed signal (Smith, 1997) or a LOESS regression.
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• method (str) – Can be one of 'convolution' (default) or 'loess'.
• kernel (Union[str, np.array]) – Only used if method is 'convolution'. Type of kernel to use; if array, use directly as the kernel. Can be one of 'median', 'boxzen', 'boxcar', 'triang', 'blackman', 'hamming', 'hann', 'bartlett', 'flattop', 'parzen', 'bohman',


'blackmanharris', 'nuttall', 'barthann', 'kaiser' (needs beta), 'gaussian' (needs std), 'general_gaussian' (needs power, width), 'slepian' (needs width) or 'chebwin' (needs attenuation).
• size (int) – Only used if method is 'convolution'. Size of the kernel; ignored if kernel is an array.
• alpha (float) – Only used if method is 'loess'. The parameter which controls the degree of smoothing.
Returns array – Smoothed signal.
See also: fit_loess()

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = np.cos(np.linspace(start=0, stop=10, num=1000))
>>> distorted = nk.signal_distort(signal, noise_amplitude=[0.3, 0.2, 0.1, 0.05], noise_frequency=[5, 10, 50, 100])
>>>
>>> size = len(signal) // 100
>>> signals = pd.DataFrame({"Raw": distorted,
...                         "Median": nk.signal_smooth(distorted, kernel='median', size=size - 1),
...                         "BoxZen": nk.signal_smooth(distorted, kernel='boxzen', size=size),
...                         "Triang": nk.signal_smooth(distorted, kernel='triang', size=size),
...                         "Blackman": nk.signal_smooth(distorted, kernel='blackman', size=size),
...                         "Loess_01": nk.signal_smooth(distorted, method='loess', alpha=0.1),
...                         "Loess_02": nk.signal_smooth(distorted, method='loess', alpha=0.2),
...                         "Loess_05": nk.signal_smooth(distorted, method='loess', alpha=0.5)})
>>> fig = signals.plot()
>>> fig_magnify = signals[50:150].plot()  # Magnify
>>> fig_magnify
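The convolution approach boils down to sliding a normalized kernel over the signal. A minimal boxcar (moving-average) sketch with NumPy (illustrative only; NeuroKit additionally handles edge padding and the many other kernels listed above):

```python
import numpy as np

def smooth_boxcar(signal, size=10):
    """Smooth a signal by convolving it with a normalized boxcar kernel."""
    kernel = np.ones(size) / size  # each output sample is a local mean
    # mode="same" keeps the output the same length as the input
    return np.convolve(signal, kernel, mode="same")

rng = np.random.RandomState(42)
noisy = np.cos(np.linspace(0, 10, 1000)) + rng.normal(0, 0.3, 1000)
smoothed = smooth_boxcar(noisy, size=20)
```

Averaging over 20 samples attenuates the sample-to-sample noise while leaving the slow cosine largely intact; wider kernels smooth more but blur fast features.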

References

• Smith, S. W. (1997). The scientist and engineer's guide to digital signal processing.

signal_synchrony(signal1, signal2, method='hilbert', window_size=50)
Compute the synchrony (coupling) between two signals.
Compute a continuous index of coupling between two signals, either using the 'hilbert' method to get the instantaneous phase synchrony, or using a rolling-window correlation.


The instantaneous phase synchrony measures the phase similarity between the two signals at each timepoint. The phase refers to the angle of the signal, calculated through the Hilbert transform, as it oscillates between -pi and pi radians. When two signals line up in phase, their angular difference becomes zero.
For less clean signals, windowed correlations are widely used because of their simplicity, and can be a good and robust approximation of the synchrony between two signals. The limitation is the need to select a window size.
Parameters
• signal1 (Union[list, np.array, pd.Series]) – Time series in the form of a vector of values.
• signal2 (Union[list, np.array, pd.Series]) – Time series in the form of a vector of values.
• method (str) – The method to use. Can be one of 'hilbert' or 'correlation'.
• window_size (int) – Only used if method='correlation'. The number of samples to use for the rolling correlation.
See also: signal_filter(), signal_zerocrossings(), signal_findpeaks()

Returns array – A vector containing the continuous synchrony index.

Examples

>>> import neurokit2 as nk
>>>
>>> signal1 = nk.signal_simulate(duration=10, frequency=1)
>>> signal2 = nk.signal_simulate(duration=10, frequency=1.5)
>>>
>>> coupling_h = nk.signal_synchrony(signal1, signal2, method="hilbert")
>>> coupling_c = nk.signal_synchrony(signal1, signal2, method="correlation", window_size=500)
>>>
>>> fig = nk.signal_plot([signal1, signal2, coupling_h, coupling_c])
>>> fig
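A minimal sketch of the 'hilbert' strategy — instantaneous phase via the analytic signal, then a synchrony index from the phase difference (a simplified illustration following the tutorial referenced below; NeuroKit's exact formula may differ):

```python
import numpy as np
from scipy.signal import hilbert

def phase_synchrony(signal1, signal2):
    """Instantaneous phase synchrony: 1 = perfectly in phase, 0 = anti-phase."""
    phase1 = np.angle(hilbert(signal1))  # instantaneous phase in [-pi, pi]
    phase2 = np.angle(hilbert(signal2))
    return 1 - np.sin(np.abs(phase1 - phase2) / 2)

t = np.linspace(0, 10, 1000)
signal1 = np.sin(2 * np.pi * 1 * t)
signal2 = np.sin(2 * np.pi * 1 * t)  # identical signal -> synchrony of 1 throughout

synchrony = phase_synchrony(signal1, signal2)
```

For identical signals the phase difference is zero everywhere, so the index sits at 1; for signals drifting in and out of phase it oscillates between 0 and 1.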

References

• http://jinhyuncheong.com/jekyll/update/2017/12/10/Timeseries_synchrony_tutorial_and_simulations.html

signal_zerocrossings(signal, direction='both')
Locate the indices where the signal crosses zero.
Note that when the signal crosses zero between two points, the index of the first point is returned.
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• direction (str) – Direction in which the signal crosses zero; can be 'positive', 'negative' or 'both' (default).
Returns array – Vector containing the indices of zero crossings.


Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> signal = np.cos(np.linspace(start=0, stop=15, num=1000))
>>> zeros = nk.signal_zerocrossings(signal)
>>> fig = nk.events_plot(zeros, signal)
>>> fig
>>>
>>> # Only upward or downward zero crossings
>>> up = nk.signal_zerocrossings(signal, direction='up')
>>> down = nk.signal_zerocrossings(signal, direction='down')
>>> fig = nk.events_plot([up, down], signal)
>>> fig
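Zero-crossing detection reduces to finding sign changes between consecutive samples. A NumPy sketch (illustrative; not NeuroKit's exact implementation):

```python
import numpy as np

def zerocrossings(signal, direction="both"):
    """Indices of the sample just before the signal crosses zero."""
    signs = np.sign(signal)
    diff = np.diff(signs)          # nonzero where the sign flips
    if direction == "up":          # negative -> positive
        return np.where(diff > 0)[0]
    if direction == "down":        # positive -> negative
        return np.where(diff < 0)[0]
    return np.where(diff != 0)[0]

signal = np.cos(np.linspace(0, 15, 1000))
crossings = zerocrossings(signal)  # cos crosses zero at odd multiples of pi/2
```

On [0, 15] the cosine crosses zero five times (at pi/2, 3pi/2, 5pi/2, 7pi/2 and 9pi/2), three of them downward, which the direction filter isolates.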

7.9 Events

Submodule for NeuroKit.
events_find(event_channel, threshold='auto', threshold_keep='above', start_at=0, end_at=None, duration_min=1, duration_max=None, inter_min=0, discard_first=0, discard_last=0, event_labels=None, event_conditions=None)
Find and select events in a continuous signal (e.g., from a photosensor).
Parameters
• event_channel (array or list) – The channel containing the events.
• threshold (str or float) – The threshold value by which to select the events. If "auto", takes the value halfway between the max and the min.
• threshold_keep (str) – "above" or "below", defines whether events lie above or below the threshold. For photosensors, a white screen usually corresponds to higher values. Therefore, if your events are signaled by a black colour, the event values are the lower ones, and you should set the cut to "below".
• start_at (int) – Keep only events whose onset is after a particular time point.
• end_at (int) – Keep only events whose onset is before a particular time point.
• duration_min (int) – The minimum duration of an event to be considered as such (in time points).
• duration_max (int) – The maximum duration of an event to be considered as such (in time points).
• inter_min (int) – The minimum duration after an event for the subsequent event to be considered as such (in time points). Useful when spurious consecutive events are created due to a very high sampling rate.
• discard_first (int) – Discard the first n events. Useful if the experiment starts with some spurious events. If discard_first=0, no first event is removed.
• discard_last (int) – Discard the last n events. Useful if the experiment ends with some spurious events. If discard_last=0, no last event is removed.
• event_labels (list) – A list containing unique event identifiers. If None, will use the event index number.


• event_conditions (list) – An optional list containing, for each event, for example the trial category, group or experimental conditions. Returns dict – Dict containing 3 or 4 arrays, ‘onset’ for event onsets, ‘duration’ for event durations, ‘label’ for the event identifiers and the optional ‘conditions’ passed to event_conditions. See also: events_plot(), events_to_mne()

Example

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=4)
>>> events = nk.events_find(signal)
>>> events
{'onset': array(...), 'duration': array(...), 'label': array(...)}
>>>
>>> nk.events_plot(events, signal)
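The core of threshold-based event detection can be sketched in a few lines of NumPy — binarize against the threshold, then take the edges of each run (a simplified illustration of what events_find() does; the real function adds duration filters, inter-event gaps, labels, etc.):

```python
import numpy as np

def find_events(channel, threshold=None, keep="above"):
    """Return onsets and durations of runs above (or below) the threshold."""
    channel = np.asarray(channel, dtype=float)
    if threshold is None:  # "auto": halfway between min and max
        threshold = (channel.max() + channel.min()) / 2
    binary = channel > threshold if keep == "above" else channel < threshold
    # Edges of the binary vector mark event onsets (+1) and offsets (-1)
    edges = np.diff(binary.astype(int), prepend=0, append=0)
    onsets = np.where(edges == 1)[0]
    offsets = np.where(edges == -1)[0]
    return onsets, offsets - onsets  # onsets, durations

# Toy photosensor channel: two "white screen" events
channel = np.array([0, 0, 5, 5, 5, 0, 0, 5, 5, 0])
onsets, durations = find_events(channel)
```

Padding the binary vector with zeros on both sides guarantees that events touching the recording's edges still produce a matched onset/offset pair.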

events_plot(events, signal=None, show=True, color='red', linestyle='--') Plot events in signal. Parameters • events (list or ndarray or dict) – Events onset location. Can also be a list of lists, in which case it will mark them with different colors. If a dict is passed (e.g., from ‘events_find()’), will select only the ‘onset’ list. • signal (array or DataFrame) – Signal array (can be a dataframe with many signals). • show (bool) – If True, will return a plot. If False, will return a DataFrame that can be plotted externally. • color (str) – Argument passed to matplotlib plotting. • linestyle (str) – Argument passed to matplotlib plotting. Returns fig – Figure representing a plot of the signal and the event markers. See also: events_find()


Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> fig = nk.events_plot([1, 3, 5])
>>> fig
>>>
>>> # With signal
>>> signal = nk.signal_simulate(duration=4)
>>> events = nk.events_find(signal)
>>> fig1 = nk.events_plot(events, signal)
>>> fig1
>>>
>>> # Different events
>>> events1 = events["onset"]
>>> events2 = np.linspace(0, len(signal), 8)
>>> fig2 = nk.events_plot([events1, events2], signal)
>>> fig2
>>>
>>> # Conditions
>>> events = nk.events_find(signal, event_conditions=["A", "B", "A", "B"])
>>> fig3 = nk.events_plot(events, signal)
>>> fig3
>>>
>>> # Different colors for all events
>>> signal = nk.signal_simulate(duration=20)
>>> events = nk.events_find(signal)
>>> events = [[i] for i in events['onset']]
>>> fig4 = nk.events_plot(events, signal)
>>> fig4

events_to_mne(events, event_conditions=None)
Create MNE-compatible events for integration with M/EEG.
Parameters
• events (list or ndarray or dict) – Events onset location. Can also be a dict obtained through events_find().
• event_conditions (list) – An optional list containing, for each event, for example the trial category, group or experimental conditions. Defaults to None.
Returns tuple – The MNE-formatted events, which can be added via raw.add_events(events), and the event_id dictionary mapping condition names to integer codes.
See also: events_find()


Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=4)
>>> events = nk.events_find(signal)
>>> events, event_id = nk.events_to_mne(events)
>>> events
array([[   1,    0,    0],
       [1001,    0,    0],
       [2001,    0,    0],
       [3001,    0,    0]])
>>> event_id
{'event': 0}
>>>
>>> # Conditions
>>> events = nk.events_find(signal, event_conditions=["A", "B", "A", "B"])
>>> events, event_id = nk.events_to_mne(events)
>>> event_id
{'B': 0, 'A': 1}
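MNE expects events as an (n_events, 3) integer array of [sample, 0, event_code], plus an event_id mapping from condition names to codes. Building one from onsets and conditions can be sketched as follows (illustrative; not NeuroKit's exact code, and the code assignment order may differ from the output shown above):

```python
import numpy as np

def to_mne_events(onsets, conditions=None):
    """Build an MNE-style (n, 3) event array and an event_id mapping."""
    if conditions is None:
        conditions = ["event"] * len(onsets)
    # One integer code per unique condition, in order of first appearance
    event_id = {name: code for code, name in enumerate(dict.fromkeys(conditions))}
    codes = [event_id[c] for c in conditions]
    events = np.column_stack([onsets, np.zeros(len(onsets), dtype=int), codes])
    return events, event_id

events, event_id = to_mne_events([1, 1001, 2001, 3001], ["A", "B", "A", "B"])
```

The middle column (the "previous sample value") is zeroed here, which is what most MNE workflows expect for discrete stimulus markers.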

7.10 Data

Submodule for NeuroKit.
data(dataset='bio_eventrelated_100hz')
Download example datasets.
Download and load available example datasets. Note that an internet connection is necessary.
Parameters dataset (str) – The name of the dataset. The list and description is available here.
Returns DataFrame – The data.

Examples

>>> import neurokit2 as nk
>>>
>>> data = nk.data("bio_eventrelated_100hz")

read_acqknowledge(filename, sampling_rate='max', resample_method='interpolation', impute_missing=True)
Read and format a BIOPAC AcqKnowledge file into a pandas DataFrame.
The function outputs both the dataframe and the sampling rate (encoded within the AcqKnowledge file).
Parameters
• filename (str) – Filename (with or without the extension) of a BIOPAC AcqKnowledge file.
• sampling_rate (int) – Sampling rate (in Hz, i.e., samples/second). Since an AcqKnowledge file can contain signals recorded at different rates, harmonization is necessary in order to convert it to a DataFrame. Thus, if sampling_rate is set to 'max' (default), will keep the maximum recorded sampling rate and upsample the channels with lower rate if


necessary (using the signal_resample() function). If the sampling rate is set to a given value, will resample the signals to the desired value. Note that the value of the sampling rate is outputted along with the data.
• resample_method (str) – Method of resampling (see signal_resample()).
• impute_missing (bool) – Sometimes, due to connection issues, the signal has some holes (short periods without signal). If impute_missing is True, will automatically fill the signal interruptions using padding.
Returns
• df (DataFrame) – The AcqKnowledge file converted to a dataframe.
• sampling rate (int) – The sampling rate of the returned dataframe.
See also: signal_resample()

Example

>>> import neurokit2 as nk
>>>
>>> data, sampling_rate = nk.read_acqknowledge('file.acq')
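The rate harmonization described above — upsampling every channel to the fastest channel's rate before assembling the DataFrame — can be sketched as follows (illustrative only; the real function reads channels and their rates from the .acq file, and `harmonize_channels` is a hypothetical helper):

```python
import numpy as np
import pandas as pd

def harmonize_channels(channels):
    """channels: dict of name -> (samples, sampling_rate). Upsample all to the max rate."""
    max_rate = max(rate for _, rate in channels.values())
    harmonized = {}
    for name, (samples, rate) in channels.items():
        samples = np.asarray(samples, dtype=float)
        n_out = int(len(samples) * max_rate / rate)
        # Linear interpolation onto a common normalized time axis
        x_old = np.linspace(0, 1, len(samples))
        x_new = np.linspace(0, 1, n_out)
        harmonized[name] = np.interp(x_new, x_old, samples)
    return pd.DataFrame(harmonized), max_rate

channels = {"ECG": (np.random.RandomState(0).randn(2000), 1000),  # 2 s at 1000 Hz
            "EDA": (np.random.RandomState(1).randn(200), 100)}    # 2 s at 100 Hz
df, sampling_rate = harmonize_channels(channels)
```

After harmonization every column has the same length, which is what allows the channels to live in a single DataFrame.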

7.11 Epochs

Submodule for NeuroKit.
epochs_create(data, events=None, sampling_rate=1000, epochs_start=0, epochs_end=1, event_labels=None, event_conditions=None, baseline_correction=False)
Epoching a dataframe.
Parameters
• data (DataFrame) – A DataFrame containing the different signal(s) as different columns. If a vector of values is passed, it will be transformed into a DataFrame with a single 'Signal' column.
• events (list or ndarray or dict) – Events onset location. If a dict is passed (e.g., from events_find()), will select only the 'onset' list. If an integer is passed, will use this number to create an evenly spaced list of events. If None, will chunk the signal into successive blocks of the set duration.
• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).
• epochs_start (int) – Epochs start relative to the event onsets (in seconds). The start can be negative to start epochs before a given event (to have a baseline, for instance).
• epochs_end (int) – Epochs end relative to the event onsets (in seconds).
• event_labels (list) – A list containing unique event identifiers. If None, will use the event index number.
• event_conditions (list) – An optional list containing, for each event, for example the trial category, group or experimental conditions.
• baseline_correction (bool) – Defaults to False.


Returns dict – A dict containing DataFrames for all epochs. See also: events_find(), events_plot(), epochs_to_df(), epochs_plot()

Examples

>>> import neurokit2 as nk
>>>
>>> # Get data
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Find events
>>> events = nk.events_find(data["Photosensor"],
...                         threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> fig1 = nk.events_plot(events, data)
>>> fig1
>>>
>>> # Create epochs
>>> epochs = nk.epochs_create(data, events, sampling_rate=100, epochs_end=3)
>>> fig2 = nk.epochs_plot(epochs)
>>> fig2
>>>
>>> # Baseline correction
>>> epochs = nk.epochs_create(data, events, sampling_rate=100, epochs_end=3, baseline_correction=True)
>>> fig3 = nk.epochs_plot(epochs)
>>> fig3
>>>
>>> # Chunk into n blocks of 1 second
>>> epochs = nk.epochs_create(data, sampling_rate=100, epochs_end=1)

epochs_plot(epochs, legend=True, show=True)
Plot epochs.
Parameters
• epochs (dict) – A dict containing one DataFrame per event/trial. Usually obtained via epochs_create().
• legend (bool) – Display the legend (the key of each epoch).
• show (bool) – If True, will return a plot. If False, will return a DataFrame that can be plotted externally.
Returns epochs (dict) – dict containing all epochs.
See also: events_find(), events_plot(), epochs_create(), epochs_to_df()


Examples

>>> import neurokit2 as nk
>>>
>>> # Example with data
>>> data = nk.data("bio_eventrelated_100hz")
>>> events = nk.events_find(data["Photosensor"],
...                         threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> epochs = nk.epochs_create(data, events, sampling_rate=200, epochs_end=1)
>>> fig1 = nk.epochs_plot(epochs)
>>> fig1
>>>
>>> # Example with ECG peaks
>>> signal = nk.ecg_simulate(duration=10)
>>> events = nk.ecg_findpeaks(signal)
>>> epochs = nk.epochs_create(signal, events=events["ECG_R_Peaks"], epochs_start=-0.5, epochs_end=0.5)
>>> fig2 = nk.epochs_plot(epochs)
>>> fig2

epochs_to_array(epochs)
Convert epochs to an array.
TODO: make it work with uneven epochs (i.e., epochs that do not have the same length).
Parameters epochs (dict) – A dict containing one DataFrame per event/trial. Usually obtained via epochs_create().
Returns array – An array containing all signals.
See also: events_find(), events_plot(), epochs_create(), epochs_plot()
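For equal-length, single-signal epochs, the conversion amounts to stacking each epoch's values column-wise (a simplified sketch under that assumption; as the TODO notes, uneven epochs are not handled):

```python
import numpy as np
import pandas as pd

def epochs_to_array(epochs):
    """Stack equal-length single-column epochs into a (n_samples, n_epochs) array."""
    return np.column_stack([df["Signal"].to_numpy() for df in epochs.values()])

# Hypothetical epochs dict, shaped like what epochs_create() returns for a 1-D signal
epochs = {"1": pd.DataFrame({"Signal": np.arange(5.0)}),
          "2": pd.DataFrame({"Signal": np.arange(5.0) + 10})}
X = epochs_to_array(epochs)  # one column per epoch; X.T gives one row per epoch
```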

Examples

>>> import neurokit2 as nk
>>>
>>> # Get data
>>> signal = nk.signal_simulate(sampling_rate=100)
>>>
>>> # Create epochs
>>> epochs = nk.epochs_create(signal, events=[400, 430, 460], sampling_rate=100, epochs_end=1)
>>> X = nk.epochs_to_array(epochs)
>>> nk.signal_plot(X.T)

epochs_to_df(epochs)
Convert epochs to a DataFrame.

Parameters epochs (dict) – A dict containing one DataFrame per event/trial. Usually obtained via epochs_create().

Returns DataFrame – A DataFrame containing all epochs, identifiable by the 'Label' column, with the time axis stored in the 'Time' column.


See also: events_find(), events_plot(), epochs_create(), epochs_plot()

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> # Get data
>>> data = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit/dev/data/bio_eventrelated_100hz.csv")
>>>
>>> # Find events
>>> events = nk.events_find(data["Photosensor"],
...                         threshold_keep='below',
...                         event_conditions=["Negative", "Neutral", "Neutral", "Negative"])
>>> fig = nk.events_plot(events, data)
>>> fig
>>>
>>> # Create epochs
>>> epochs = nk.epochs_create(data, events, sampling_rate=100, epochs_end=3)
>>> data = nk.epochs_to_df(epochs)
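What epochs_to_df() does can be sketched with plain pandas: flatten the dict of per-trial DataFrames into one long DataFrame with 'Label' and 'Time' columns. The toy epochs below are hypothetical:

```python
import pandas as pd

# Sketch of flattening an epochs dict (one DataFrame per trial) into a
# single long DataFrame, similar in spirit to epochs_to_df().
epochs = {
    "1": pd.DataFrame({"Signal": [0.1, 0.2, 0.3]}, index=[0.00, 0.01, 0.02]),
    "2": pd.DataFrame({"Signal": [0.4, 0.5, 0.6]}, index=[0.00, 0.01, 0.02]),
}

frames = []
for label, df in epochs.items():
    df = df.copy()
    df["Label"] = label    # which trial the row belongs to
    df["Time"] = df.index  # time relative to event onset
    frames.append(df)
long_df = pd.concat(frames, ignore_index=True)
```

The long format makes it easy to group by 'Label' or plot with any standard plotting library.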

7.12 Statistics

Submodule for NeuroKit.

cor(x, y, method='pearson', show=False)
Correlation. Computes the correlation coefficient between two variables.

Parameters
• x (Union[list, np.array, pd.Series]) – Vectors of values.
• y (Union[list, np.array, pd.Series]) – Vectors of values.
• method (str) – Correlation method. Can be one of 'pearson', 'spearman', 'kendall'.
• show (bool) – Draw a scatterplot with a regression line.

Returns r – The correlation coefficient.

Examples

>>> import neurokit2 as nk
>>>
>>> x = [1, 2, 3, 4, 5]
>>> y = [3, 1, 5, 6, 6]
>>> corr = nk.cor(x, y, method="pearson", show=True)
>>> corr
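For method="pearson", the coefficient can be checked by hand; a sketch of the textbook formula on the same toy data:

```python
import numpy as np

# Pearson r: covariance of x and y divided by the product of their
# standard deviations.
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([3, 1, 5, 6, 6], dtype=float)

dx, dy = x - x.mean(), y - y.mean()
r = np.sum(dx * dy) / (np.sqrt(np.sum(dx ** 2)) * np.sqrt(np.sum(dy ** 2)))
```

This matches np.corrcoef(x, y)[0, 1] exactly.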

density(x, desired_length=100, bandwith=1, show=False)
Density estimation. Computes kernel density estimates.

Parameters
• x (Union[list, np.array, pd.Series]) – A vector of values.
• desired_length (int) – The number of values in the returned density estimation.
• bandwith (float) – The bandwidth of the kernel. The larger the value, the smoother the estimation.
• show (bool) – Display the density plot.

Returns
• x – The x axis of the density estimation.
• y – The y axis of the density estimation.

Examples

>>> import neurokit2 as nk
>>> import pandas as pd
>>>
>>> signal = nk.ecg_simulate(duration=20)
>>> x, y = nk.density(signal, bandwith=0.5, show=True)
>>>
>>> # Bandwidth comparison
>>> x, y1 = nk.density(signal, bandwith=0.5)
>>> x, y2 = nk.density(signal, bandwith=1)
>>> x, y3 = nk.density(signal, bandwith=2)
>>> pd.DataFrame({"x": x, "y1": y1, "y2": y2, "y3": y3}).plot(x="x")

distance(X=None, method='mahalanobis')
Distance. Compute distance using different metrics.

Parameters
• X (array or DataFrame) – A dataframe of values.
• method (str) – The method to use. One of 'mahalanobis' or 'mean' for the average distance from the mean.

Returns array – Vector containing the distance values.

Examples

>>> from sklearn import datasets
>>> import neurokit2 as nk
>>>
>>> X = datasets.load_iris().data
>>> vector = nk.distance(X)
>>> vector

fit_error(y, y_predicted, n_parameters=2)
Calculate the fit error for a model.


Also, specific direct-access functions can be used, such as fit_mse(), fit_rmse() and fit_r2().

Parameters
• y (Union[list, np.array, pd.Series]) – The response variable (the y axis).
• y_predicted (Union[list, np.array, pd.Series]) – The fitted data generated by a model.
• n_parameters (int) – Number of model parameters (for the degrees of freedom used in R2).

Returns dict – A dictionary containing different indices of fit error.

See also: fit_mse(), fit_rmse(), fit_r2()

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> y = np.array([-1.0, -0.5, 0, 0.5, 1])
>>> y_predicted = np.array([0.0, 0, 0, 0, 0])
>>>
>>> # Master function
>>> x = nk.fit_error(y, y_predicted)
>>> x
>>>
>>> # Direct access
>>> nk.fit_mse(y, y_predicted)
0.5
>>> nk.fit_rmse(y, y_predicted)
0.7071067811865476
>>> nk.fit_r2(y, y_predicted, adjusted=False)
0.7071067811865475
>>> nk.fit_r2(y, y_predicted, adjusted=True, n_parameters=2)
0.057190958417936755

fit_loess(y, X=None, alpha=0.75, order=2)
Local Polynomial Regression (LOESS). Performs a LOWESS (LOcally WEighted Scatter-plot Smoother) regression.

Parameters
• y (Union[list, np.array, pd.Series]) – The response variable (the y axis).
• X (Union[list, np.array, pd.Series]) – Explanatory variable (the x axis). If None, will treat y as a continuous signal (useful for smoothing).
• alpha (float) – The parameter which controls the degree of smoothing, corresponding to the proportion of samples included in each local regression.
• order (int) – Degree of the polynomial to fit. Can be 1 or 2 (default).

Returns array – Prediction of the LOESS algorithm.

See also: signal_smooth(), signal_detrend(), fit_error()


Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> signal = np.cos(np.linspace(start=0, stop=10, num=1000))
>>> distorted = nk.signal_distort(signal, noise_amplitude=[0.3, 0.2, 0.1], noise_frequency=[5, 10, 50])
>>>
>>> pd.DataFrame({"Raw": distorted,
...               "Loess_1": nk.fit_loess(distorted, order=1),
...               "Loess_2": nk.fit_loess(distorted, order=2)}).plot()
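The local-regression idea behind LOESS can be sketched from scratch: for each point, fit a weighted polynomial to its nearest neighbors (tricube weights) and evaluate it there. This is a simplified illustration of the role of alpha and order, not NeuroKit's implementation; loess_sketch is a hypothetical helper:

```python
import numpy as np

def loess_sketch(y, alpha=0.75, order=2):
    # For each sample, fit a weighted polynomial to the alpha-fraction of
    # nearest neighbors and predict the value at that sample.
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    k = max(order + 1, int(np.ceil(alpha * len(y))))   # points per local fit
    fitted = np.empty(len(y))
    for i in range(len(y)):
        idx = np.argsort(np.abs(x - x[i]))[:k]         # k nearest neighbors
        d = np.abs(x[idx] - x[i])
        w = (1 - (d / (d.max() + 1e-12)) ** 3) ** 3    # tricube weights
        coefs = np.polyfit(x[idx], y[idx], deg=order, w=np.sqrt(w))
        fitted[i] = np.polyval(coefs, x[i])
    return fitted

t = np.linspace(0, 10, 200)
clean = np.cos(t)
noisy = clean + np.random.RandomState(42).normal(0, 0.3, 200)
smooth = loess_sketch(noisy, alpha=0.3, order=2)
```

The smoothed curve should sit much closer to the underlying cosine than the noisy input does.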

References

• https://simplyor.netlify.com/loess-from-scratch-in-python-animation.en-us/

fit_mixture(X=None, n_clusters=2)
Gaussian Mixture Model. Fits a Gaussian mixture model and computes, for each sample, the probability of belonging to each cluster.

Parameters
• X (Union[list, np.array, pd.Series]) – The values to classify.
• n_clusters (int) – Number of components to look for.

Returns pd.DataFrame – DataFrame containing the probability of belonging to each cluster.

See also: signal_detrend(), fit_error()

Examples

>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> x = nk.signal_simulate()
>>> probs = nk.fit_mixture(x, n_clusters=2)
>>> fig = nk.signal_plot([x, probs["Cluster_0"], probs["Cluster_1"]], standardize=True)
>>> fig

fit_mse(y, y_predicted)
Compute the Mean Square Error (MSE).

fit_polynomial(y, X=None, order=2)
Polynomial Regression. Performs a polynomial regression of given order.

Parameters
• y (Union[list, np.array, pd.Series]) – The response variable (the y axis).
• X (Union[list, np.array, pd.Series]) – Explanatory variable (the x axis). If None, will treat y as a continuous signal.


• order (int) – The order of the polynomial. 0, 1 or > 1 for a baseline, linear or polynomial fit, respectively. Can also be 'auto', in which case it will attempt to find the optimal order to minimize the RMSE.

Returns array – Prediction of the regression.

See also: signal_detrend(), fit_error(), fit_polynomial_findorder()

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> y = np.cos(np.linspace(start=0, stop=10, num=100))
>>>
>>> pd.DataFrame({"y": y,
...               "Poly_0": nk.fit_polynomial(y, order=0),
...               "Poly_1": nk.fit_polynomial(y, order=1),
...               "Poly_2": nk.fit_polynomial(y, order=2),
...               "Poly_3": nk.fit_polynomial(y, order=3),
...               "Poly_5": nk.fit_polynomial(y, order=5),
...               "Poly_auto": nk.fit_polynomial(y, order='auto')}).plot()

fit_polynomial_findorder(y, X=None, max_order=6)
Polynomial Regression. Find the optimal order for polynomial fitting. Currently, the only method implemented is RMSE minimization.

Parameters
• y (Union[list, np.array, pd.Series]) – The response variable (the y axis).
• X (Union[list, np.array, pd.Series]) – Explanatory variable (the x axis). If None, will treat y as a continuous signal.
• max_order (int) – The maximum order to test.

Returns int – Optimal order.

See also: fit_polynomial()

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> y = np.cos(np.linspace(start=0, stop=10, num=100))
>>>
>>> nk.fit_polynomial_findorder(y, max_order=10)
9

fit_r2(y, y_predicted, adjusted=True, n_parameters=2)
Compute R2.

fit_rmse(y, y_predicted)
Compute the Root Mean Square Error (RMSE).
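The error indices above reduce to a few lines of NumPy. A sketch using the textbook definitions on the fit_error() example data (note that the R2 values printed by nk.fit_r2 in this release suggest a different formulation, so only the MSE/RMSE lines are expected to match exactly):

```python
import numpy as np

# Textbook fit-error metrics for y vs. a model's predictions.
y = np.array([-1.0, -0.5, 0, 0.5, 1])
y_pred = np.zeros(5)
n, p = len(y), 2                               # samples, model parameters

sse = np.sum((y - y_pred) ** 2)                # sum of squared errors
mse = sse / n                                  # mean square error
rmse = np.sqrt(mse)                            # root mean square error
r2 = 1 - sse / np.sum((y - y.mean()) ** 2)     # coefficient of determination
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # adjusted for p parameters
```

Here mse is 0.5 and rmse is sqrt(0.5), matching the documented fit_mse/fit_rmse outputs.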

hdi(x, ci=0.95, show=False, **kwargs)
Highest Density Interval (HDI). Compute the Highest Density Interval (HDI) of a distribution: all points within this interval have a higher probability density than points outside the interval. The HDI can be used in the context of uncertainty characterisation of posterior distributions (in the Bayesian framework) as a Credible Interval (CI). Unlike equal-tailed intervals that typically exclude 2.5% from each tail of the distribution and always include the median, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions.

Parameters
• x (Union[list, np.array, pd.Series]) – A vector of values.
• ci (float) – Probability of the (credible) interval (between 0 and 1) to be estimated. Defaults to .95 (95%).
• show (bool) – If True, the function will produce a figure.
• **kwargs (Line2D properties) – Other arguments to be passed to density().

See also: density()

Returns • float(s) – The HDI low and high limits. • fig – Distribution plot.

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> x = np.random.normal(loc=0, scale=1, size=100000)
>>> ci_min, ci_high = nk.hdi(x, ci=0.95, show=True)

mad(x, constant=1.4826)
Median Absolute Deviation: a "robust" version of the standard deviation.

Parameters
• x (Union[list, np.array, pd.Series]) – A vector of values.
• constant (float) – Scale factor. Use 1.4826 for results similar to default R.

Returns float – The MAD.

Examples

>>> import neurokit2 as nk
>>> nk.mad([2, 8, 7, 5, 4, 12, 5, 1])
3.7064999999999997
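Spelled out, the MAD is the median of the absolute deviations from the median, scaled by a consistency constant (1.4826 makes the estimator consistent with the SD for normally distributed data). A sketch reproducing the value above:

```python
import numpy as np

# Median Absolute Deviation, computed by hand.
x = np.array([2, 8, 7, 5, 4, 12, 5, 1], dtype=float)
mad_value = 1.4826 * np.median(np.abs(x - np.median(x)))
# median(x) = 5, median absolute deviation = 2.5, so mad_value = 1.4826 * 2.5
```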


References

• https://en.wikipedia.org/wiki/Median_absolute_deviation

mutual_information(x, y, method='varoquaux', bins=256, sigma=1, normalized=True)
Computes the (normalized) mutual information (MI) between two vectors from a joint histogram. The mutual information of two variables is a measure of the mutual dependence between them. More specifically, it quantifies the "amount of information" obtained about one variable by observing the other variable.

Parameters
• x (Union[list, np.array, pd.Series]) – A vector of values.
• y (Union[list, np.array, pd.Series]) – A vector of values.
• method (str) – Method to use. Can either be 'varoquaux' or 'nolitsa'.
• bins (int) – Number of bins to use while creating the histogram.
• sigma (float) – Sigma for Gaussian smoothing of the joint histogram. Only used if method=='varoquaux'.
• normalized (bool) – Compute normalised mutual information. Only used if method=='varoquaux'.

Returns float – The computed similarity measure.

Examples

>>> import neurokit2 as nk
>>>
>>> x = [3, 3, 5, 1, 6, 3]
>>> y = [5, 3, 1, 3, 4, 5]
>>>
>>> nk.mutual_information(x, y, method="varoquaux")
0.23600751227291816
>>>
>>> nk.mutual_information(x, y, method="nolitsa")
1.4591479170272448
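The histogram-based idea can be sketched directly: build a joint histogram, normalize it to a joint probability, and sum p * log(p / (px * py)). This simplified version omits the smoothing and normalization options, so its values will differ from nk.mutual_information(); mi_sketch is a hypothetical helper:

```python
import numpy as np

def mi_sketch(x, y, bins=8):
    # Plug-in mutual information estimate from a joint histogram.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nonzero = pxy > 0
    return np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero]))

rng = np.random.RandomState(1)
a = rng.normal(size=2000)
mi_dependent = mi_sketch(a, a + rng.normal(scale=0.1, size=2000))
mi_independent = mi_sketch(a, rng.normal(size=2000))
```

Strongly dependent pairs yield a much larger MI than independent ones, which is the property the measure is used for.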

References

• Studholme, C., Hill, D. L., & Hawkes, D. J. (1998). A normalized entropy measure of 3-D medical image alignment. In Proc. Medical Imaging 1998, vol. 3338, San Diego, CA, pp. 132-143.

rescale(data, to=[0, 1])
Rescale data. Rescale a numeric variable to a new range.

Parameters
• data (Union[list, np.array, pd.Series]) – Raw data.
• to (list) – New range of values of the data after rescaling.

Returns list – The rescaled values.


Examples

>>> import neurokit2 as nk
>>>
>>> nk.rescale(data=[3, 1, 2, 4, 6], to=[0, 1])
[0.4, 0.0, 0.2, 0.6000000000000001, 1.0]

standardize(data, robust=False, window=None, **kwargs)
Standardization of data. Performs a standardization of data (Z-scoring), i.e., centering and scaling, so that the data is expressed in terms of standard deviation (i.e., mean = 0, SD = 1) or Median Absolute Deviation (median = 0, MAD = 1).

Parameters
• data (Union[list, np.array, pd.Series]) – Raw data.
• robust (bool) – If True, centering is done by subtracting the median from the variables and dividing by the median absolute deviation (MAD). If False, variables are standardized by subtracting the mean and dividing by the standard deviation (SD).
• window (int) – Perform a rolling window standardization, i.e., apply a standardization on a window of the specified number of samples that rolls along the main axis of the signal. Can be used for complex detrending.
• **kwargs (optional) – Other arguments to be passed to pandas.rolling().

Returns list – The standardized values.

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> # Simple example
>>> nk.standardize([3, 1, 2, 4, 6, np.nan])
[...]
>>> nk.standardize([3, 1, 2, 4, 6, np.nan], robust=True)
[...]
>>> nk.standardize(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]).T)
array(...)
>>> nk.standardize(pd.DataFrame({"A": [3, 1, 2, 4, 6, np.nan],
...                              "B": [3, 1, 2, 4, 6, 5]}))
     A    B
0  ...  ...
>>>
>>> # Rolling standardization of a signal
>>> signal = nk.signal_simulate(frequency=[0.1, 2], sampling_rate=200)
>>> z = nk.standardize(signal, window=200)
>>> nk.signal_plot([signal, z], standardize=True)

summary_plot(x, **kwargs)
Descriptive plot. Visualize a distribution with density, histogram, boxplot and rug plots all at once.


Examples

>>> import neurokit2 as nk
>>> import numpy as np
>>>
>>> x = np.random.normal(size=100)
>>> fig = nk.summary_plot(x)
>>> fig
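Before moving on to complexity indices, the z-scoring performed by standardize(), described earlier in this section, reduces to a few lines of NumPy. A sketch of the classical and robust variants (using the sample SD, ddof=1, as pandas does):

```python
import numpy as np

x = np.array([3, 1, 2, 4, 6], dtype=float)

# Classical z-score: mean 0, SD 1.
z = (x - x.mean()) / x.std(ddof=1)

# Robust variant: median 0, MAD 1.
mad = 1.4826 * np.median(np.abs(x - np.median(x)))
z_robust = (x - np.median(x)) / mad
```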

7.13 Complexity

Submodule for NeuroKit.

complexity_apen(signal, delay=1, dimension=2, r='default', corrected=False, **kwargs)
Approximate entropy (ApEn). Python implementation of the approximate entropy (ApEn) and its corrected version (cApEn). Approximate entropy is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data. The advantages of ApEn include a lower computational demand (ApEn can be designed to work for small data samples (< 50 data points) and can be applied in real time) and a lower sensitivity to noise. However, ApEn is heavily dependent on the record length and lacks relative consistency.

This function can be called either via entropy_approximate() or complexity_apen(), and the corrected version via complexity_capen().

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted 'Tau', sometimes referred to as 'lag'). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted 'm' or 'd', sometimes referred to as 'order'). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (similarity threshold). It corresponds to the filtering level - the maximum absolute difference between segments. If 'default', will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• corrected (bool) – If True, will compute the corrected ApEn (cApEn), see Porta (2007).
• **kwargs – Other arguments.

See also: entropy_shannon(), entropy_sample(), entropy_fuzzy()

Returns float – The approximate entropy as float value.


Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_approximate(signal)
>>> entropy1
>>> entropy2 = nk.entropy_approximate(signal, corrected=True)
>>> entropy2

References

• EntroPy (Python package)
• Sabeti, M., Katebi, S., & Boostani, R. (2009). Entropy and complexity measures for EEG signal classification of schizophrenic and control participants. Artificial intelligence in medicine, 47(3), 263-274.
• Shi, B., Zhang, Y., Yuan, C., Wang, S., & Li, P. (2017). Entropy analysis of short-term heartbeat interval time series during regular walking. Entropy, 19(10), 568.

complexity_capen(signal, delay=1, dimension=2, r='default', *, corrected=True, **kwargs)
Approximate entropy (ApEn). Python implementation of the approximate entropy (ApEn) and its corrected version (cApEn). Approximate entropy is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data. The advantages of ApEn include a lower computational demand (ApEn can be designed to work for small data samples (< 50 data points) and can be applied in real time) and a lower sensitivity to noise. However, ApEn is heavily dependent on the record length and lacks relative consistency.

This function can be called either via entropy_approximate() or complexity_apen(), and the corrected version via complexity_capen().

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted 'Tau', sometimes referred to as 'lag'). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted 'm' or 'd', sometimes referred to as 'order'). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (similarity threshold). It corresponds to the filtering level - the maximum absolute difference between segments. If 'default', will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• corrected (bool) – If True, will compute the corrected ApEn (cApEn), see Porta (2007).
• **kwargs – Other arguments.

See also: entropy_shannon(), entropy_sample(), entropy_fuzzy()

Returns float – The approximate entropy as float value.


Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_approximate(signal)
>>> entropy1
>>> entropy2 = nk.entropy_approximate(signal, corrected=True)
>>> entropy2

References

• EntroPy (Python package)
• Sabeti, M., Katebi, S., & Boostani, R. (2009). Entropy and complexity measures for EEG signal classification of schizophrenic and control participants. Artificial intelligence in medicine, 47(3), 263-274.
• Shi, B., Zhang, Y., Yuan, C., Wang, S., & Li, P. (2017). Entropy analysis of short-term heartbeat interval time series during regular walking. Entropy, 19(10), 568.

complexity_cmse(signal, scale='default', dimension=2, r='default', *, composite=True, refined=False, fuzzy=False, show=False, **kwargs)
Multiscale entropy (MSE) and its Composite (CMSE), Refined (RCMSE) or fuzzy versions. Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) and their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If 'default', will use range(len(signal) / (dimension + 10)) (see discussion here). If 'max', will use all scales until half the length of the signal. If an integer, will create a range until the specified int.
• dimension (int) – Embedding dimension (often denoted 'm' or 'd', sometimes referred to as 'order'). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If 'default', will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the 'refined' composite MSE (RCMSE; Wu, 2014).
• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.


Returns float – The point-estimate of multiscale entropy (MSE), a float value corresponding to the area under the curve of MSE values, which is essentially the sum of sample entropy values over the range of scale factors. See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3
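The coarse-graining step underlying all multiscale variants can be sketched directly: average the signal in non-overlapping windows of length scale, producing one shorter series per scale factor (the entropy of each grained series would then be computed with sample entropy). A sketch of the graining step only, with a hypothetical coarse_grain helper:

```python
import numpy as np

def coarse_grain(signal, scale):
    # Average non-overlapping windows of `scale` samples; drop the remainder.
    signal = np.asarray(signal, dtype=float)
    n = len(signal) // scale
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

signal = np.sin(np.linspace(0, 20, 1000))
scales = [1, 2, 5, 10]
grained = [coarse_grain(signal, s) for s in scales]
# Each grained series is len(signal) // scale samples long.
```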

References

• pyEntropy (Python package)
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

complexity_d2(signal, delay=1, dimension=2, r=64, show=False)
Correlation Dimension. Python implementation of the correlation dimension D2 of a signal. This function can be called either via fractal_correlation() or complexity_d2().

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted 'Tau', sometimes referred to as 'lag'). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).


• dimension (int) – Embedding dimension (often denoted 'm' or 'd', sometimes referred to as 'order'). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (str or int or list) – The sequence of radii to test. If an integer is passed, will get an exponential sequence ranging from 2.5% to 50% of the distance range. Methods implemented in other packages can be used via setting r='nolds' or r='Corr_Dim'.
• show (bool) – Plot the correlation dimension if True. Defaults to False.

Returns D2 (float) – The correlation dimension D2.

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>>
>>> fractal1 = nk.fractal_correlation(signal, r="nolds", show=True)
>>> fractal1
>>> fractal2 = nk.fractal_correlation(signal, r=32, show=True)
>>> fractal2
>>>
>>> signal = nk.rsp_simulate(duration=120, sampling_rate=50)
>>>
>>> fractal3 = nk.fractal_correlation(signal, r="nolds", show=True)
>>> fractal3
>>> fractal4 = nk.fractal_correlation(signal, r=32, show=True)
>>> fractal4

References

• Bolea, J., Laguna, P., Remartínez, J. M., Rovira, E., Navarro, A., & Bailón, R. (2014). Methodological framework for estimating the correlation dimension in HRV signals. Computational and mathematical methods in medicine, 2014.
• Boon, M. Y., Henry, B. I., Suttle, C. M., & Dain, S. J. (2008). The correlation dimension: A useful objective measure of the transient visual evoked potential?. Journal of vision, 8(1), 6-6.
• nolds
• Corr_Dim

complexity_delay(signal, delay_max=100, method='fraser1986', show=False)
Estimate the optimal Time Delay (tau) for time-delay embedding.

The time delay (Tau) is one of the two critical parameters involved in the construction of the time-delay embedding of a signal. Several authors suggested different methods to guide the choice of Tau:

• Fraser and Swinney (1986) suggest using the first local minimum of the mutual information between the delayed and non-delayed time series, effectively identifying a value of tau for which they share the least information.
• Theiler (1990) suggested to select Tau where the autocorrelation between the signal and its lagged version at Tau first crosses the value 1/e.


• Casdagli (1991) suggests instead taking the first zero-crossing of the autocorrelation.
• Rosenstein (1993) suggests using the point closest to 40% of the slope of the average displacement from the diagonal (ADFD).

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay_max (int) – The maximum time delay (Tau or lag) to test.
• method (str) – Correlation method. Can be one of 'fraser1986', 'theiler1990', 'casdagli1991', 'rosenstein1993'.
• show (bool) – If True, will plot the mutual information values for each value of tau.

Returns int – Optimal time delay.

See also: complexity_dimension(), complexity_embedding()

Examples

>>> import neurokit2 as nk
>>>
>>> # Artificial example
>>> signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)
>>> nk.signal_plot(signal)
>>>
>>> delay = nk.complexity_delay(signal, delay_max=1000, show=True, method="fraser1986")
>>> delay = nk.complexity_delay(signal, delay_max=1000, show=True, method="theiler1990")
>>> delay = nk.complexity_delay(signal, delay_max=1000, show=True, method="casdagli1991")
>>> delay = nk.complexity_delay(signal, delay_max=1000, show=True, method="rosenstein1993")
>>>
>>> # Realistic example
>>> ecg = nk.ecg_simulate(duration=60 * 6, sampling_rate=150)
>>> signal = nk.ecg_rate(nk.ecg_peaks(ecg, sampling_rate=150), sampling_rate=150, desired_length=len(ecg))
>>> nk.signal_plot(signal)
>>>
>>> delay = nk.complexity_delay(signal, delay_max=1000, show=True)
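The Theiler (1990) criterion described above is simple enough to sketch by hand: take the first lag at which the autocorrelation of the signal drops below 1/e. A simplified illustration, not NeuroKit's implementation:

```python
import numpy as np

def delay_autocorr_sketch(signal, delay_max=100):
    # First lag tau at which corr(x[t], x[t+tau]) falls below 1/e.
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    for tau in range(1, delay_max):
        r = np.corrcoef(x[:-tau], x[tau:])[0, 1]
        if r < 1.0 / np.e:
            return tau
    return delay_max

sampling_rate = 100
t = np.linspace(0, 10, 10 * sampling_rate)
signal = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz sine, period ~100 samples
tau = delay_autocorr_sketch(signal, delay_max=200)
```

For a sine with a ~100-sample period, the autocorrelation behaves like cos(2*pi*tau/100), so the crossing occurs around tau ≈ 19-20 samples.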


References

• Gautama, T., Mandic, D. P., & Van Hulle, M. M. (2003, April). A differential entropy based method for determining the optimal embedding parameters of a signal. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03) (Vol. 6, pp. VI-29). IEEE.
• Camplani, M., & Cannas, B. (2009). The role of the embedding dimension and time delay in time series forecasting. IFAC Proceedings Volumes, 42(7), 316-320.
• Rosenstein, M. T., Collins, J. J., & De Luca, C. J. (1994). Reconstruction expansion as a geometry-based framework for choosing proper delay times. Physica D, 73(1), 82-98.

complexity_dfa(signal, windows='default', overlap=True, integrate=True, order=1, multifractal=False, q=2, show=False)
(Multifractal) Detrended Fluctuation Analysis (DFA or MFDFA). Python implementation of Detrended Fluctuation Analysis (DFA) or Multifractal DFA of a signal. Detrended fluctuation analysis, much like the Hurst exponent, is used to find long-term statistical dependencies in time series.

This function can be called either via fractal_dfa() or complexity_dfa(), and its multifractal variant can be directly accessed via fractal_mfdfa() or complexity_mfdfa().

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• windows (list) – A list containing the lengths of the windows (number of data points in each subseries). Also referred to as 'lag' or 'scale'. If 'default', will set it to a logarithmic scale (so that each window scale has the same weight) with a minimum of 4 and a maximum of a tenth of the length (to have more than 10 windows to calculate the average fluctuation).
• overlap (bool) – Defaults to True, where the windows will have a 50% overlap with each other; otherwise non-overlapping windows will be used.
• integrate (bool) – It is common practice to convert the signal to a random walk (i.e., detrend and integrate, which corresponds to the signal 'profile'). Note that it leads to the flattening of the signal, which can lead to the loss of some details (see Ihlen, 2012 for an explanation). Note that for strongly anticorrelated signals, this transformation should be applied two times (i.e., provide np.cumsum(signal - np.mean(signal)) instead of signal).
• order (int) – The order of the polynomial trend; 1 for the linear trend.
• multifractal (bool) – If True, compute the Multifractal Detrended Fluctuation Analysis (MFDFA), in which case the argument q is taken into account.
• q (list) – The sequence of fractal exponents when multifractal=True. Must be a sequence between -10 and 10 (note that zero will be removed, since the code does not converge there). Setting q = 2 (default) gives a result close to a standard DFA. For instance, Ihlen (2012) uses q=[-5, -3, -1, 0, 1, 3, 5].
• show (bool) – Visualise the trend between the window size and the fluctuations.

Returns dfa (float) – The DFA coefficient.


Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=3, noise=0.05)
>>> dfa1 = nk.fractal_dfa(signal, show=True)
>>> dfa1
>>> dfa2 = nk.fractal_mfdfa(signal, q=np.arange(-3, 4), show=True)
>>> dfa2
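The DFA steps described above (integrate into a profile, split into windows, detrend each window, regress log-fluctuation on log-window-size) can be sketched from scratch. A simplified illustration with non-overlapping windows and no multifractal q, not NeuroKit's implementation; dfa_sketch is a hypothetical helper:

```python
import numpy as np

def dfa_sketch(signal, windows=(4, 8, 16, 32, 64), order=1):
    # 1. Integrate the mean-centered signal into a 'profile'.
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for w in windows:
        n = len(profile) // w
        f2 = []
        for i in range(n):
            seg = profile[i * w:(i + 1) * w]
            t = np.arange(w)
            # 2. Remove the polynomial trend within each window.
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - trend) ** 2))
        # 3. Root-mean-square fluctuation at this window size.
        fluctuations.append(np.sqrt(np.mean(f2)))
    # 4. DFA exponent = slope of log F(w) against log w.
    return np.polyfit(np.log(windows), np.log(fluctuations), 1)[0]

rng = np.random.RandomState(3)
alpha_wn = dfa_sketch(rng.normal(size=2000))             # white noise: ~0.5
alpha_rw = dfa_sketch(np.cumsum(rng.normal(size=2000)))  # random walk: ~1.5
```

White noise yields an exponent near 0.5 and a random walk near 1.5, which is a quick sanity check for any DFA implementation.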

References

• Ihlen, E. A. F. E. (2012). Introduction to multifractal detrended fluctuation analysis in Matlab. Frontiers in physiology, 3, 141.
• Hardstone, R., Poil, S. S., Schiavone, G., Jansen, R., Nikulin, V. V., Mansvelder, H. D., & Linkenkaer-Hansen, K. (2012). Detrended fluctuation analysis: a scale-free view on neuronal oscillations. Frontiers in physiology, 3, 450.
• nolds
• YouTube introduction

complexity_dimension(signal, delay=1, dimension_max=20, method='afnn', show=False, R=10.0, A=2.0, **kwargs)
Estimate the optimal Dimension (m) for time-delay embedding.

Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted 'Tau', sometimes referred to as 'lag'). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see complexity_delay()).
• dimension_max (int) – The maximum embedding dimension (often denoted 'm' or 'd', sometimes referred to as 'order') to test.
• method (str) – Can either be 'afnn' (average false nearest neighbour) or 'fnn' (false nearest neighbour).
• show (bool) – Visualize the result.
• R (float) – Relative tolerance (for the fnn method).
• A (float) – Absolute tolerance (for the fnn method).
• **kwargs – Other arguments.

Returns int – Optimal dimension.

See also: complexity_delay(), complexity_embedding()


Examples

>>> import neurokit2 as nk
>>>
>>> # Artificial example
>>> signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)
>>> delay = nk.complexity_delay(signal, delay_max=500)
>>>
>>> values = nk.complexity_dimension(signal, delay=delay, dimension_max=20, show=True)

References

• Cao, L. (1997). Practical method for determining the minimum embedding dimension of a scalar time series. Physica D: Nonlinear Phenomena, 110(1-2), 43-50.

complexity_embedding(signal, delay=1, dimension=3, show=False)

Time-delay embedding of a time series (a signal).

A dynamical system can be described by a vector of numbers, called its ‘state’, that aims to provide a complete description of the system at some point in time. The set of all possible states is called the ‘state space’. Takens’s (1981) embedding theorem suggests that a sequence of measurements of a dynamic system includes in itself all the information required to completely reconstruct the state space. Delay coordinate embedding attempts to identify the state s of the system at some time t by searching the past history of observations for similar states and, by studying the evolution of similar states, infer information about the future of the system.

How to visualize the dynamics of a system? A sequence of state values over time is called a trajectory. Depending on the system, different trajectories can evolve to a common subset of state space called an attractor. The presence and behavior of attractors gives intuition about the underlying dynamical system. We can visualize the system and its attractors by plotting the trajectory of many different initial state values and numerically integrating them to approximate their continuous time evolution on discrete computers.

This function is adapted from EntroPy and is equivalent to the delay_embedding() function from ‘nolds’.

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• show (bool) – Plot the reconstructed attractor.

Returns array – Embedded time-series, of shape (n_times - (order - 1) * delay, order)

See also: embedding_delay(), embedding_dimension()
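The output shape described above can be checked with a small standalone numpy sketch. This is an illustrative reimplementation, not NeuroKit2's code; `delay_embed` is a hypothetical helper name:

```python
import numpy as np

def delay_embed(signal, delay=1, dimension=3):
    """Delay-coordinate (Takens) embedding of a 1-D series."""
    signal = np.asarray(signal)
    n = len(signal) - (dimension - 1) * delay  # number of embedded points
    # Each row is [x(t), x(t + delay), ..., x(t + (dimension - 1) * delay)]
    return np.column_stack(
        [signal[i * delay : i * delay + n] for i in range(dimension)]
    )

x = np.arange(10)
emb = delay_embed(x, delay=2, dimension=3)
print(emb.shape)  # (6, 3): 10 - (3 - 1) * 2 = 6 rows
```

With delay=2 and dimension=3, each of the 6 rows pairs a sample with its neighbours 2 and 4 steps ahead, matching the documented output shape (n_times - (order - 1) * delay, order).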


Examples

>>> import neurokit2 as nk
>>>
>>> # Artificial example
>>> signal = nk.signal_simulate(duration=2, frequency=5, noise=0.01)
>>>
>>> embedded = nk.complexity_embedding(signal, delay=50, dimension=2, show=True)
>>> embedded = nk.complexity_embedding(signal, delay=50, dimension=3, show=True)
>>> embedded = nk.complexity_embedding(signal, delay=50, dimension=4, show=True)
>>>
>>> # Realistic example
>>> ecg = nk.ecg_simulate(duration=60 * 4, sampling_rate=200)
>>> signal = nk.ecg_rate(nk.ecg_peaks(ecg, sampling_rate=200)[0], sampling_rate=200, desired_length=len(ecg))
>>>
>>> embedded = nk.complexity_embedding(signal, delay=250, dimension=2, show=True)
>>> embedded = nk.complexity_embedding(signal, delay=250, dimension=3, show=True)
>>> embedded = nk.complexity_embedding(signal, delay=250, dimension=4, show=True)

References

• Gautama, T., Mandic, D. P., & Van Hulle, M. M. (2003, April). A differential entropy based method for determining the optimal embedding parameters of a signal. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’03) (Vol. 6, pp. VI-29). IEEE.

complexity_fuzzycmse(signal, scale='default', dimension=2, r='default', *, composite=True, refined=False, fuzzy=True, show=False, **kwargs)

Multiscale entropy (MSE) and its Composite (CMSE), Refined (RCMSE) or fuzzy version.

Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) or their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If ‘default’, will use range(len(signal) / (dimension + 10)) (see discussion here). If ‘max’, will use all scales until half the length of the signal. If an integer, will create a range until the specified int.
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the ‘refined’ composite MSE (RCMSE; Wu, 2014).


• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.

Returns float – The point-estimate of multiscale entropy (MSE) as a float value corresponding to the area under the MSE values curve, which is essentially the sum of sample entropy values over the range of scale factors.

See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()
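The coarse-graining that underlies multiscale entropy (averaging consecutive non-overlapping windows of increasing length, then computing sample entropy at each scale) can be sketched as follows. This is an illustrative standalone snippet, not NeuroKit2's implementation; `coarse_grain` is a hypothetical helper name:

```python
import numpy as np

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) // scale  # number of complete windows
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

x = np.arange(1, 11, dtype=float)  # 1, 2, ..., 10
print(coarse_grain(x, 2))  # [1.5 3.5 5.5 7.5 9.5]
print(coarse_grain(x, 3))  # [2. 5. 8.]  (the trailing incomplete window is dropped)
```

At scale 1 the series is unchanged; as the scale grows, the coarse-grained series becomes shorter and smoother, which is why the default scale range is capped relative to the signal length.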

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3

References

• pyEntropy
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical Review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

complexity_fuzzyen(signal, delay=1, dimension=2, r='default', **kwargs)

Fuzzy entropy (FuzzyEn)

Python implementation of the fuzzy entropy (FuzzyEn) of a signal. This function can be called either via entropy_fuzzy() or complexity_fuzzyen().

Parameters


• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• **kwargs – Other arguments.

Returns float – The fuzzy entropy as float value.

See also: entropy_shannon(), entropy_approximate(), entropy_sample()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy = nk.entropy_fuzzy(signal)
>>> entropy

complexity_fuzzymse(signal, scale='default', dimension=2, r='default', composite=False, refined=False, *, fuzzy=True, show=False, **kwargs)

Multiscale entropy (MSE) and its Composite (CMSE), Refined (RCMSE) or fuzzy version.

Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) or their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If ‘default’, will use range(len(signal) / (dimension + 10)) (see discussion here). If ‘max’, will use all scales until half the length of the signal. If an integer, will create a range until the specified int.
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).


• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the ‘refined’ composite MSE (RCMSE; Wu, 2014).
• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.

Returns float – The point-estimate of multiscale entropy (MSE) as a float value corresponding to the area under the MSE values curve, which is essentially the sum of sample entropy values over the range of scale factors.

See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3

References

• pyEntropy
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical Review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

complexity_fuzzyrcmse(signal, scale='default', dimension=2, r='default', composite=False, *, refined=True, fuzzy=True, show=False, **kwargs)

Multiscale entropy (MSE) and its Composite (CMSE), Refined (RCMSE) or fuzzy version.


Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) or their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If ‘default’, will use range(len(signal) / (dimension + 10)) (see discussion here). If ‘max’, will use all scales until half the length of the signal. If an integer, will create a range until the specified int.
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the ‘refined’ composite MSE (RCMSE; Wu, 2014).
• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.

Returns float – The point-estimate of multiscale entropy (MSE) as a float value corresponding to the area under the MSE values curve, which is essentially the sum of sample entropy values over the range of scale factors.

See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3


References

• pyEntropy
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical Review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

complexity_mfdfa(signal, windows='default', overlap=True, integrate=True, order=1, *, multifractal=True, q=2, show=False)

(Multifractal) Detrended Fluctuation Analysis (DFA or MFDFA)

Python implementation of Detrended Fluctuation Analysis (DFA) or Multifractal DFA of a signal. Detrended fluctuation analysis, much like the Hurst exponent, is used to find long-term statistical dependencies in time series. This function can be called either via fractal_dfa() or complexity_dfa(), and its multifractal variant can be directly accessed via fractal_mfdfa() or complexity_mfdfa().

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• windows (list) – A list containing the lengths of the windows (number of data points in each subseries). Also referred to as ‘lag’ or ‘scale’. If ‘default’, will set it to a logarithmic scale (so that each window scale has the same weight) with a minimum of 4 and a maximum of a tenth of the length (to have more than 10 windows to calculate the average fluctuation).
• overlap (bool) – Defaults to True, where the windows will have a 50% overlap with each other; otherwise non-overlapping windows will be used.
• integrate (bool) – It is common practice to convert the signal to a random walk (i.e., detrend and integrate, which corresponds to the signal ‘profile’). Note that it leads to the flattening of the signal, which can lead to the loss of some details (see Ihlen, 2012 for an explanation). Note that for strongly anticorrelated signals, this transformation should be applied twice (i.e., provide np.cumsum(signal - np.mean(signal)) instead of signal).
• order (int) – The order of the polynomial trend; 1 for a linear trend.
• multifractal (bool) – If True, compute Multifractal Detrended Fluctuation Analysis (MFDFA), in which case the argument q is taken into account.
• q (list) – The sequence of fractal exponents when multifractal=True. Must be a sequence between -10 and 10 (note that zero will be removed, since the code does


not converge there). Setting q = 2 (default) gives a result close to a standard DFA. For instance, Ihlen (2012) uses q=[-5, -3, -1, 0, 1, 3, 5].
• show (bool) – Visualise the trend between the window size and the fluctuations.

Returns dfa (float) – The DFA coefficient.
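The DFA procedure described above (integrate into a profile, detrend each window with a polynomial fit, measure the fluctuation per window size, and take the slope of the log-log relationship) can be sketched as follows. This is a minimal monofractal illustration with non-overlapping windows, not the package's fractal_dfa() implementation; `dfa` is a hypothetical helper name:

```python
import numpy as np

def dfa(signal, windows=(4, 8, 16, 32, 64)):
    """Minimal monofractal DFA: returns the scaling exponent alpha."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())  # integrate into the signal 'profile'
    fluctuations = []
    for n in windows:
        sq_residuals = []
        for i in range(len(profile) // n):  # non-overlapping windows
            seg = profile[i * n : (i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # order-1 detrending
            sq_residuals.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq_residuals)))
    # alpha is the slope of log F(n) against log n
    return np.polyfit(np.log(windows), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(42)
alpha = dfa(rng.standard_normal(1000))  # white noise: alpha close to 0.5
print(round(alpha, 2))
```

Uncorrelated noise yields an exponent near 0.5, while persistent long-range correlations push it toward 1, which is what the DFA coefficient summarizes.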

Examples

>>> import neurokit2 as nk
>>> import numpy as np
>>>
>>> signal = nk.signal_simulate(duration=3, noise=0.05)
>>> dfa1 = nk.fractal_dfa(signal, show=True)
>>> dfa1
>>> dfa2 = nk.fractal_mfdfa(signal, q=np.arange(-3, 4), show=True)
>>> dfa2

References

• Ihlen, E. A. F. E. (2012). Introduction to multifractal detrended fluctuation analysis in Matlab. Frontiers in Physiology, 3, 141.
• Hardstone, R., Poil, S. S., Schiavone, G., Jansen, R., Nikulin, V. V., Mansvelder, H. D., & Linkenkaer-Hansen, K. (2012). Detrended fluctuation analysis: a scale-free view on neuronal oscillations. Frontiers in Physiology, 3, 450.
• nolds
• Youtube introduction

complexity_mse(signal, scale='default', dimension=2, r='default', composite=False, refined=False, fuzzy=False, show=False, **kwargs)

Multiscale entropy (MSE) and its Composite (CMSE), Refined (RCMSE) or fuzzy version.

Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) or their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If ‘default’, will use range(len(signal) / (dimension + 10)) (see discussion here). If ‘max’, will use all scales until half the length of the signal. If an integer, will create a range until the specified int.
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).


• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the ‘refined’ composite MSE (RCMSE; Wu, 2014).
• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.

Returns float – The point-estimate of multiscale entropy (MSE) as a float value corresponding to the area under the MSE values curve, which is essentially the sum of sample entropy values over the range of scale factors.

See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3

References

• pyEntropy
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical Review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

complexity_optimize(signal, delay_max=100, delay_method='fraser1986', dimension_max=20, dimension_method='afnn', r_method='maxApEn', show=False)

Find optimal complexity parameters.


Estimate optimal complexity parameters Dimension (m), Time Delay (tau) and tolerance ‘r’.

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay_max (int) – See complexity_delay().
• delay_method (str) – See complexity_delay().
• dimension_max (int) – See complexity_dimension().
• dimension_method (str) – See complexity_dimension().
• r_method (str) – See complexity_r().
• show (bool) – Defaults to False.

Returns

• optimal_dimension (int) – Optimal dimension.
• optimal_delay (int) – Optimal time delay.

See also: complexity_dimension(), complexity_delay(), complexity_r()

Examples

>>> import neurokit2 as nk
>>>
>>> # Artificial example
>>> signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)
>>> parameters = nk.complexity_optimize(signal, show=True)
>>> parameters

References

• Gautama, T., Mandic, D. P., & Van Hulle, M. M. (2003, April). A differential entropy based method for determining the optimal embedding parameters of a signal. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’03) (Vol. 6, pp. VI-29). IEEE.
• Camplani, M., & Cannas, B. (2009). The role of the embedding dimension and time delay in time series forecasting. IFAC Proceedings Volumes, 42(7), 316-320.
• Rosenstein, M. T., Collins, J. J., & De Luca, C. J. (1994). Reconstruction expansion as a geometry-based framework for choosing proper delay times. Physica D, 73(1), 82-98.
• Cao, L. (1997). Practical method for determining the minimum embedding dimension of a scalar time series. Physica D: Nonlinear Phenomena, 110(1-2), 43-50.
• Lu, S., Chen, X., Kanters, J. K., Solomon, I. C., & Chon, K. H. (2008). Automatic selection of the threshold value r for approximate entropy. IEEE Transactions on Biomedical Engineering, 55(8), 1966-1972.

complexity_plot(signal, delay_max=100, delay_method='fraser1986', dimension_max=20, dimension_method='afnn', r_method='maxApEn', *, show=True)

Find optimal complexity parameters.

Estimate optimal complexity parameters Dimension (m), Time Delay (tau) and tolerance ‘r’.


Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay_max (int) – See complexity_delay().
• delay_method (str) – See complexity_delay().
• dimension_max (int) – See complexity_dimension().
• dimension_method (str) – See complexity_dimension().
• r_method (str) – See complexity_r().
• show (bool) – Defaults to True.

Returns

• optimal_dimension (int) – Optimal dimension.
• optimal_delay (int) – Optimal time delay.

See also: complexity_dimension(), complexity_delay(), complexity_r()

Examples

>>> import neurokit2 as nk
>>>
>>> # Artificial example
>>> signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01)
>>> parameters = nk.complexity_optimize(signal, show=True)
>>> parameters

References

• Gautama, T., Mandic, D. P., & Van Hulle, M. M. (2003, April). A differential entropy based method for determining the optimal embedding parameters of a signal. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’03) (Vol. 6, pp. VI-29). IEEE.
• Camplani, M., & Cannas, B. (2009). The role of the embedding dimension and time delay in time series forecasting. IFAC Proceedings Volumes, 42(7), 316-320.
• Rosenstein, M. T., Collins, J. J., & De Luca, C. J. (1994). Reconstruction expansion as a geometry-based framework for choosing proper delay times. Physica D, 73(1), 82-98.
• Cao, L. (1997). Practical method for determining the minimum embedding dimension of a scalar time series. Physica D: Nonlinear Phenomena, 110(1-2), 43-50.
• Lu, S., Chen, X., Kanters, J. K., Solomon, I. C., & Chon, K. H. (2008). Automatic selection of the threshold value r for approximate entropy. IEEE Transactions on Biomedical Engineering, 55(8), 1966-1972.

complexity_r(signal, delay=None, dimension=None, method='maxApEn', show=False)

Estimate the optimal tolerance (similarity threshold).

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.


• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• method (str) – If ‘maxApEn’, the r at which ApEn is maximal will be returned. If ‘traditional’, r = 0.2 * standard deviation of the signal will be returned.
• show (bool) – If True and method is ‘maxApEn’, will plot the ApEn values for each value of r.

Returns float – The optimal r as a similarity threshold. It corresponds to the filtering level - max absolute difference between segments.
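The ‘traditional’ method described above reduces to a one-liner: a fixed fraction of the signal's standard deviation. The sketch below is illustrative only (it does not reproduce the ‘maxApEn’ scan); `tolerance_r` is a hypothetical helper name:

```python
import numpy as np

def tolerance_r(signal, factor=0.2):
    """'Traditional' tolerance: a fixed fraction of the signal's standard deviation."""
    return factor * np.std(np.asarray(signal, dtype=float))

rng = np.random.default_rng(0)
signal = rng.standard_normal(500)  # roughly unit-variance noise
print(round(tolerance_r(signal), 2))  # close to 0.2
```

Because r scales with the signal's spread, two recordings of the same process at different amplifier gains end up with comparable similarity thresholds.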

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> delay = nk.complexity_delay(signal)
>>> dimension = nk.complexity_dimension(signal, delay=delay)
>>> r = nk.complexity_r(signal, delay, dimension)
>>> r

References

• Lu, S., Chen, X., Kanters, J. K., Solomon, I. C., & Chon, K. H. (2008). Automatic selection of the threshold value r for approximate entropy. IEEE Transactions on Biomedical Engineering, 55(8), 1966-1972.

complexity_rcmse(signal, scale='default', dimension=2, r='default', composite=False, *, refined=True, fuzzy=False, show=False, **kwargs)

Multiscale entropy (MSE) and its Composite (CMSE), Refined (RCMSE) or fuzzy version.

Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) or their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().

Parameters

• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If ‘default’, will use range(len(signal) / (dimension + 10)) (see discussion here). If ‘max’, will use all scales until half the length of the signal. If an integer, will create a range until the specified int.
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.


• r (float) – Tolerance (i.e., filtering level - max absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the ‘refined’ composite MSE (RCMSE; Wu, 2014).
• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.

Returns float – The point-estimate of multiscale entropy (MSE) as a float value corresponding to the area under the MSE values curve, which is essentially the sum of sample entropy values over the range of scale factors.

See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3

References

• pyEntropy
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical Review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

complexity_sampen(signal, delay=1, dimension=2, r='default', **kwargs)
Sample Entropy (SampEn)
Python implementation of the sample entropy (SampEn) of a signal. This function can be called either via entropy_sample() or complexity_sampen().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level – maximum absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• **kwargs (optional) – Other arguments.
See also: entropy_shannon(), entropy_approximate(), entropy_fuzzy()

Returns float – The sample entropy as a float value.

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy = nk.entropy_sample(signal)
>>> entropy

complexity_se(signal)
Shannon entropy (SE)
Python implementation of Shannon entropy (SE). Entropy is a measure of the unpredictability of a state or, equivalently, of its average information content. Shannon entropy (SE) is one of the first and most basic measures of entropy and a foundational concept of information theory: it quantifies the amount of information in a variable. This function can be called either via entropy_shannon() or complexity_se().
Parameters signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
Returns float – The Shannon entropy as a float value.
See also: entropy_approximate(), entropy_sample(), entropy_fuzzy()


Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy = nk.entropy_shannon(signal)
>>> entropy
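The underlying formula, H = -Σ p·log2(p) over the relative frequencies of the distinct values, can be sketched in a few lines. This is an illustrative sketch of the formula itself; NeuroKit2's own implementation may differ in details:

```python
import numpy as np

def shannon_entropy(values):
    """Shannon entropy H = -sum(p * log2(p)) over the relative frequencies
    of the distinct values (illustrative sketch)."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

shannon_entropy([0, 0, 1, 1])  # -> 1.0 (two equally likely values: 1 bit)
shannon_entropy([0, 0, 0, 0])  # -> 0.0 (a constant signal carries no information)
```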

References

• pyEntropy
• EntroPy
• nolds

complexity_simulate(duration=10, sampling_rate=1000, method='ornstein', hurst_exponent=0.5, **kwargs)
Simulate chaotic time series.
Generates a time series using either a (fractional) Ornstein–Uhlenbeck process or the discrete approximation of the Mackey-Glass delay differential equation described by Grassberger & Procaccia (1983).
Parameters
• duration (int) – Desired length of duration (s).
• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).
• method (str) – The method: ‘ornstein’ for a (fractional) Ornstein–Uhlenbeck process, or ‘mackeyglass’ to use the Mackey-Glass equation.
• hurst_exponent (float) – The Hurst exponent of the Ornstein–Uhlenbeck process. Defaults to 0.5.
• **kwargs – Other arguments.
Returns array – Simulated complexity time series.

Examples

>>> import neurokit2 as nk
>>>
>>> signal1 = nk.complexity_simulate(duration=30, sampling_rate=100, method="ornstein")
>>> signal2 = nk.complexity_simulate(duration=30, sampling_rate=100, method="mackeyglass")
>>> nk.signal_plot([signal1, signal2])

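The (non-fractional) Ornstein–Uhlenbeck case can be sketched with a simple Euler–Maruyama loop. This is an illustrative sketch only: `theta` and `sigma` are assumed parameters for the demonstration and do not correspond to NeuroKit2's internal defaults:

```python
import numpy as np

def ornstein_uhlenbeck(duration=10, sampling_rate=1000, theta=3.0, sigma=0.1, rng=None):
    """Euler-Maruyama simulation of a standard (non-fractional)
    Ornstein-Uhlenbeck process (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(1)
    n = duration * sampling_rate
    dt = 1.0 / sampling_rate
    x = np.zeros(n)
    noise = rng.standard_normal(n) * np.sqrt(dt)
    for i in range(1, n):
        # dx = -theta * x * dt + sigma * dW  (mean-reverting toward 0)
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * noise[i]
    return x

signal = ornstein_uhlenbeck(duration=2, sampling_rate=500)
```

The mean-reversion term is what distinguishes this process from a plain random walk: the trajectory fluctuates around zero instead of drifting away.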

entropy_approximate(signal, delay=1, dimension=2, r='default', corrected=False, **kwargs)
Approximate entropy (ApEn)
Python implementations of the approximate entropy (ApEn) and its corrected version (cApEn). Approximate entropy is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data. The advantages of ApEn include a lower computational demand (ApEn can be designed to work


for small data samples (< 50 data points) and can be applied in real time) and a lower sensitivity to noise. However, ApEn is heavily dependent on the record length and lacks relative consistency.
This function can be called either via entropy_approximate() or complexity_apen(), and the corrected version via complexity_capen().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (similarity threshold). It corresponds to the filtering level – the maximum absolute difference between segments. If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• corrected (bool) – If True, will compute the corrected ApEn (cApEn); see Porta (2007).
• **kwargs – Other arguments.
See also: entropy_shannon(), entropy_sample(), entropy_fuzzy()

Returns float – The approximate entropy as a float value.

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_approximate(signal)
>>> entropy1
>>> entropy2 = nk.entropy_approximate(signal, corrected=True)
>>> entropy2
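The documented 'default' tolerance corresponds to 0.2 times the standard deviation of the signal; the same value can also be computed explicitly and passed in place of `r='default'`. A minimal sketch (the random signal here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.standard_normal(1000)

# The documented 'default' tolerance is 0.2 times the standard deviation
# of the signal (for dimension = 2); the same value can be passed explicitly:
r = 0.2 * np.std(signal)

# e.g. nk.entropy_approximate(signal, r=r) or nk.entropy_sample(signal, r=r)
```

Passing the same explicit `r` to ApEn and SampEn makes their values directly comparable on a given signal.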

References

• EntroPy
• Sabeti, M., Katebi, S., & Boostani, R. (2009). Entropy and complexity measures for EEG signal classification of schizophrenic and control participants. Artificial Intelligence in Medicine, 47(3), 263-274.
• Shi, B., Zhang, Y., Yuan, C., Wang, S., & Li, P. (2017). Entropy analysis of short-term heartbeat interval time series during regular walking. Entropy, 19(10), 568.

entropy_fuzzy(signal, delay=1, dimension=2, r='default', **kwargs)
Fuzzy entropy (FuzzyEn)
Python implementation of the fuzzy entropy (FuzzyEn) of a signal.


This function can be called either via entropy_fuzzy() or complexity_fuzzyen().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level – maximum absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• **kwargs – Other arguments.
Returns float – The fuzzy entropy as a float value.
See also: entropy_shannon(), entropy_approximate(), entropy_sample()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy = nk.entropy_fuzzy(signal)
>>> entropy

entropy_multiscale(signal, scale='default', dimension=2, r='default', composite=False, refined=False, fuzzy=False, show=False, **kwargs)
Multiscale entropy (MSE) and its composite (CMSE), refined (RCMSE) or fuzzy versions.
Python implementations of the multiscale entropy (MSE), the composite multiscale entropy (CMSE), the refined composite multiscale entropy (RCMSE) and their fuzzy versions (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE). This function can be called either via entropy_multiscale() or complexity_mse(). Moreover, variants can be directly accessed via complexity_cmse(), complexity_rcmse(), complexity_fuzzymse() and complexity_fuzzyrcmse().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• scale (str or int or list) – A list of scale factors used for coarse-graining the time series. If ‘default’, will use range(len(signal) / (dimension + 10)) (see discussion here). If ‘max’, will use all scales until half the length of the signal. If an integer, will create a range until the specified integer.
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.


• r (float) – Tolerance (i.e., filtering level – maximum absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• composite (bool) – Returns the composite multiscale entropy (CMSE), more accurate than MSE.
• refined (bool) – Returns the ‘refined’ composite MSE (RCMSE; Wu, 2014).
• fuzzy (bool) – Returns the fuzzy (composite) multiscale entropy (FuzzyMSE, FuzzyCMSE or FuzzyRCMSE).
• show (bool) – Show the entropy values for each scale factor.
• **kwargs – Optional arguments.
Returns float – The point estimate of multiscale entropy (MSE), corresponding to the area under the curve of MSE values, which is essentially the sum of sample entropy values over the range of scale factors.
See also: entropy_shannon(), entropy_approximate(), entropy_sample(), entropy_fuzzy()

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy1 = nk.entropy_multiscale(signal, show=True)
>>> entropy1
>>> entropy2 = nk.entropy_multiscale(signal, show=True, composite=True)
>>> entropy2
>>> entropy3 = nk.entropy_multiscale(signal, show=True, refined=True)
>>> entropy3

References

• pyEntropy
• Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
• Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical Review E, 71(2), 021906.
• Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
• Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
• Liu, Q., Wei, Q., Fan, S. Z., Lu, C. W., Lin, T. Y., Abbod, M. F., & Shieh, J. S. (2012). Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery. Entropy, 14(6), 978-992.

entropy_sample(signal, delay=1, dimension=2, r='default', **kwargs)
Sample Entropy (SampEn)
Python implementation of the sample entropy (SampEn) of a signal. This function can be called either via entropy_sample() or complexity_sampen().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (float) – Tolerance (i.e., filtering level – maximum absolute difference between segments). If ‘default’, will be set to 0.2 times the standard deviation of the signal (for dimension = 2).
• **kwargs (optional) – Other arguments.
See also: entropy_shannon(), entropy_approximate(), entropy_fuzzy()

Returns float – The sample entropy as a float value.

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy = nk.entropy_sample(signal)
>>> entropy

entropy_shannon(signal)
Shannon entropy (SE)
Python implementation of Shannon entropy (SE). Entropy is a measure of the unpredictability of a state or, equivalently, of its average information content. Shannon entropy (SE) is one of the first and most basic measures of entropy and a foundational concept of information theory: it quantifies the amount of information in a variable. This function can be called either via entropy_shannon() or complexity_se().
Parameters signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
Returns float – The Shannon entropy as a float value.
See also: entropy_approximate(), entropy_sample(), entropy_fuzzy()


Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>> entropy = nk.entropy_shannon(signal)
>>> entropy

References

• pyEntropy
• EntroPy
• nolds

fractal_correlation(signal, delay=1, dimension=2, r=64, show=False)
Correlation Dimension.
Python implementation of the correlation dimension D2 of a signal. This function can be called either via fractal_correlation() or complexity_d2().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• delay (int) – Time delay (often denoted ‘Tau’, sometimes referred to as ‘lag’). In practice, it is common to have a fixed time lag (corresponding for instance to the sampling rate; Gautama, 2003), or to find a suitable value using some algorithmic heuristics (see delay_optimal()).
• dimension (int) – Embedding dimension (often denoted ‘m’ or ‘d’, sometimes referred to as ‘order’). Typically 2 or 3. It corresponds to the number of compared runs of lagged data. If 2, the embedding returns an array with two columns corresponding to the original signal and its delayed (by Tau) version.
• r (str or int or list) – The sequence of radiuses to test. If an integer is passed, will get an exponential sequence ranging from 2.5% to 50% of the distance range. Methods implemented in other packages can be used via setting r='nolds' or r='Corr_Dim'.
• show (bool) – Plot the correlation dimension if True. Defaults to False.
Returns D2 (float) – The correlation dimension D2.

Examples

>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=2, frequency=5)
>>>
>>> fractal1 = nk.fractal_correlation(signal, r="nolds", show=True)
>>> fractal1
>>> fractal2 = nk.fractal_correlation(signal, r=32, show=True)
>>> fractal2


>>> signal = nk.rsp_simulate(duration=120, sampling_rate=50)
>>>
>>> fractal3 = nk.fractal_correlation(signal, r="nolds", show=True)
>>> fractal3
>>> fractal4 = nk.fractal_correlation(signal, r=32, show=True)
>>> fractal4
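The integer `r` option corresponds to an exponential sequence of radii. A rough sketch of such a sequence follows; note that here the sequence is computed from the signal's value range, which is an assumption for illustration only (NeuroKit2 derives the range from the distances between embedded points):

```python
import numpy as np

def radius_sequence(signal, n=64):
    """Exponential sequence of radii from 2.5% to 50% of the signal's value
    range (illustrative sketch of the documented behaviour)."""
    value_range = np.ptp(signal)  # assumption: value range stands in for the distance range
    return np.geomspace(0.025 * value_range, 0.5 * value_range, num=n)

r = radius_sequence(np.sin(np.linspace(0, 10, 500)), n=8)
```

The correlation sum is then evaluated at each radius, and D2 is estimated from the slope of the log-log relationship.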

References

• Bolea, J., Laguna, P., Remartínez, J. M., Rovira, E., Navarro, A., & Bailón, R. (2014). Methodological framework for estimating the correlation dimension in HRV signals. Computational and Mathematical Methods in Medicine, 2014.
• Boon, M. Y., Henry, B. I., Suttle, C. M., & Dain, S. J. (2008). The correlation dimension: A useful objective measure of the transient visual evoked potential?. Journal of Vision, 8(1), 6-6.
• nolds
• Corr_Dim

fractal_dfa(signal, windows='default', overlap=True, integrate=True, order=1, multifractal=False, q=2, show=False)
(Multifractal) Detrended Fluctuation Analysis (DFA or MFDFA)
Python implementation of Detrended Fluctuation Analysis (DFA) or Multifractal DFA of a signal. Detrended fluctuation analysis, much like the Hurst exponent, is used to find long-term statistical dependencies in time series. This function can be called either via fractal_dfa() or complexity_dfa(), and its multifractal variant can be directly accessed via fractal_mfdfa() or complexity_mfdfa().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• windows (list) – A list containing the lengths of the windows (number of data points in each subseries). Also referred to as ‘lag’ or ‘scale’. If ‘default’, will be set to a logarithmic scale (so that each window scale has the same weight), with a minimum of 4 and a maximum of a tenth of the signal length (so that there are more than 10 windows to calculate the average fluctuation).
• overlap (bool) – Defaults to True, in which case the windows have a 50% overlap with each other; otherwise non-overlapping windows are used.
• integrate (bool) – It is common practice to convert the signal to a random walk (i.e., detrend and integrate, which corresponds to the signal ‘profile’). Note that this leads to a flattening of the signal, which can entail the loss of some details (see Ihlen, 2012 for an explanation).
Note that for strongly anticorrelated signals, this transformation should be applied twice (i.e., provide np.cumsum(signal - np.mean(signal)) instead of signal).
• order (int) – The order of the polynomial trend; 1 for a linear trend.
• multifractal (bool) – If True, compute Multifractal Detrended Fluctuation Analysis (MFDFA), in which case the argument q is taken into account.
• q (list) – The sequence of fractal exponents when multifractal=True. Must be a sequence between -10 and 10 (note that zero will be removed, since the code does


not converge there). Setting q = 2 (default) gives a result close to a standard DFA. For instance, Ihlen (2012) uses q=[-5, -3, -1, 0, 1, 3, 5].
• show (bool) – Visualise the trend between the window size and the fluctuations.
Returns dfa (float) – The DFA coefficient.

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=3, noise=0.05)
>>> dfa1 = nk.fractal_dfa(signal, show=True)
>>> dfa1
>>> dfa2 = nk.fractal_mfdfa(signal, q=np.arange(-3, 4), show=True)
>>> dfa2
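The 'default' window lengths described above — logarithmically spaced from 4 to a tenth of the signal length — can be sketched as follows. This is illustrative only; the exact number of scales and the rounding used by NeuroKit2 are assumptions here:

```python
import numpy as np

def default_windows(n_samples, num=10):
    """Logarithmically spaced window lengths from 4 to n_samples / 10
    (illustrative sketch of the documented 'default' behaviour)."""
    windows = np.logspace(np.log10(4), np.log10(n_samples / 10), num=num)
    return np.unique(np.round(windows).astype(int))

default_windows(3000)  # window lengths between 4 and 300 samples
```

For each window length, the signal profile is split into subseries, a polynomial trend is removed from each, and the DFA exponent is the slope of log(fluctuation) against log(window length).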

References

• Ihlen, E. A. F. E. (2012). Introduction to multifractal detrended fluctuation analysis in Matlab. Frontiers in Physiology, 3, 141.
• Hardstone, R., Poil, S. S., Schiavone, G., Jansen, R., Nikulin, V. V., Mansvelder, H. D., & Linkenkaer-Hansen, K. (2012). Detrended fluctuation analysis: a scale-free view on neuronal oscillations. Frontiers in Physiology, 3, 450.
• nolds
• Youtube introduction

fractal_mandelbrot(size=1000, real_range=(-2, 2), imaginary_range=(-2, 2), threshold=4, iterations=25, buddha=False, show=False)
Generate a Mandelbrot (or a Buddhabrot) fractal.
Vectorized function to efficiently generate an array containing values corresponding to a Mandelbrot fractal.
Parameters
• size (int) – The size in pixels (corresponding to the width of the figure).
• real_range (tuple) – The Mandelbrot set is defined within the (-2, 2) complex space (the real part being the x-axis and the imaginary part the y-axis). Adjusting these ranges can be used to pan, zoom and crop the figure.
• imaginary_range (tuple) – Same as real_range; adjusting these ranges can be used to pan, zoom and crop the figure.
• iterations (int) – Number of iterations.
• threshold (int) – The threshold used; increasing it will increase the sharpness (not used for Buddhabrots).
• buddha (bool) – Whether to return a Buddhabrot.
• show (bool) – Visualize the fractal.
Returns fig – Plot of the fractal.


Examples

>>> import neurokit2 as nk
>>>
>>> # Mandelbrot fractal
>>> nk.fractal_mandelbrot(show=True)
array(...)

>>> # Zoom at seahorse valley
>>> nk.fractal_mandelbrot(real_range=(-0.76, -0.74), imaginary_range=(0.09, 0.11),
...                       iterations=100, show=True)
array(...)
>>>
>>> # Draw manually
>>> import matplotlib.pyplot as plt
>>> m = nk.fractal_mandelbrot(real_range=(-2, 0.75), imaginary_range=(-1.25, 1.25))
>>> plt.imshow(m.T, cmap="viridis")
>>> plt.axis("off")
>>> plt.show()
>>>
>>> # Buddhabrot
>>> b = nk.fractal_mandelbrot(size=1500, real_range=(-2, 0.75), imaginary_range=(-1.25, 1.25),
...                           buddha=True, iterations=200)
>>> plt.imshow(b.T, cmap="gray")
>>> plt.axis("off")
>>> plt.show()
>>>
>>> # Mixed
>>> m = nk.fractal_mandelbrot()
>>> b = nk.fractal_mandelbrot(buddha=True, iterations=200)
>>>
>>> mixed = m - b
>>> plt.imshow(mixed.T, cmap="gray")
>>> plt.axis("off")
>>> plt.show()

fractal_mfdfa(signal, windows='default', overlap=True, integrate=True, order=1, *, multifractal=True, q=2, show=False)
(Multifractal) Detrended Fluctuation Analysis (DFA or MFDFA)
Python implementation of Detrended Fluctuation Analysis (DFA) or Multifractal DFA of a signal. Detrended fluctuation analysis, much like the Hurst exponent, is used to find long-term statistical dependencies in time series. This function can be called either via fractal_dfa() or complexity_dfa(), and its multifractal variant can be directly accessed via fractal_mfdfa() or complexity_mfdfa().
Parameters
• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.
• windows (list) – A list containing the lengths of the windows (number of data points in each subseries). Also referred to as ‘lag’ or ‘scale’. If ‘default’, will be set to a logarithmic scale (so that each window scale has the same weight), with a minimum of 4 and a maximum of a tenth of the signal length (so that there are more than 10 windows to calculate the average fluctuation).


• overlap (bool) – Defaults to True, in which case the windows have a 50% overlap with each other; otherwise non-overlapping windows are used.
• integrate (bool) – It is common practice to convert the signal to a random walk (i.e., detrend and integrate, which corresponds to the signal ‘profile’). Note that this leads to a flattening of the signal, which can entail the loss of some details (see Ihlen, 2012 for an explanation). Note that for strongly anticorrelated signals, this transformation should be applied twice (i.e., provide np.cumsum(signal - np.mean(signal)) instead of signal).
• order (int) – The order of the polynomial trend; 1 for a linear trend.
• multifractal (bool) – If True, compute Multifractal Detrended Fluctuation Analysis (MFDFA), in which case the argument q is taken into account.
• q (list) – The sequence of fractal exponents when multifractal=True. Must be a sequence between -10 and 10 (note that zero will be removed, since the code does not converge there). Setting q = 2 (default) gives a result close to a standard DFA. For instance, Ihlen (2012) uses q=[-5, -3, -1, 0, 1, 3, 5].
• show (bool) – Visualise the trend between the window size and the fluctuations.
Returns dfa (float) – The DFA coefficient.

Examples

>>> import numpy as np
>>> import neurokit2 as nk
>>>
>>> signal = nk.signal_simulate(duration=3, noise=0.05)
>>> dfa1 = nk.fractal_dfa(signal, show=True)
>>> dfa1
>>> dfa2 = nk.fractal_mfdfa(signal, q=np.arange(-3, 4), show=True)
>>> dfa2

References

• Ihlen, E. A. F. E. (2012). Introduction to multifractal detrended fluctuation analysis in Matlab. Frontiers in Physiology, 3, 141.
• Hardstone, R., Poil, S. S., Schiavone, G., Jansen, R., Nikulin, V. V., Mansvelder, H. D., & Linkenkaer-Hansen, K. (2012). Detrended fluctuation analysis: a scale-free view on neuronal oscillations. Frontiers in Physiology, 3, 450.
• nolds
• Youtube introduction


7.14 Miscellaneous

Submodule for NeuroKit.

bio_analyze(data, sampling_rate=1000, method='auto')
Automated analysis of bio signals.
Wrapper for the other bio analysis functions for electrocardiography (ECG), respiration (RSP), electrodermal activity (EDA) and electromyography (EMG) signals.
Parameters
• data (DataFrame) – The DataFrame containing all the processed signals, typically produced by bio_process(), ecg_process(), rsp_process(), eda_process(), or emg_process().
• sampling_rate (int) – The sampling frequency of the signals (in Hz, i.e., samples/second). Defaults to 1000.
• method (str) – Can be one of ‘event-related’ for event-related analysis on epochs, or ‘interval-related’ for analysis on longer periods of data. Defaults to ‘auto’, where the right method is chosen based on the mean duration of the data (‘event-related’ for durations under 10 s).
Returns DataFrame – DataFrame of the analyzed bio features. See the docstrings of ecg_analyze(), rsp_analyze(), eda_analyze() and emg_analyze() for more details. Also returns Respiratory Sinus Arrhythmia features produced by ecg_rsa() if interval-related analysis is carried out.
See also: ecg_analyze(), rsp_analyze(), eda_analyze(), emg_analyze()

Examples

>>> import neurokit2 as nk
>>>
>>> # Example 1: Event-related analysis
>>> # Download data
>>> data = nk.data("bio_eventrelated_100hz")
>>>
>>> # Process the data
>>> df, info = nk.bio_process(ecg=data["ECG"], rsp=data["RSP"], eda=data["EDA"],
...                           keep=data["Photosensor"], sampling_rate=100)
>>>
>>> # Build epochs
>>> events = nk.events_find(data["Photosensor"], threshold_keep='below',
...                         event_conditions=["Negative", "Neutral",
...                                           "Neutral", "Negative"])
>>> epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-0.1,
...                           epochs_end=1.9)
>>>
>>> # Analyze
>>> nk.bio_analyze(epochs, sampling_rate=100)
>>>
>>> # Example 2: Interval-related analysis
>>> # Download data
>>> data = nk.data("bio_resting_5min_100hz")
>>>
>>> # Process the data


>>> df, info = nk.bio_process(ecg=data["ECG"], rsp=data["RSP"], sampling_rate=100)
>>>
>>> # Analyze
>>> nk.bio_analyze(df, sampling_rate=100)

bio_process(ecg=None, rsp=None, eda=None, emg=None, keep=None, sampling_rate=1000)
Automated processing of bio signals.
Wrapper for the other bio processing functions for electrocardiography (ECG), respiration (RSP), electrodermal activity (EDA) and electromyography (EMG) signals.
Parameters
• data (DataFrame) – The DataFrame containing all the respective signals (e.g., ecg, rsp, Photosensor etc.). If provided, there is no need to fill in the other arguments denoting the channel inputs. Defaults to None.
• ecg (Union[list, np.array, pd.Series]) – The raw ECG channel.
• rsp (Union[list, np.array, pd.Series]) – The raw RSP channel (as measured, for instance, by a respiration belt).
• eda (Union[list, np.array, pd.Series]) – The raw EDA channel.
• emg (Union[list, np.array, pd.Series]) – The raw EMG channel.
• keep (DataFrame) – DataFrame or channels to add by concatenation to the processed DataFrame (for instance, the Photosensor channel).
• sampling_rate (int) – The sampling frequency of the signals (in Hz, i.e., samples/second). Defaults to 1000.
Returns
• bio_df (DataFrame) – DataFrame of the following processed bio features:
– “ECG”: the raw signal, the cleaned signal, the heart rate, and the R-peak indexes. Also generated by ecg_process().
– “RSP”: the raw signal, the cleaned signal, the rate, and the amplitude. Also generated by rsp_process().
– “EDA”: the raw signal, the cleaned signal, the tonic component, the phasic component, indexes of the SCR onsets, peaks, amplitudes, and half-recovery times. Also generated by eda_process().
– “EMG”: the raw signal, the cleaned signal, and amplitudes. Also generated by emg_process().
– “RSA”: Respiratory Sinus Arrhythmia features generated by ecg_rsa(), if both ECG and RSP are provided.
• bio_info (dict) – A dictionary containing the samples of peaks, troughs, amplitudes, onsets, offsets, periods of activation, and recovery times of the respective processed signals.
See also: ecg_process(), rsp_process(), eda_process(), emg_process()


Example

>>> import neurokit2 as nk
>>>
>>> ecg = nk.ecg_simulate(duration=30, sampling_rate=250)
>>> rsp = nk.rsp_simulate(duration=30, sampling_rate=250)
>>> eda = nk.eda_simulate(duration=30, sampling_rate=250, scr_number=3)
>>> emg = nk.emg_simulate(duration=30, sampling_rate=250, burst_number=3)
>>>
>>> bio_df, bio_info = nk.bio_process(ecg=ecg, rsp=rsp, eda=eda, emg=emg, sampling_rate=250)
>>>
>>> # Visualize all signals
>>> fig = nk.standardize(bio_df).plot(subplots=True)
>>> fig

Submodule for NeuroKit.

as_vector(x)
Convert the input to a vector.

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import neurokit2 as nk
>>>
>>> x = nk.as_vector(x=range(3))
>>> y = nk.as_vector(x=[0, 1, 2])
>>> z = nk.as_vector(x=np.array([0, 1, 2]))
>>> z
>>>
>>> x = nk.as_vector(x=0)
>>> x
>>>
>>> x = nk.as_vector(x=pd.Series([0, 1, 2]))
>>> y = nk.as_vector(x=pd.DataFrame([0, 1, 2]))
>>> y

expspace(start, stop, num=50, base=1)
Exponential range.
Creates a list of integer values of a given length from start to stop, spread by an exponential function.
Parameters
• start (int) – Minimum range value.
• stop (int) – Maximum range value.
• num (int) – Number of samples to generate. Default is 50. Must be non-negative.
• base (float) – If 1, will use np.exp(); if 2, will use np.exp2().
Returns array – An array of integer values spread by the exponential function.


Examples

>>> import neurokit2 as nk
>>> nk.expspace(start=4, stop=100, num=10)
array([  4,   6,   8,  12,  17,  24,  34,  49,  70, 100])

find_closest(closest_to, list_to_search_in, direction='both', strictly=False, return_index=False)
Find the closest number in an array to a given number x.
Parameters
• closest_to (float) – The target number(s) to find the closest of.
• list_to_search_in (list) – The list of values to look in.
• direction (str) – “both” for smaller or greater, “greater” for only greater numbers and “smaller” for only the closest smaller ones.
• strictly (bool) – False for strictly superior or inferior, or True for including equal values.
• return_index (bool) – If True, will return the index of the closest value in the list.
Returns closest (int) – The closest number in the array.

Example

>>> import neurokit2 as nk
>>>
>>> # Single number
>>> x = nk.find_closest(1.8, [3, 5, 6, 1, 2])
>>> x
>>>
>>> y = nk.find_closest(1.8, [3, 5, 6, 1, 2], return_index=True)
>>> y
>>>
>>> # Vectorized version
>>> x = nk.find_closest([1.8, 3.6], [3, 5, 6, 1, 2])
>>> x

find_consecutive(x)
Find and group consecutive values in a list.
Parameters x (list) – The list to look in.
Returns list – A list of tuples corresponding to groups containing all the consecutive numbers.

Examples

>>> import neurokit2 as nk
>>>
>>> x = [2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20]
>>> nk.find_consecutive(x)
[(2, 3, 4, 5), (12, 13, 14, 15, 16, 17), (20,)]

listify(**kwargs)
Transforms arguments into lists of the same length.


Examples

>>> import neurokit2 as nk
>>>
>>> nk.listify(a=3, b=[3, 5], c=[3])
{'a': [3, 3], 'b': [3, 5], 'c': [3, 3]}
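The miscellaneous helpers above are straightforward to sketch in plain Python/NumPy. The `*_sketch` functions below are illustrative approximations that reproduce the documented example outputs, not NeuroKit2's actual source; in particular, the rounding in `expspace_sketch`, the `direction='both'` restriction in `find_closest_sketch`, and the pad-with-last-element rule in `listify_sketch` are assumptions:

```python
import numpy as np
from itertools import groupby

def expspace_sketch(start, stop, num=50):
    # Exponentially spaced integers from start to stop (base=1 case only).
    seq = np.exp(np.linspace(np.log(start), np.log(stop), num))
    return np.round(seq).astype(int)

def find_closest_sketch(closest_to, values, return_index=False):
    # Closest value by absolute difference (direction='both' case only).
    values = np.asarray(values)
    i = int(np.argmin(np.abs(values - closest_to)))
    return i if return_index else values[i]

def find_consecutive_sketch(x):
    # Runs of consecutive integers share a constant (value - index) key.
    return [tuple(v for _, v in grp)
            for _, grp in groupby(enumerate(x), key=lambda iv: iv[1] - iv[0])]

def listify_sketch(**kwargs):
    # Broadcast scalars and short lists to the longest argument's length,
    # repeating the last element (an assumed padding rule).
    args = {k: v if isinstance(v, list) else [v] for k, v in kwargs.items()}
    n = max(len(v) for v in args.values())
    return {k: v + [v[-1]] * (n - len(v)) for k, v in args.items()}

expspace_sketch(4, 100, num=10)   # -> array([4, 6, 8, 12, 17, 24, 34, 49, 70, 100])
find_closest_sketch(1.8, [3, 5, 6, 1, 2])      # -> 2
find_consecutive_sketch([2, 3, 4, 5, 12, 20])  # -> [(2, 3, 4, 5), (12,), (20,)]
listify_sketch(a=3, b=[3, 5])                  # -> {'a': [3, 3], 'b': [3, 5]}
```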

CHAPTER EIGHT

BENCHMARKS

Contents:

8.1 Benchmarking of ECG Preprocessing Methods

We’d like to publish this study, but unfortunately we currently don’t have the time. If you want to help make it happen, please contact us!

8.1.1 Introduction

This work is a replication and extension of the work by Porr & Howell (2019), which compared the performance of different popular R-peak detectors.

8.1.2 Databases

Glasgow University Database

The GUDB database (Howell & Porr, 2018) contains ECGs from 25 subjects. Each subject was recorded performing 5 different two-minute tasks (sitting, doing a maths test on a tablet, walking on a treadmill, running on a treadmill, using a hand bike). The sampling rate is 250 Hz for all conditions. The script to download and format the database, using the Python package by Bernd Porr, is available in the repository.

MIT-BIH Arrhythmia Database

The MIT-BIH Arrhythmia Database (MIT-Arrhythmia; Moody & Mark, 2001) contains 48 excerpts of 30-minute two-channel ambulatory ECG recordings sampled at 360 Hz, as well as 25 additional recordings from the same participants including common but clinically significant arrhythmias (denoted as the MIT-Arrhythmia-x database). The script to download and format the database is available in the repository.


MIT-BIH Normal Sinus Rhythm Database

This database includes 18 clean long-term ECG recordings. Due to memory limits, we only kept the second hour of each participant’s recording. The script to download and format the database is available in the repository.
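The “second hour” trimming described above amounts to simple index arithmetic on the sample array. A minimal sketch, where the sampling rate is a hypothetical placeholder (not necessarily that of this database):

```python
def second_hour(signal, sampling_rate):
    """Keep only the second hour of a recording (illustrative helper)."""
    one_hour = 60 * 60 * sampling_rate  # number of samples in one hour
    return signal[one_hour : 2 * one_hour]

sampling_rate = 128  # hypothetical value, for illustration only
ecg = list(range(3 * 60 * 60 * sampling_rate))  # 3 hours of dummy samples
chunk = second_hour(ecg, sampling_rate)  # exactly one hour of samples
```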

Concatenate them together

import pandas as pd

# Load ECGs
ecgs_gudb = pd.read_csv("../../data/gudb/ECGs.csv")
ecgs_mit1 = pd.read_csv("../../data/mit_arrhythmia/ECGs.csv")
ecgs_mit2 = pd.read_csv("../../data/mit_normal/ECGs.csv")

# Load true R-peak locations
rpeaks_gudb = pd.read_csv("../../data/gudb/Rpeaks.csv")
rpeaks_mit1 = pd.read_csv("../../data/mit_arrhythmia/Rpeaks.csv")
rpeaks_mit2 = pd.read_csv("../../data/mit_normal/Rpeaks.csv")

# Concatenate
ecgs = pd.concat([ecgs_gudb, ecgs_mit1, ecgs_mit2], ignore_index=True)
rpeaks = pd.concat([rpeaks_gudb, rpeaks_mit1, rpeaks_mit2], ignore_index=True)

8.1.3 Study 1: Comparing Different R-Peaks Detection Algorithms

Procedure

Setup Functions

import neurokit2 as nk

def neurokit(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="neurokit")
    return info["ECG_R_Peaks"]

def pantompkins1985(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="pantompkins1985")
    return info["ECG_R_Peaks"]

def hamilton2002(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="hamilton2002")
    return info["ECG_R_Peaks"]

def martinez2003(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="martinez2003")
    return info["ECG_R_Peaks"]

def christov2004(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="christov2004")
    return info["ECG_R_Peaks"]

def gamboa2008(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="gamboa2008")
    return info["ECG_R_Peaks"]

def elgendi2010(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="elgendi2010")
    return info["ECG_R_Peaks"]

def engzeemod2012(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="engzeemod2012")
    return info["ECG_R_Peaks"]

def kalidas2017(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="kalidas2017")
    return info["ECG_R_Peaks"]

def rodrigues2020(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="rodrigues2020")
    return info["ECG_R_Peaks"]

Run the Benchmarking

Note: This takes a long time (several hours).

results = []
for method in [neurokit, pantompkins1985, hamilton2002, martinez2003, christov2004,
               gamboa2008, elgendi2010, engzeemod2012, kalidas2017, rodrigues2020]:
    result = nk.benchmark_ecg_preprocessing(method, ecgs, rpeaks)
    result["Method"] = method.__name__
    results.append(result)
results = pd.concat(results).reset_index(drop=True)

results.to_csv("data_detectors.csv", index=False)

Results

library(tidyverse)
library(easystats)
library(lme4)

data <- read.csv("data_detectors.csv", stringsAsFactors = FALSE) %>%
  mutate(Database = ifelse(str_detect(Database, "GUDB"),
                           paste0(str_replace(Database, "GUDB_", "GUDB ("), ")"),
                           Database),
         Method = fct_relevel(Method, "neurokit", "pantompkins1985", "hamilton2002",
                              "martinez2003", "christov2004", "gamboa2008",
                              "elgendi2010", "engzeemod2012", "kalidas2017",
                              "rodrigues2020"),
         Participant = paste0(Database, Participant))


colors <- c("neurokit" = "#E91E63", "pantompkins1985" = "#f44336",
            "hamilton2002" = "#FF5722", "martinez2003" = "#FF9800",
            "christov2004" = "#FFC107", "gamboa2008" = "#4CAF50",
            "elgendi2010" = "#009688", "engzeemod2012" = "#2196F3",
            "kalidas2017" = "#3F51B5", "rodrigues2020" = "#9C27B0")

Errors and bugs

data %>%
  mutate(Error = case_when(
    Error == "index -1 is out of bounds for axis 0 with size 0" ~ "index -1 out of bounds",
    Error == "index 0 is out of bounds for axis 0 with size 0" ~ "index 0 out of bounds",
    TRUE ~ Error)) %>%
  group_by(Database, Method) %>%
  mutate(n = n()) %>%
  group_by(Database, Method, Error) %>%
  summarise(Percentage = n() / unique(n)) %>%
  ungroup() %>%
  mutate(Error = fct_relevel(Error, "None")) %>%
  ggplot(aes(x = Error, y = Percentage, fill = Method)) +
  geom_bar(stat = "identity", position = position_dodge2(preserve = "single")) +
  facet_wrap(~Database, nrow = 2) +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_fill_manual(values = colors)

Conclusion: It seems that gamboa2008 and martinez2003 are particularly prone to errors, especially in the case of a noisy ECG signal. Aside from that, the other algorithms are quite resistant and bug-free.

data <- filter(data, Error == "None")
data <- filter(data, !is.na(Score))

Computation Time

Descriptive Statistics

# Normalize duration
data <- data %>%
  mutate(Duration = (Duration) / (Recording_Length * Sampling_Rate))

data %>%
  ggplot(aes(x = Method, y = Duration, fill = Method)) +
  geom_jitter2(aes(color = Method, group = Database), size = 3, alpha = 0.2,
               position = position_jitterdodge()) +
  geom_boxplot(aes(alpha = Database), outlier.alpha = 0) +
  geom_hline(yintercept = 0, linetype = "dotted") +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_alpha_manual(values = seq(0, 1, length.out = 8)) +
  scale_color_manual(values = colors) +
  scale_fill_manual(values = colors) +
  scale_y_sqrt() +
  ylab("Duration (seconds per data sample)")


Statistical Modelling

model <- lmer(Duration ~ Method + (1|Database) + (1|Participant), data = data)

means <- modelbased::estimate_means(model)
arrange(means, Mean)
##             Method     Mean       SE    CI_low  CI_high
## 1       gamboa2008 8.47e-06 9.34e-06 -9.98e-06 2.69e-05
## 2         neurokit 2.08e-05 7.20e-06  6.43e-06 3.52e-05
## 3     martinez2003 4.68e-05 8.00e-06  3.09e-05 6.27e-05
## 4      kalidas2017 7.77e-05 7.19e-06  6.33e-05 9.20e-05
## 5    rodrigues2020 1.17e-04 7.20e-06  1.03e-04 1.32e-04
## 6     hamilton2002 1.51e-04 7.19e-06  1.36e-04 1.65e-04
## 7    engzeemod2012 5.33e-04 7.24e-06  5.19e-04 5.47e-04
## 8  pantompkins1985 5.96e-04 7.19e-06  5.82e-04 6.11e-04
## 9      elgendi2010 9.26e-04 7.19e-06  9.12e-04 9.41e-04
## 10    christov2004 1.32e-03 7.19e-06  1.31e-03 1.34e-03

means %>%
  ggplot(aes(x = Method, y = Mean, color = Method)) +
  geom_line(aes(group = 1), size = 1) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high), size = 1) +
  geom_hline(yintercept = 0, linetype = "dotted") +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_color_manual(values = colors) +
  ylab("Duration (seconds per data sample)")

Conclusion: It seems that gamboa2008 and neurokit are the fastest methods, followed by martinez2003, kalidas2017, rodrigues2020 and hamilton2002. The remaining methods are substantially slower.


Accuracy

Note: The accuracy is computed as the absolute distance from the original “true” R-peak locations. As such, the closer to zero, the better the accuracy.
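To make that note concrete, one way such a score could be computed is the mean absolute distance, in seconds, between each detected peak and its nearest true peak. This is an illustrative sketch; the benchmark’s actual scoring function may differ in details:

```python
def peak_error(detected, truth, sampling_rate):
    """Mean absolute distance (in seconds) from each detected R-peak to the
    closest true R-peak. Illustrative metric: zero means perfect detection."""
    # For each detected peak index, find the distance to the nearest true peak
    distances = [min(abs(d - t) for t in truth) for d in detected]
    return sum(distances) / len(distances) / sampling_rate

# A detector that is off by exactly 2 samples on each peak, at 250 Hz:
peak_error([102, 348, 602], [100, 350, 600], sampling_rate=250)  # 0.008
```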

Descriptive Statistics

data <- data %>%
  mutate(Outlier = performance::check_outliers(Score,
           threshold = list(zscore = stats::qnorm(p = 1 - 0.000001)))) %>%
  filter(Outlier == 0)

data %>%
  ggplot(aes(x = Database, y = Score)) +
  geom_boxplot(aes(fill = Method), outlier.alpha = 0, alpha = 1) +
  geom_jitter2(aes(color = Method, group = Method), size = 3, alpha = 0.2,
               position = position_jitterdodge()) +
  geom_hline(yintercept = 0, linetype = "dotted") +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_color_manual(values = colors) +
  scale_fill_manual(values = colors) +
  scale_y_sqrt() +
  ylab("Amount of Error")


Statistical Modelling

model <- lmer(Score ~ Method + (1|Database) + (1|Participant), data = data)

means <- modelbased::estimate_means(model)
arrange(means, abs(Mean))
##             Method   Mean      SE  CI_low CI_high
## 1         neurokit 0.0143 0.00448 0.00424  0.0244
## 2      kalidas2017 0.0151 0.00448 0.00500  0.0251
## 3     christov2004 0.0152 0.00455 0.00507  0.0253
## 4    rodrigues2020 0.0304 0.00449 0.02037  0.0405
## 5    engzeemod2012 0.0320 0.00462 0.02180  0.0422
## 6     martinez2003 0.0337 0.00507 0.02290  0.0445
## 7  pantompkins1985 0.0338 0.00453 0.02364  0.0439
## 8     hamilton2002 0.0387 0.00450 0.02865  0.0488
## 9      elgendi2010 0.0541 0.00459 0.04397  0.0643
## 10      gamboa2008 0.0928 0.00768 0.07750  0.1081

means %>%
  ggplot(aes(x = Method, y = Mean, color = Method)) +
  geom_line(aes(group = 1), size = 1) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high), size = 1) +
  geom_hline(yintercept = 0, linetype = "dotted") +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_color_manual(values = colors) +
  ylab("Amount of Error")

Conclusion: It seems that neurokit, kalidas2017 and christov2004 are the most accurate algorithms for detecting R-peaks. This pattern of results differs somewhat from Porr & Howell (2019), which outlined engzeemod2012, elgendi2010 and kalidas2017 as the most accurate, and christov2004, hamilton2002 and pantompkins1985 as the worst. Discrepancies could be due to differences in data and analysis: here we used more databases and modelled them while respecting their hierarchical structure using mixed models.

Conclusion

Based on the accuracy / execution-time trade-off, neurokit appears to be the best R-peak detection method, followed by kalidas2017.

8.1.4 Study 2: Normalization

Procedure

Setup Functions

import neurokit2 as nk

def none(ecg, sampling_rate):
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="neurokit")
    return info["ECG_R_Peaks"]

def mean_detrend(ecg, sampling_rate):
    ecg = nk.signal_detrend(ecg, order=0)
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="neurokit")
    return info["ECG_R_Peaks"]

def standardize(ecg, sampling_rate):
    ecg = nk.standardize(ecg)
    signal, info = nk.ecg_peaks(ecg, sampling_rate=sampling_rate, method="neurokit")
    return info["ECG_R_Peaks"]
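For intuition, the two normalizations compared here boil down to very simple transformations. This pure-Python sketch is illustrative only: the actual nk.signal_detrend and nk.standardize operate on arrays and may differ in details (e.g. the degrees of freedom used for the standard deviation):

```python
def mean_remove(x):
    """Order-0 detrending: subtract the mean of the signal."""
    m = sum(x) / len(x)
    return [v - m for v in x]

def zscore(x):
    """Standardization: subtract the mean, divide by the (population) SD."""
    m = sum(x) / len(x)
    sd = (sum((v - m) ** 2 for v in x) / len(x)) ** 0.5
    return [(v - m) / sd for v in x]

mean_remove([1, 2, 3])  # [-1.0, 0.0, 1.0]
```

Neither transformation changes the relative positions of the peaks, which is one intuition for why they might not affect peak detection much.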

Run the Benchmarking

Note: This takes a long time (several hours).

results = []
for method in [none, mean_detrend, standardize]:
    result = nk.benchmark_ecg_preprocessing(method, ecgs, rpeaks)
    result["Method"] = method.__name__
    results.append(result)
results = pd.concat(results).reset_index(drop=True)

results.to_csv("data_normalization.csv", index=False)


Results

library(tidyverse)
library(easystats)
library(lme4)

data <- read.csv("data_normalization.csv", stringsAsFactors = FALSE) %>%
  mutate(Database = ifelse(str_detect(Database, "GUDB"),
                           paste0(str_replace(Database, "GUDB_", "GUDB ("), ")"),
                           Database),
         Method = fct_relevel(Method, "none", "mean_removal", "standardization"),
         Participant = paste0(Database, Participant)) %>%
  filter(Error == "None") %>%
  filter(!is.na(Score))

colors <- c("none" = "#607D8B", "mean_removal" = "#673AB7", "standardization" = "#00BCD4")

Accuracy

Descriptive Statistics

data <- data %>%
  mutate(Outlier = performance::check_outliers(Score,
           threshold = list(zscore = stats::qnorm(p = 1 - 0.000001)))) %>%
  filter(Outlier == 0)

data %>%
  ggplot(aes(x = Database, y = Score)) +
  geom_boxplot(aes(fill = Method), outlier.alpha = 0, alpha = 1) +
  geom_jitter2(aes(color = Method, group = Method), size = 3, alpha = 0.2,
               position = position_jitterdodge()) +
  geom_hline(yintercept = 0, linetype = "dotted") +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_color_manual(values = colors) +
  scale_fill_manual(values = colors) +
  scale_y_sqrt() +
  ylab("Amount of Error")


Statistical Modelling

model <- lmer(Score ~ Method + (1|Database) + (1|Participant), data = data)

modelbased::estimate_contrasts(model)
## Level1       | Level2          | Difference | SE       | 95% CI                |    t |     df |     p | Difference (std.)
## -------------------------------------------------------------------------------------------------------------------------
## mean_removal | none            |   1.59e-07 | 5.20e-07 | [-0.00e+00, 0.00e+00] | 0.31 | 370.00 | 0.759 | 1.64e-05
## mean_removal | standardization |   7.75e-07 | 5.20e-07 | [-0.00e+00, 0.00e+00] | 1.49 | 370.00 | 0.410 | 7.99e-05
## none         | standardization |   6.15e-07 | 5.20e-07 | [-0.00e+00, 0.00e+00] | 1.18 | 370.00 | 0.474 | 6.34e-05

means <- modelbased::estimate_means(model)
arrange(means, abs(Mean))
##            Method    Mean      SE  CI_low CI_high
## 1 standardization 0.00803 0.00136 0.00482  0.0112
## 2            none 0.00803 0.00136 0.00482  0.0112
## 3    mean_removal 0.00803 0.00136 0.00482  0.0112

means %>%
  ggplot(aes(x = Method, y = Mean, color = Method)) +
  geom_line(aes(group = 1), size = 1) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high), size = 1) +
  theme_modern() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
  scale_color_manual(values = colors) +
  ylab("Amount of Error")

Conclusion

Normalization does not add any significant benefit for the neurokit method.

8.2 References

CHAPTER NINE

DATASETS

NeuroKit includes datasets that can be used for testing. These datasets are not downloaded automatically with the package (to avoid increasing its size), but can be downloaded via the nk.data() function.

9.1 ECG (1000 Hz)

Type             Frequency   Signals
Single-subject   1000 Hz     ECG

data = nk.data(dataset="ecg_1000hz")

9.2 ECG - pandas (3000 Hz)

Type             Frequency   Signals
Single-subject   3000 Hz     ECG

data = nk.data(dataset="ecg_3000_pandas")

9.3 Event-related (4 events)

Type                         Frequency   Signals
Single-subject with events   100 Hz      ECG, EDA, RSP, Photosensor

data = nk.data(dataset="bio_eventrelated_100hz")

• Used in the following docstrings:
  – bio_analyze()
  – ecg_analyze()
  – ecg_eventrelated()
  – ecg_rsa()


  – ecg_rsp()
  – eda_analyze()
  – eda_eventrelated()
  – eda_phasic()
  – epochs_create()
  – epochs_plot()
  – epochs_to_df()
  – rsp_analyze()
  – rsp_eventrelated()
  – signal_power()

• Used in the following examples:
  – Event-related Analysis
  – Analyze Respiratory Rate Variability (RRV)

9.4 Resting state (5 min)

Type                           Frequency   Signals
Single-subject resting state   100 Hz      ECG, PPG, RSP

data = nk.data(dataset="bio_resting_5min_100hz")

• Used in the following docstrings:
  – bio_analyze()
  – ecg_analyze()
  – ecg_intervalrelated()
  – rsp_analyze()
  – rsp_intervalrelated()

• Used in the following examples:
  – Interval-related Analysis

9.5 Resting state (8 min)

Type                           Frequency   Signals
Single-subject resting state   100 Hz      ECG, RSP, EDA, Photosensor

data = nk.data(dataset="bio_resting_8min_100hz")

• Used in the following docstrings:


  – eda_analyze()
  – eda_intervalrelated()


CHAPTER TEN

CONTRIBUTING

We’re glad that you are considering joining the team and the community of open-science. You can find step-by-step guides below that will show you how to make a perfect first contribution.

All people are very much welcome to contribute to code, documentation, testing and suggestions. This package aims at being beginner-friendly. Even if you’re new to this open-source way of life, new to coding and GitHub stuff, we encourage you to try submitting pull requests (PRs).

• “I’d like to help, but I’m not good enough with programming yet”

It’s alright, don’t worry! You can always dig in the code, in the documentation or tests. There are always some typos to fix, some docs to improve, some details to add, some code lines to document, some tests to add. . . Just explore the code structure, find where functions are located, where documentation is written, where tests are made, and see what you can fix. Even the smallest PRs are appreciated.

• “I don’t know how to code at all :(“

You can still contribute to the documentation by creating tutorials, help and info!

• “I’d like to help, but I don’t know where to start”

You can look around the issue section to find some features / ideas / bugs to start working on. You can also open a new issue just to say that you’re there, interested in helping out. We might have some ideas adapted to your skills.

• “I’m not sure if my suggestion or idea is worthwhile”

Enough with the impostor syndrome! All suggestions and opinions are good, and even if it’s just a thought or so, it’s always good to receive feedback.

• “Why should I waste my time with this? Do I get any credit?”

Software contributions are getting more and more valued in the academic world, so it is a good time to collaborate with us! All contributors will be added to the authors list. We’re also very keen on including them in eventual academic publications.

Anyway, starting is the most important! You will then enter a whole new world, a new fantastic point of view. . .
So fork this repo, make some changes and submit them. We will then work together to make the best out of it.

Contributing guides:


10.1 Understanding NeuroKit

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see our guides).

Let’s start by reviewing some basic coding principles that might help you get familiar with NeuroKit.

If you are reading this, it could be because you don’t feel comfortable enough with Python and NeuroKit (yet), and you impatiently want to get to know it in order to start looking at your data.

“Tous les chemins mènent à Rome” (all roads lead to Rome).

Let me start by saying that there are multiple ways you’ll be able to access the documentation in order to get to know different functions, follow examples and other tutorials. So keep in mind that you will eventually find your own workflow, and that these tricks are shared simply to help you get to know your options.

10.1.1 1. readthedocs

You probably already saw the README file that shows up on NeuroKit’s GitHub home page (right after the list of directories). It contains a brief overview of the project, some examples and figures. But, most importantly, there are the links that will take you to the Documentation. Documentation basically means code explanations, references and examples. In the Documentation section of the README, you’ll find links to the readthedocs website like this one:

Hint: Did you know that you can access the documentation website using the rtfd domain name https://neurokit2.rtfd.io/, which stands for READ THE F**** DOCS!

And a link to the API (or Application Programming Interface, containing the list of functions) like this one:

All the info you will see on that webpage is rendered directly from the code, meaning that the website reads the code and generates an HTML page from it. That’s why it’s important to structure your code in a standard manner (you can learn how to contribute here). The API is organized by types of signals. You’ll find that each function has a description, and that most of them refer to peer-reviewed papers or other GitHub repositories. Also, for each function, parameters are described in order. Some of them take many different options, all of which should be described as well. If the options are not explained, they should be. It’s not your fault you don’t understand. That’s why we need you to contribute.


Example

In the ECG section, the ecg_findpeaks function takes 4 parameters. One of them is method: each method refers to a peer-reviewed paper that published a peak-detection algorithm. You can also see what the function returns and the type of the returned data (integers, floating-point numbers, strings, etc.). Additionally, you can find related functions in the See also part. A small example of the function should also be found there. You can copy-paste it into your Python kernel, or into a Jupyter Notebook, to see what it does.

10.1.2 2. The code on Github

Now that you’re familiar with readthedocs website, let’s go back to the repo. What you have to keep in mind is that everything you saw in the previous section is in the Github repository. The website pages, the lines that you are currently reading, are stored in the repository, which is then automatically uploaded to the website. Everything is cross-referenced, everything relates to the core which can be found in the repo. If you got here, you probably already know that a repository is like a tree containing different branches or directories that eventually lead you to a script, in which you can find a function.

Example

Ready for inception? Let’s find the location of the file you’re currently reading. Go under docs and find it by yourself. . . it should be straightforward.

Hint: As you can see, there are several sections (see the Table of Content on the left), and we are in the tutorials section. So you might want to look into the tutorials folder :)

See! It’s super handy because you can visit the scripts without downloading them. GitHub also renders Jupyter Notebooks quite well, so you can see not only the script, but also figures and markdown sections where the coder discusses results.

10.1.3 3. The code on YOUR machine

Now, you’re probably telling yourself: if I want to use these functions, they should be somewhere on my computer! For that, I encourage you to visit the installation page if you haven’t already. Once Python is installed, its default path should be:

Python directory

Windows

• C:\Users\<username>\anaconda3\ (if the directory doesn’t match, just search for the folder name anaconda3 or miniconda3).


Mac

• /Users/<username>/anaconda3

Or, if you’re using WinPython, it should be in its installation folder (e.g., C:\Users\<username>\Desktop\WPy-3710\). Linux users should know that already.

Environment and NeuroKit directory

NeuroKit, along with all the other packages, is located in the Python directory, in the site-packages folder (itself in the Lib folder). It should be located under the environment where you installed it (if you didn’t do it already, set up a computing environment; otherwise, you can run into problems when running your code). The directory should look like this:

• C:\Users\<username>\anaconda3\envs\<yourenv>\Lib\site-packages\neurokit2

Or, if you’re using WinPython:

• C:\Users\<username>\Desktop\WPy-3710\python-3.7.1.amd64\Lib\site-packages\neurokit2

Example

Take the ECG again: from the specified directory, I can see that the different folders are arranged in the same way as on the readthedocs website. Let’s say I want to go back to the same function, ecg_findpeaks(): I’d click on the ecg folder, and from there I can see the source code for the function in ecg_findpeaks.py.

10.2 Contributing guide

NeuroKit2 welcomes everyone to contribute to code, documentation, testing and suggestions. This package aims at being beginner-friendly. If you’re not yet familiar with how contributions are made to open-source packages, or with how to use GitHub, this guide is for you! Let’s dive right into it!

10.2.1 NeuroKit’s style

Structure and code

• The NeuroKit package is organized into submodules, such as ecg, signal, statistics, etc. New functions should be created at the appropriate places.
• The API (the functions) should be consistent, with functions starting with a prefix (plot_, ecg_, eda_, etc.) so that the user can easily find them by typing the “intuitive” prefix.
• Authors of code contributions are invited to follow the PEP 8 style sheet to write some nice (and readable) Python.
• That being said, human readability should always be favoured over anything else. Ideally, we would like the code in NeuroKit to be understandable even by non-programmers.

258 Chapter 10. Contributing NeuroKit2, Release 0.0.39

• Contrary to Python recommendations, we prefer some nicely nested loops rather than complex one-liners [“that” for s if h in i for t in range(“don’t”) if “understand” is False].
• The maximum line length is 100 characters.
• Please document and comment your code, so that the purpose of each step (or code line) is stated in a clear and understandable way.
• Don’t forget to add tests and documentation (a description, examples, etc.) to your functions.

Run code checks

Once you’re satisfied with the code you’ve written, you will need to run some checks to make sure it is “standardized”. You will need to open the command line and install the following packages:

pip install isort black docformatter flake8 pylint

Now, navigate to the folder where your script is by typing cd C:\the\folder\of\my\file. Once there, you can run the following commands:

isort myfile.py -l 120 --balanced --multi-line 3 --lines-between-types 1 --lines-after-imports 2 --trailing-comma
black myfile.py --line-length 120
docformatter myfile.py --wrap-summaries 120 --wrap-descriptions 113 --blank --in-place
flake8 myfile.py --max-line-length=127 --max-complexity=10 --ignore E303,C901,E203,W503
pylint myfile.py --max-line-length=127 --load-plugins=pylint.extensions.docparams --load-plugins=pylint.extensions.docstyle --variable-naming-style=any --argument-naming-style=any --reports=n --suggestion-mode=y --disable=E303 --disable=R0913 --disable=R0801 --disable=C0114 --disable=E203 --disable=E0401 --disable=W9006 --disable=C0330 --disable=R0914 --disable=R0912 --disable=R0915 --disable=W0102 --disable=W0511 --disable=C1801 --disable=C0111 --disable=R1705 --disable=R1720 --disable=C0301 --disable=C0415 --disable=C0103 --disable=C0302 --disable=R1716 --disable=W0632 --disable=E1136 --extension-pkg-whitelist=numpy

The first three commands will make some modifications to your code so that it is nicely formatted, while the last two will run some checks to detect any remaining issues. Please try to fix them!

PS: If you want to check the whole package (i.e., all the files of the package), run:

isort neurokit2 -l 120 --balanced --multi-line 3 --lines-between-types 1 --lines-after-imports 2 --trailing-comma --skip neurokit2/complexity/__init__.py --recursive
black neurokit2 --line-length 120
docformatter neurokit2 --wrap-summaries 120 --wrap-descriptions 113 --blank --in-place --recursive
flake8 neurokit2 --exclude neurokit2/__init__.py --max-line-length=127 --max-complexity=10 --ignore E303,C901,E203,W503
pylint neurokit2 --max-line-length=127 --load-plugins=pylint.extensions.docparams --load-plugins=pylint.extensions.docstyle --variable-naming-style=any --argument-naming-style=any --reports=n --suggestion-mode=y --disable=E303 --disable=R0913 --disable=R0801 --disable=C0114 --disable=E203 --disable=E0401 --disable=W9006 --disable=C0330 --disable=R0914 --disable=R0912 --disable=R0915 --disable=W0102 --disable=W0511 --disable=C1801 --disable=C0111 --disable=R1705 --disable=R1720 --disable=C0301 --disable=C0415 --disable=C0103 --disable=C0302 --disable=R1716 --disable=W0632 --disable=E1136 --extension-pkg-whitelist=numpy --exit-zero


Avoid Semantic Errors

Most errors detected by our code checks can be fixed automatically with isort, black, and docformatter. This leaves us with the semantic errors picked up by pylint, the last style check, which often have to be fixed manually. Below is a list of the most common semantic errors that occur when writing code/documentation; before you commit any changes, do make sure you have fixed these.

Documentation

• Missing function arguments in Parameters and Returns.
• In internal functions, a missing Returns section is detected only if Parameters is documented but is not followed by returns documentation.
• Failure to detect documentation of arguments when they are documented together in one line:

a, b, c, discard, n, sampling_rate, x0 : int

will result in a pylint error like "a, b, c, discard, n, sampling_rate, x0" missing in parameter documentation (missing-param-doc), so do document each argument separately.

• Argument name different from documentation.

Code

• Unused arguments.
• Unused variables.
• Merge isinstance calls, for example: if isinstance(ecg, (list, pd.Series)) rather than if isinstance(ecg, list) or isinstance(ecg, pd.Series).
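The merged-isinstance rule above can be shown with a tiny hypothetical helper (using built-in types here, so the snippet stays dependency-free):

```python
def as_list(x):
    """Coerce the input to a list (hypothetical helper for illustration)."""
    # Preferred: one isinstance call with a tuple of types...
    if isinstance(x, (list, tuple)):
        return list(x)
    # ...instead of: if isinstance(x, list) or isinstance(x, tuple)
    return [x]

as_list((1, 2))  # [1, 2]
as_list(3)       # [3]
```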

Development workflow

The NeuroKit GitHub repository has two main branches, master and dev. The typical workflow is to work and make changes on the dev branch. This dev branch has a pull request (PR) opened to track individual commits (changes). Every now and then (when a sufficient number of changes have been made), the dev branch is merged into master, leading to an update of the version number and an upload to PyPI.

The important thing is that you should not directly make changes on the master branch, because master is usually behind dev (which means, for instance, that the things you are changing on master may have already been changed on dev). master should be a stable, tested branch, and dev is the place to experiment.

This is a summary of the typical workflow for contributing using GitHub (a detailed guide is available below):

1. Download GitHub Desktop and follow the small tutorial that it proposes.
2. Fork the NeuroKit repository (this can be done on the GitHub website page by clicking on the Fork button), and clone it using GitHub Desktop to your local computer (it will copy over the whole repo from GitHub to your local machine).
3. In GitHub Desktop, switch to the dev branch. You are now on the dev branch (of your own fork).
4. From there, create a new branch, called for example “bugfix-functionX” or “feature-readEEG” or “typofix”.
5. Make some changes and push them (this will update your fork).
6. Create a pull request (PR) from your fork to the “origin” (the original repo) dev branch.
7. This will trigger automated checks that you can explore and fix.
8. Wait for it to be merged into dev, and later see it being merged into master.


10.2.2 How to use GitHub to contribute

Step 1: Fork it

A fork is a copy of a repository. Working with the fork allows you to freely experiment with changes without affecting the original project. Hit the Fork button in the top right corner of the page and in a few seconds, you will have a copy of the repository in your own GitHub account.

Now, that is the remote copy of the project. The next step is to make a local copy on your computer. While you can explore Git to manage your GitHub developments, we recommend downloading GitHub Desktop instead. It makes the process way easier and more straightforward.

Step 2: Clone it

Cloning allows you to make a local copy of any repository on GitHub. Go to the File menu, click Clone Repository, and since you have forked NeuroKit2, you should be able to find it easily under Your repositories.


Choose the local path where you want to save your local copy and, as simple as that, you have a working repository on your computer.

Step 3: Find it and fix it

And here is where the fun begins. You can start contributing by fixing a bug (or even a typo in the code) that has been annoying you. Or you can go to the issue section to hunt for issues that you can address. For example, here, as I tried to run the example in the ecg_fixpeaks() file, I ran into a bug: a typo error!

Fix it and hit the save button! That's one contribution made to the package! To save the changes you made (e.g., the typo that was just fixed) to your local copy of the repository, the next step is to commit them.


Step 4: Commit it and push it

In GitHub Desktop, you will now find the changes that you made highlighted in red (removed) or green (added). The first thing to do is to switch from the default Commit to Master to Commit to dev. Always commit to your dev branch, as it is the branch with the latest changes. Then give the changes you made a good and succinct title and hit the Commit button.

Committing saves your changes in your local copy of the repository. In order to have the changes saved in your remote copy as well, you have to push the commit that you just made.

Step 5: Create a pull request

The last step to make your contribution official is to create a pull request.


Go to your remote repository on the GitHub page; the New Pull Request button is located right on top of the folders. Do remember to change your branch to dev, since your commits were pushed to the dev branch previously. Now all that is left is for the maintainers of the package to review your work; they can either request additional changes or merge it into the original repository.

Step 5: Let’s do it

Let’s do it for real! If you have a particular feature in mind that you would want to add, we would recommend first opening an issue to let us know, so we can eventually guide you and give you some advice. And if you don’t know where to start or what to do, then read our ideas for first contributions. Good luck

10.2.3 Useful reads

For instance, one way of starting to contribute could be to improve this file: fix typos, clarify things, add resource links, etc. :)

• Understanding the GitHub flow
• How to create a Pull Request
• Why and How to Contribute

10.2.4 What’s next?

• Ideas for first contributions


10.3 Ideas for first contributions

Hint: Spotted a typo? Would like to add something or make a correction? Join us by contributing (see our guides).

Now that you’re familiar with how to use GitHub, you’re ready to get your hands dirty and contribute to open-science? But you’re not sure where to start or what to do? We got you covered! In this guide, we will discuss the two best types of contributions for beginners, as they are easy to make, super useful and safe (you cannot break the package ).

10.3.1 Look for “good first contribution” issues

If you know how to code a bit, you can check out the issues that have been flagged as good for a first contribution. These are issues or feature ideas that we believe are accessible to beginners. If you're interested, do not hesitate to comment on these issues to get more information or to ask for guidance. We'll be really happy to help in any way we can.

10.3.2 Improving documentation

One of the easiest things is to improve, complete or fix the documentation of functions. For instance, the ecg_simulate() function has documentation with a general description, a description of the arguments, some examples, etc. As you've surely noticed, sometimes more details would be needed, some typos are present, or some references could be added. The documentation of a function is located alongside its definition (the code of the function). If you've read understanding NeuroKit, you know that the code of the ecg_simulate() function is here. As you can see, just below the function name there is a big string (starting and ending with """) containing the documentation. This is called the docstring. If you modify it there, it will be updated automatically on the website!
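To illustrate, NeuroKit2 docstrings follow the NumPy documentation convention: a one-line summary, a longer description, then Parameters, Returns and Examples sections. The function below is purely hypothetical (it is not part of the package) and only sketches that structure:

```python
def signal_double(signal):
    """Double the values of a signal.

    A hypothetical function used only to illustrate the typical docstring
    layout (NumPy convention) found alongside each function's code.

    Parameters
    ----------
    signal : list
        The signal values to double.

    Returns
    -------
    list
        The doubled signal values.

    Examples
    --------
    >>> signal_double([1, 2, 3])
    [2, 4, 6]
    """
    return [2 * x for x in signal]
```

Editing the text between the triple quotes is all it takes; the website documentation is rebuilt from it automatically.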

10.3.3 Adding tests

Tests are super important for programmers to make sure that the changes we make in one location don't create unexpected changes elsewhere. Adding them is a good first issue for new contributors, as it takes little time, doesn't require advanced programming skills, and is a good occasion to discover functions and how they work. By clicking on the "coverage" badge under the logo on the README page, then on the "neurokit2" folder button at the bottom, you can see the breakdown of testing coverage for each submodule (folder), and if you click on one of them, the coverage for each individual file/function (example here). This percentage of coverage needs to be improved. The common approach is to identify functions, methods or arguments that are not tested, and then write a small test to cover them, i.e., a small self-contained piece of code that runs through a given portion of code, depends on its correct functioning, and whose output is checked (e.g., assert x == 3). This test is then added to the appropriate testing file. For instance, let's imagine the following function:


def domsfunction(x, method="great"):
    if method == "great":
        z = x + 3
    else:
        z = x + 4
    return z

In order to test that function, I have to write some code that "runs through" it, placed in a function whose name starts with test_*, for instance:

def test_domsfunction():
    # Test default parameters
    output = domsfunction(1)
    assert output == 4

This goes through the function, whose default method is "great" and which therefore adds 3 to the input (here 1), so the result should be 4, and the test makes sure that it is. However, we also need a second test to cover the other method of the function (when method != "great"), for instance:

def test_domsfunction():
    # Test default parameters
    output = domsfunction(1)
    assert output == 4

    # Test other method
    output = domsfunction(1, method="whatever")
    assert isinstance(output, int)

I could have written assert output == 5; however, I decided instead to check the type of the output (whether it is an integer). That's the thing with testing: it requires creativity and, in more complex cases, being clever about what and how to test. But it's an interesting challenge! You can see examples of tests in the existing test files.
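As a side note, pytest (the test runner used by most Python projects, NeuroKit2 included) also lets you cover several methods with a single parametrized test. This is only a sketch, reusing the hypothetical domsfunction from the example above and assuming pytest is installed:

```python
import pytest


def domsfunction(x, method="great"):
    # Hypothetical function from the example above.
    if method == "great":
        z = x + 3
    else:
        z = x + 4
    return z


# One test function, run once per (method, expected) pair,
# so both branches of domsfunction() are covered.
@pytest.mark.parametrize("method, expected", [("great", 4), ("whatever", 5)])
def test_domsfunction(method, expected):
    assert domsfunction(1, method=method) == expected
```

Parametrization keeps the test file short while still exercising every code path, which is exactly what the coverage report measures.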

10.3.4 Adding examples and tutorials

How to write

The documentation that you see on the website is automatically built by the hosting service, readthedocs, from reStructuredText (RST) files (a syntax similar to Markdown) or from Jupyter notebooks (.ipynb). Notebooks are preferred if your example contains code and images.

Where to add the files

The documentation files that we need to write are located in the /docs/ folder. For instance, if you want to add an example, you need to create a new file, for instance myexample.rst, in the docs/examples/ folder. If you want to add images to an .rst file, it is best to put them in the /docs/img/ folder and to reference their link. However, in order for this file to be easily accessible from the website, you also need to add it to the table of contents located in the index file (just add the name of the file without the extension). Do not hesitate to ask for more info by creating an issue!

Python Module Index

neurokit2.bio, 234
neurokit2.complexity, 202
neurokit2.data, 190
neurokit2.ecg, 107
neurokit2.eda, 146
neurokit2.eeg, 163
neurokit2.emg, 157
neurokit2.epochs, 191
neurokit2.events, 187
neurokit2.hrv, 128
neurokit2.misc, 236
neurokit2.ppg, 124
neurokit2.rsp, 135
neurokit2.signal, 165
neurokit2.stats, 194

