DD143X EXAMENSARBETE INOM DATALOGI, GRUNDNIVÅ, 15 HP STOCKHOLM, SVERIGE 2016

Heatmap Visualization of Neural Frequency Data
Visualisering av neural frekvensdata som värmekarta

RODRIGO ROA RODRÍGUEZ

ROBERT LUNDIN

Supervisor: Alexander Kozlov
Examiner: Örjan Ekeberg

KTH SKOLAN FÖR DATAVETENSKAP OCH KOMMUNIKATION

Abstract

Complex spatial relationships and patterns in multivariate data are relatively simple to identify visually but difficult to detect computationally. For this reason, Anivis, an interactive tool for visual exploration of multivariate quantitative pure serial periodic data, was developed. The data has four dimensions: depth, laterality, frequency and time. The data was visualized as an animated heatmap by mapping depth and laterality to coordinates in a pixel grid and frequency to color. Transfer functions were devised to map a single variable to color through parametric curves. Anivis implemented heatmap generation through both weighted sum and deconvolution for comparison reasons. Deconvolution exhibited better theoretical and practical performance. In addition to the heatmap visualization, a scatter-plot was added in order to visualize the causal relationships between data points and high value areas in the heatmap visualization. Performance and applicability of the tool were tested and verified on experimental data from the Karolinska Institute's Department of Neuroscience.

Abstrakt

Complex spatial patterns and relationships in multivariate data are relatively hard to identify through computation but simple to identify visually. Visualizing data for this kind of data analysis is common in many different fields. This motivated the development of Anivis, an interactive tool for visual exploration of multivariate quantitative data on neural activity. Anivis uses datasets based on experimental data from a research group at the Karolinska Institute's Department of Neuroscience. These four-dimensional datasets consist of measurements from neurons in the form of their position, their activity in the form of frequency, and the point in time. The data is used to generate an animated heatmap, where the neurons' frequency values are shown as color. The frequency values were converted to color values via transfer functions that map numerical values to colors through parametric curves. Anivis implements two different methods for generating the heatmap, weighted sums and deconvolution. The two methods were compared, and deconvolution proved to be the theoretically and practically most efficient method. The development of Anivis also showed the need for a scatter-plot to visualize the relationship between the measured frequency values and the spatial frequency distribution in the heatmap.

Contents

1 Introduction
1.1 Problem statement
1.2 Scope and limitations
1.3 Disposition
2 Background
2.1 Neurons
2.2 Reflexes
2.3 Previous studies
2.4 Visualization
2.5 Color Spaces
2.5.1 Monochrome
2.5.2 RGB
2.5.3 HSL
2.5.4 HCL
2.6 Color transfer functions
2.7 Heatmaps
2.8 Kernel image filtering
3 Method
3.1 Data structure
3.2 Visual mappings
3.2.1 Mapping depth and laterality to on screen coordinates
3.2.2 Mapping frequency to color
3.3 Heatmap calculation
3.4 Animation
3.5 Interaction
4 Results
4.1 Tools and design choices
4.2 Scatter-plot implementation
4.3 Heatmap implementation
4.3.1 Weighted sum
4.3.2 Deconvolution
4.4 Color transfer functions
4.4.1 Grayscale
4.4.2 HSL Rainbow
4.4.3 Cubehelix
4.4.4 HCL Heat
4.5 Rendering
4.6 Animation
4.6.1 Animation loop
4.6.2 Frame rate capping
4.7 Interactivity
5 Discussion
5.1 Heatmaps in Anivis
5.2 Image quality of deconvolution
5.3 Scatter-plot as a complement to heatmap visualization
5.4 Future research
6 Conclusion

1 Introduction

In neuroscience, the study of reflex-related neural activity seeks to correlate activity patterns with the different reflexes. Neural activity consists of oscillations made of individual neuron spikes. Experiments by Zelenin, Hsu, Lyalka, Orlovsky, and Deliagina (2014) suggest that the neuron activity broadly matches motor response. This makes it desirable to explore the data in terms of spatial and temporal trends rather than the numerical values of individual neurons. Stimuli-induced patterns of neural activity must be isolated from background activity (Zelenin et al., 2014). Heatmaps could be suitable for this purpose since complex spatial relationships and patterns in multivariate data are relatively simple to identify visually but difficult to detect computationally, particularly in interactive visualizations (Rheingans, 1992).

Heatmaps are a popular visual structure for encoding quantitative intensity values spatially as color (Duchowski, Price, Meyer, & Orero, 2012). While originally developed to illustrate financial market information, the visual structure has gained wide adoption in biotechnology and medicine (Akers, 2015). This might be because heatmaps produce visualizations similar to those resulting from well-established imaging techniques such as computed tomography (CT) scans, where color transfer functions can be used to differentiate between anatomical components such as bone, cartilage, muscle and blood vessels by mapping densities to different colors (for an example see Kindlmann et al., 2005). With regard to brain activity, heatmaps produce visualizations similar to positron emission tomography (PET), which assesses neural activity indirectly by measuring blood flow or other metabolic processes in different regions of the brain (Handbook of Laboratory and Diagnostic Tests, 2013).

Figure 1: (A) A 3D reconstruction of the brain and eyes from a CT scan, (B) A transaxial slice of the brain from a PET scan and (C) a heatmap visualization of neural frequency data. Sources: (A) By Dale Mahalko - Own work, CC BY-SA 3.0. (B) By Jens Maus - Own work, Public Domain. (C) By Pavel Zelenin - Own work, all rights reserved.

Although static visualizations of neural activity data as heatmaps have previously been employed by Zelenin et al. (2014), an exploratory search of current literature in neuroscience failed to reveal any similar attempts by other researchers. This visualization is termed static because Zelenin et al. (2014) created individual images rather than an animation or an interactive program. While animated visualizations of neural activity were found in current literature, these come from imaging living specimens and do not represent frequency (e.g. Fetcho & O'Malley, 1995; Muto, Ohkura, Abe, Nakai, & Kawakami, 2013). The only interactive visualization of neuron frequency data that was found in the literature was a bar chart that was not animated, did not include spatial information and only presented deviation from mean frequency for individual neurons (Carlis & Konstan, 1998).

This project aims to develop a spatiotemporal visualization of neural frequency data in the form of an interactive animated heatmap. The purpose of this visualization is to be used as a tool in data analysis, assisting in pattern recognition and hypothesis formulation.

1.1 Problem statement

Complex spatial relationships and patterns in multivariate data are difficult to detect computationally but relatively simple to identify visually, so the goal of this project will be to visually represent multivariate neural frequency data as an interactive animated heatmap.

1.2 Scope and limitations

The method of visualization will be theorized first and then implemented as an application. The implementation will be assessed in terms of time and memory complexity and image quality. Furthermore, optimizations and heuristics will be attempted to improve its performance.

This project will not analyze or investigate the characteristics of activity distribution between neurons. Although the data originates in a study in neuroscience, this project does not intend to conduct any further research in that field. The dataset's significance to this project is limited to the numerical values recorded from the experiments. What is of interest in this case is not the interpretation of the values nor their medical and biological connotations, but the structure of the data from a computer science perspective. This project will also not analyze how to best visually represent or highlight distribution trends across neurons nor how the visualization impacts data analysis. Instead, this report will discuss how the data is processed in order to visualize it as a heatmap.

1.3 Disposition

This report will document the findings from the development of the Anivis program.

Section 2 Background will provide the underlying theory in neuroscience and visualization on which Anivis is built. It will also present the neurological studies behind the neural frequency data.

Section 3 Method will describe in detail the structure of the neural data as well as the theoretical reasoning behind the implementation of Anivis.

Section 4 Results presents the resulting implementation of the theory and functionality described in section 3, as well as any insights gained from the implementation process.

Section 5 Discussion will weigh the chosen methods of implementation from section 4 from a critical standpoint, as well as consider possible alternative solutions and future research.

Section 6 Conclusion will present a general assessment of the report based on sections 3, 4 and 5.

2 Background

This section will present the underlying theory in neuroscience and visualization. An introduction to neuron and reflex theory is followed by previous studies in neuroscience on postural limb reflex induced neuron activity. This section will also explain the need for information visualization, introduce heatmaps as a visual structure for representation of data and describe how transfer functions convert quantitative values to color.

2.1 Neurons

Neurons are cells that transfer information throughout the body by means of electrical or chemical impulses. Information is exchanged between neurons via interconnected neural networks. Neural activity consists of oscillations where neurons spike individually (Goldstein, 2014).

2.2 Reflexes

Reflexes are unconsciously triggered bodily movements or actions built into the nervous system. It has been shown that some reflexes, mainly those located in the spine, still function in bodies devoid of consciousness or brain activity (Spittler, Wortmann, von During, & Gehlen, 2000). Furthermore, some studies have suggested that spinal nerves can generate neural activity to support postural activities without input from the cortex (Honeycutt, Gottschall, & Nichols, 2009).

2.3 Previous studies

This project builds upon earlier research on neural activity in relation to postural reflexes, specifically on the experiments performed by Hsu, Zelenin, Orlovsky, and Deliagina (2012) and Zelenin et al. (2014).

Hsu et al. (2012) studied the neuron activity in relation to activation of postural limb reflexes (PLR). The study was based on measurements of firing frequencies of neurons in a gray matter section in the spinal cord of decerebrated quadrupeds subjected to galvanic vestibular stimulation. The activation of PLR evoked neural activity, which was recorded through electrodes implanted in the gray matter area of a cross-section of the spinal cord. With its forelimbs suspended in a hammock, the decerebrated quadruped's hindlimbs were positioned in a hemiflexed stance on a horizontal platform. The platform consisted of a left and a right part, both of which were capable of rotation, causing the hind limbs to tilt accordingly. Each test recording consisted of the readings from tilting the platform to an angle and then returning it to the starting position, producing a movement cycle. The tilt of the platform resulted in flexion-extension movements of the hips and hind limbs, which evoked PLRs.

Neuron activity is measured by the frequency of impulse transmissions flowing through the neuron, also called the firing frequency. Each recording was divided into 12 bins by grouping according to the movement performed during the test. For each bin, the neuron firing frequencies were calculated and averaged over all identical bins from all movement cycles. The bins were then grouped into flexion and extension bins. The firing frequency for each neuron was calculated by averaging each group and then taking the mean of the flexion and extension bins.

From these measurements, the authors grouped the neurons into two groups based on their firing frequency during limb extension and flexion, named E- and F-neurons respectively. The authors also stated that the findings indicated that the spinal neurons are contributing to the generation of PLRs.

Zelenin et al. (2014) conducted a follow-up study to Hsu et al. (2012) with a similar experimental design, frequency calculations and grouping by E- and F-neuron classification. Zelenin et al. (2014) aimed to reveal distribution trends of neurons across the gray matter in the spinal cord based on PLR-related activity, and additionally tried to relate the PLR-related neurons' activity to the sensory input from the limbs of the body.

To reveal spatial trends in neural activity, a heatmap was generated based on the mean frequency of each neuron. Local means for each point in space were calculated through a weighted average (eq. 1).

\bar{f} = \frac{\sum_{i=1}^{n} w_i f_i}{\sum_{i=1}^{n} w_i} \qquad (1)

To interpolate in between neurons, the frequency should decay in relationship to distance to the recorded neuron. Frequencies were treated as if normally distributed in space, so Gaussian weights were utilized (eq. 2).

w(d) = e^{-\frac{d^2}{D^2}} \qquad (2)

where d is the distance to a recorded neuron and D is the standard deviation, which was arbitrarily set to 0.4 by Zelenin et al. (2014). The local frequency means were then mapped to color by a transfer function, using MATLAB's jet color map.

The results by Zelenin et al. (2014) suggest that neuron-wise activity matches motor response and provide further evidence for spinal neurons contributing to the generation of PLRs. Furthermore, they suggested an expansion of each of the previous E- and F-neuron groupings into four different groups based on the pattern of activity during the extension and flexion motions. These results motivate the interest in motor control research towards the analysis of spatiotemporal patterns of activity in neural populations within the spinal cord.

2.4 Visualization

Visualization can be defined as visual representations of data that amplify human cognition (Card, Mackinlay, & Shneiderman, 1999). Besides being a means for conveying both abstract and concrete ideas, visualization can also be used to elucidate the nature of the data itself. More specifically, information visualization refers to visualizations that represent data visually in order to take advantage of perceptual qualities of human vision such as parallel processing, grouping, abstraction and aggregation (Tory & Moller, 2004).

It could be argued that visualization is unnecessary since statistical analysis of data is sufficient to find underlying trends in the data and its distribution. However, in Graphs in Statistical Analysis, Anscombe (1973) demonstrated the importance of visualizing data by showing that numerical data alone could be misleading. This was exemplified through four datasets known as Anscombe's Quartet (see table 1).

        I              II             III            IV
   x1    y1       x2    y2       x3    y3       x4    y4
   10    8.04     10    9.14     10    7.46      8    6.58
    8    6.95      8    8.14      8    6.77      8    5.76
   13    7.58     13    8.74     13   12.74      8    7.71
    9    8.81      9    8.77      9    7.11      8    8.84
   11    8.33     11    9.26     11    7.81      8    8.47
   14    9.96     14    8.1      14    8.84      8    7.04
    6    7.24      6    6.13      6    6.08      8    5.25
    4    4.26      4    3.1       4    5.39     19   12.5
   12   10.84     12    9.13     12    8.15      8    5.56
    7    4.82      7    7.26      7    6.42      8    7.91
    5    5.68      5    4.74      5    5.73      8    6.89

Table 1: Anscombe's Quartet

What is significant about Anscombe's Quartet is that though all four datasets have the same sample mean, variance, correlation coefficient and linear regression, their graphical representation promptly reveals them to be widely different in nature (see figure 2).

Figure 2: Anscombe's quartet visualized as scatter-plots. The blue lines are the linear regression of the data points.

This raises the need for tools for overviewing complex data so that patterns and anomalies can be found. However, there are many hurdles that need to be overcome in order to visualize neural activity patterns. For one, the visual representation of multidimensional data, such as neural activity through time, is graphically challenging, as it requires the representation of a higher dimensional domain in a two dimensional plane. This is usually accomplished through the use of abstraction, where additional dimensions are mapped to color and geometry (Yau, 2014).

2.5 Color Spaces

Color spaces, also referred to as color models, are mathematical abstractions of color. Instead of representing colors as wavelengths in the electromagnetic spectrum, color spaces represent color perceptually as linear combinations of basis vectors.

2.5.1 Monochrome

The simplest imaginable color space able to represent different numerical values consists of an individual basis vector. Due to there being no other basis vectors, all colors are just the basis vector multiplied with a constant coefficient.

While simple to implement, monochrome might not be desirable for visualization since an untrained human with normal vision can discriminate a few million colors (Leong, 2006) but only about 30 shades of gray (Kreit et al., 2012; for a practical example see figure 3).

Figure 3: 50 shades of gray in a sequential disposition. Any two consecutive shades might be indistinguishable to the human eye.

2.5.2 RGB

The RGB color space derives its name from the three components that it utilizes as basis vectors: red, green and blue. Hence, the color space has a cube geometry (see figure 4). These three colors are used as primary colors. Different amounts of light of the primary colors are added in order to create all possible output colors; hence this is an additive color model.

Figure 4: The RGB color space. Source: By Jacob Rus, SharkD, derivative work, CC BY-SA 3.0, via Wikimedia Commons

RGB is commonly used in computer systems since it is the de facto method for specifying colors on a cathode-ray tube or other light-emitting displays (Oxford Reference, 2008).

However, RGB is counter-intuitive to people accustomed to subtractive color models (PhotonStarTecnology Ltd., 2016). People might even be unaware that additive color models exist, since subtractive color mixing is what is encountered in paints, dyes, ink, pigments and other media. Hence, predicting the resulting color from the parametrization can be a challenging task. It has even been argued that it is virtually impossible, since the dimensions do not correspond to any perceptual properties of the resulting color (Zeileis, Hornik, & Murrell, 2009).

2.5.3 HSL

HSL is a rearrangement of the RGB geometry to be more intuitive and perceptually sensible. In this case the rearrangement consists of using polar coordinates to represent the color space in the form of a cylinder (see figure 5).

HSL is considered a natural color representation model since colors are decomposed based on physiological characteristics (such as hue, saturation and luminance) (Sarifuddin & Missaoui, 2005). However, the luminance component of the HSL model, lightness, does not match human perception (Sarifuddin & Missaoui, 2005). This is because HSL assumes that all hues have the same perceptual luminance, which is not the case for human beings. Hence, any form of metric distance such as the Euclidean norm or cylindric distance will fail to capture differences in color as perceived by humans (Sarifuddin & Missaoui, 2005).

Figure 5: The HSL color space. Source: By Jacob Rus, SharkD, derivative work, CC BY-SA 3.0, via Wikimedia Commons

2.5.4 HCL

HCL is a color space designed by Sarifuddin and Missaoui (2005) to more accurately represent color perceptually. The authors sought to create a more effective color space for content-based image and video retrieval through image processing and computer vision.

Figure 6: The HCV color space, similar in appearance to the HCL color space, (for an actual image of the HCL color space please see Sarifuddin & Missaoui, 2005). Source: By Jacob Rus, SharkD, derivative work, CC BY-SA 3.0, via Wikimedia Commons

2.6 Color transfer functions

A transfer function is a function that maps different data values to an optical property, such as color or opacity (Pfister et al., 2001). There are three types of color schemes generated by transfer functions: qualitative, sequential and diverging (Brewer, 1994).

A qualitative color scheme maps arbitrary values to arbitrary colors. However, a qualitative color scheme is only suitable for nominal or categorical data (Brewer, 1994).

Figure 7: An illustrative example of a qualitative color scheme.

The transfer functions for sequential and diverging color schemes, on the other hand, do not operate arbitrarily. Since color spaces are spanned by orthogonal basis vectors, where a color is a linear combination of the vectors, any parametric curve that traverses the color space could be used to map an input parameter value to a specific color output (Rheingans, 1992).

If the domain and range of the parametric color curve are specified so that values are logically arranged, from high to low, as well as light to dark, or vice versa, the result is a sequential color scheme (Brewer, 1994).

Figure 8: An illustrative example of a sequential color scheme.

On the other hand, if what is desired is to visualize deviation from a critical value, then two light to dark transitions can be placed to either side of the critical value. This is what is called a diverging color scheme (Brewer, 1994).

Figure 9: An illustrative example of a diverging color scheme.

2.7 Heatmaps

Heatmaps, also sometimes called intensity maps or point density interpolation, are visual structures for a spatial visualization of data (DeBoer, 2015). The term heatmap was first coined by Cormac Kinney in the 90s (Neovision Hypersystems, 1998). Originally developed as a tool for financial analysis, heatmaps have gained broad approval in biological and medical research (Akers, 2015). There even exist prior examples of neural activity visualizations that encode a variable as a color spatially, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI).

At a fundamental level, heatmaps are implemented as spatial matrices with cells colored after their values (Huang, 2013). Specifically, heatmaps encode a continuous quantitative variable as a color in space (Peterson, 2009) through a color transfer function to a sequential color scheme (For an illustrative example see figure 10).

Figure 10: A heatmap using the grayscale sequential color scheme. High values are mapped to lighter colors and low values to dark colors. The mapping function's domain is 0 to 9 and the range black to white.

2.8 Kernel image filtering

Convolution is a method in image processing that is used to apply effects such as filtering or blurring. When processing an image, convolution is applied to determine a value, like opacity or color, of the pixels in the image. The kernel matrix is a parameter of the convolution that defines the kind of transformation to be applied to the individual pixels. In detail, the value of each pixel is determined by aggregating the values of the pixel and its neighbors multiplied element-wise with the values in the kernel matrix (Ludwig, 2007).
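As an illustration of kernel filtering, the following sketch convolves a grayscale image, stored as a two-dimensional array, with a kernel matrix. The function name, the zero padding at the borders and the example kernel are our own assumptions for the illustration and are not part of any particular library.

// Convolve a 2D grayscale image with a kernel matrix (hypothetical helper).
// Pixels outside the image are treated as zero (zero padding).
function convolve(image, kernel) {
  var height = image.length, width = image[0].length;
  var m = kernel.length, half = Math.floor(m / 2);
  var result = [];
  for (var y = 0; y < height; y++) {
    result.push([]);
    for (var x = 0; x < width; x++) {
      var sum = 0;
      for (var j = 0; j < m; j++) {
        for (var i = 0; i < m; i++) {
          var yy = y + j - half, xx = x + i - half;
          if (yy < 0 || yy >= height || xx < 0 || xx >= width) continue;
          sum += image[yy][xx] * kernel[j][i];
        }
      }
      result[y].push(sum);
    }
  }
  return result;
}

// Example: a 3x3 box blur kernel.
var blur = [[1/9, 1/9, 1/9], [1/9, 1/9, 1/9], [1/9, 1/9, 1/9]];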

Deconvolution is the reverse process to convolution. Unlike convolution, where each pixel is determined by the neighboring pixel values, deconvolution instead takes an individual pixel with given values and uses its value to determine that of the neighboring pixels.

One of the applications of deconvolution is to recreate images without a priori knowledge of the image being recreated. Recreation might be necessary if either the image is incomplete or has been distorted or modified by an unknown method. The latter is known as blind deconvolution (Ayers & Dainty, 1988), though the same method can also be applied for reconstruction of the former (Herrity, Raich, & Hero III, 2008).

3 Method

This section presents how to build the heatmap visualization tool in theory. This tool graphically represents the data specified in subsection 3.1 Data structure. The visualization maps the neurons' locations to a 2-dimensional pixel grid, generates frequency values for all pixels based on their distance to the data points and maps these values to color. Time is not mapped, so variation in frequency values across time must be presented as animation. Additionally, suggestions are given regarding possible user interaction.

3.1 Data structure

In the development of the visualization program, we will use two datasets provided by a research group, led by Professor Tatiana Deliagina, from the Department of Neuroscience at the Karolinska Institute. The datasets contain neural data from a spinal cord section of decerebrated preparations as described in Zelenin et al. (2014). These datasets contain a neuron's spatial position and a range of firing frequencies over time. The two datasets contain readings of 210 and 107 neurons respectively, both with frequency readings for every 1 ms and a total test time of 407 ms. Since the time intervals are equidistant, the data can be classified as what is known as pure serial periodic data (Carlis & Konstan, 1998).

A mathematical model of the data would be a four dimensional body with laterality, depth, frequency and time as dimensions. Nevertheless, since this is pure serial periodic data, time is uninteresting to visualize and only the three other dimensions are visualized. Furthermore, regular time intervals make it possible to subdivide the body in time into equidistant three dimensional slices of laterality, depth and frequency. It is these slices that are utilized as frames in the heatmap visualization, with laterality being mapped to width, depth to height and frequency to color.

However, the input data files are in comma separated values (CSV) format. CSV files store data in plain text in a tabular format, where every comma delimits a cell and every line break a row. Due to this limitation, the four dimensional data is encoded as a two dimensional matrix where each row contains the data of an individual neuron. Each row consists of laterality and depth (mm), followed by additional frequency fields, one for every millisecond of the recording. The frequency fields are arranged sequentially in chronological order and contain the firing frequency (kHz).

x_0      y_0      f_{0,0}      ...   f_{0,406}
x_1      y_1      f_{1,0}      ...   f_{1,406}
...      ...      ...          ...   ...
x_{n-1}  y_{n-1}  f_{n-1,0}    ...   f_{n-1,406}

Table 2: CSV data file structure. x_i is the laterality of data point i in millimeters, y_i is the depth of data point i in millimeters, and f_{i,t} is the firing frequency of data point i at millisecond t of the test, where 0 ≤ i < n.

3.2 Visual mappings

3.2.1 Mapping depth and laterality to on screen coordinates

In the visualization, depth and laterality are converted to pixel coordinates (u, v) by linear scaling. More specifically, by multiplying them by an arbitrary pixel scaling factor proportional to window height. Hence:

x = \text{laterality} \cdot \text{pixel factor}
y = \text{depth} \cdot \text{pixel factor} \qquad (3)

Since depth and laterality are given as continuous millimeter coordinates and a pixel grid is discrete, these values would normally require some kind of integer approximation. Nevertheless, since neurons are never directly displayed on screen but only used to calculate pixel frequency values, no discretization is required (for more detail on how pixel frequencies are calculated see subsection 3.3 Heatmap calculation).
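A minimal sketch of this mapping, assuming a hypothetical pixelFactor constant derived from the window height:

// Map a data point's laterality and depth (mm) to pixel coordinates.
// pixelFactor is an arbitrary scaling constant proportional to window height.
function toPixelCoordinates(lateralityMm, depthMm, pixelFactor) {
  return {
    x: lateralityMm * pixelFactor,  // horizontal position, left unrounded
    y: depthMm * pixelFactor        // vertical position, left unrounded
  };
}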

3.2.2 Mapping frequency to color

A color transfer function c(f), where f is the input frequency value, is to be devised. In accordance with what was said in subsection 2.6, there are two things to determine: the color space and the parametric curve. Hence, all possible transfer functions should be implementable in the following manner:

c(f) = g_1(f)\vec{e}_1 + \cdots + g_N(f)\vec{e}_N \qquad (4)

where g_i(f) are arbitrary functions of frequency, \vec{e}_i are the basis vectors of the color space and N is the dimensionality of the color space.

3.3 Heatmap calculation

The heatmap is generated by calculating a frequency value for each pixel at coordinates (u, v), accumulating a normally distributed frequency that decays with distance from every data point.

The frequency distribution of an individual data point over the entire image can be modeled after the bivariate Gaussian point spread function (PSF):

I(u, v, i) = I_0 e^{-\frac{(x_i - u)^2 + (y_i - v)^2}{2\sigma^2}} \quad \text{where} \quad I_0 = I(x_i, y_i) = f_i \qquad (5)

However, plain aggregation of the frequency distributions would result in additive artifacts such as those described by DeBoer (2015) (see figure 11 and compare to figure 12), where overlaps result in values of higher frequency than the original data points. This is because the above model does not account for overlap or compensate for it. This is demonstrably erroneous since, by that logic, two identical measurements of the same point would result in twice as much frequency.

Figure 11: Examples of heatmap behavior without normalization. The crosses represent the positions of the data points. A. shows two data points at d = 2r, B. shows two data points at d = r and C. shows a single data point measured twice.

Figure 12: Examples of heatmap behavior with normalization. The crosses represent the positions of the data points. A. shows two data points at d = 2r, B. shows two data points at d = r and C. shows a single data point measured twice.

The visualization tool could handle overlap in a similar manner to Zelenin et al. (2014) by normalizing with the sum of weights (see eq.1). However, the weights

are calculated in accordance with the model described in eq. 5 in the following manner (compare to eq. 2):

w(u, v, i) = e^{-\frac{(x_i - u)^2 + (y_i - v)^2}{2\sigma^2}} \qquad (6)

The equation for calculating pixel frequency values hence becomes:

f(u, v) = \frac{\sum_{i=0}^{n} w(u, v, i) f_i}{\sum_{i=0}^{n} w(u, v, i)} \qquad (7)

where (u, v) are pixel coordinates; for the other variables see table 2.
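As a concrete illustration, a direct and unoptimized implementation of eq. 7 for a single pixel could look like the following sketch. The value of sigma and the data point shape {x, y, f} are assumptions of the example, and the data point coordinates are assumed to already be expressed in pixels (eq. 3).

// Frequency value of the pixel at (u, v) as a normalized weighted sum
// over all data points, following eq. 6 and eq. 7.
function pixelFrequency(u, v, dataPoints, time, sigma) {
  var weightedSum = 0, weightSum = 0;
  for (var i = 0; i < dataPoints.length; i++) {
    var dx = dataPoints[i].x - u, dy = dataPoints[i].y - v;
    var w = Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
    weightedSum += w * dataPoints[i].f[time];
    weightSum += w;
  }
  return weightSum > 0 ? weightedSum / weightSum : 0;
}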

Afterwards, the final step necessary is to convert pixel frequency values into color through a transfer function as described in 3.2.2.

3.4 Animation

Changes in frequency values over time should be presented as an animation. An animation loop is needed to create an animation. The loop continuously replaces the image on screen. To replace the image two procedures are required: an update call and a draw call, executed in that order. The update call changes the contents of the image and the draw call displays the image on the screen.
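In its simplest conceptual form, one step of such a loop amounts to the following sketch, where update() and draw() are hypothetical placeholders for the two procedures described above.

// Conceptual animation step: update the image contents, then draw them.
// How the loop is actually scheduled in a browser is discussed in subsection 4.6.1.
function animationStep(frameNumber, update, draw) {
  update(frameNumber);  // recompute the heatmap for this point in time
  draw();               // display the updated image on screen
}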

3.5 Interaction

Rheingans (1992) proposed that interactivity and animation for exploration of quantitative multivariate data have a positive effect in terms of accuracy and user confidence in information gathering. The features employed by Rheingans (1992) will be included. In regards to animation, these consist of the ability to play and pause the animation, to step an arbitrary number of frames, and a customizable frame rate. In addition, the user will be able to jump to arbitrary points in time by either writing the frame number or using a slider.

While Rheingans (1992) focused on color mappings and animation for visual color-based systems, the visualization tool will offer wider user interaction. Other interactive features include loading the user's own test data, customizable visualization quality, customizable axes and choice of color scheme.

4 Results

Anivis was the name given to the software implementation of the visualization tool described in section 3 Method. Anivis successfully implements an interactive animated heatmap visualization of neural frequency data (figure 13 left). In addition to the heatmap, Anivis also visualizes the discrete frequency values of individual neurons for comparison (figure 13 right).

In this section, the implementation methods of Anivis will be presented in detail. For the sake of comparison, all figures of the visualizations are of the same frame and dataset.

Figure 13: Visualizations in Anivis. Neural frequency data is presented as animated interactive visualizations. A heatmap to the left and a scatter-plot to the right.

4.1 Tools and design choices

Anivis is intended to be used by a research group working on computers with a Windows operating system (version not specified). As a majority of operating systems (including Windows, OSX and Linux) include a web browser, Anivis was developed as a web application. This format ensures portability and enables execution without prior software installation on the system.

Anivis is written in JavaScript, a programming language supported in all modern web browsers (MozillaDN, 2016). Furthermore, the JavaScript environment enables the usage of third party libraries. One such library is D3.js (Data-Driven Documents), a JavaScript library for document manipulation based on data (Bostock, Ogievetsky, & Heer, 2011). Anivis uses D3.js to parse the data files, manipulate the DOM structure of the web application and create data bindings between DOM elements and data values.
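As an illustration only, a headerless CSV file of the kind described in subsection 3.1 could be loaded and parsed with the D3 3.x-era callback API (d3.text and d3.csv.parseRows) roughly as sketched below; the file name and the resulting record shape {x, y, f} are assumptions, not the exact code used in Anivis.

// Load and parse a headerless CSV of neuron recordings.
d3.text("neurons.csv", function (error, text) {
  if (error) throw error;
  var neurons = d3.csv.parseRows(text, function (row) {
    return {
      x: +row[0],                    // laterality (mm)
      y: +row[1],                    // depth (mm)
      f: row.slice(2).map(Number)    // firing frequency for each millisecond
    };
  });
  // The parsed array can then be bound to DOM elements with selection.data().
  console.log(neurons.length + " neurons loaded");
});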

4.2 Scatter-plot implementation

One of the main disadvantages of a heatmap visualization is that it is impossible to determine the exact origin of high value areas. High value areas may originate in either individual data points with extreme values or synchronized activity of several neighboring data points (see figure 14).

Figure 14: A. Heatmap; B. Scatter-plot; C. Heatmap with a superposed scatter-plot. All three images display the same dataset. Notice that even though the left and right blobs look similar in the heatmap, the scatter-plot reveals that the left blob originates in synchronized activity and the right one in the extreme value of the central data point.

In order to visualize the causal relationship between the data points and the heatmap, Anivis also creates a scatter-plot of the data points for comparison (figure 13 right). The scatter-plot is of the same scale as the heatmap, so in order to place the data points on the screen only the corresponding pixel coordinates are necessary; these are obtained by following the steps in 3.2.1 and then rounding to the nearest integer value. The scatter-plot even uses the same color transfer function with the same domain and range as the heatmap, which turns out to be suitable since the heatmap is normalized, and which is also desirable in order to facilitate comparison.
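A sketch of how the data points might be placed and colored on a 2D canvas, reusing the same color scale as the heatmap. The parameter transferFunction is assumed to return a CSS color string, and neuron.x / neuron.y are assumed to be in millimeters; the radius is arbitrary.

// Draw the scatter-plot: one filled circle per neuron, colored with the same
// transfer function as the heatmap. ctx is a CanvasRenderingContext2D.
function drawScatterPlot(ctx, neurons, time, pixelFactor, transferFunction) {
  neurons.forEach(function (neuron) {
    var u = Math.round(neuron.x * pixelFactor);  // laterality -> pixel column
    var v = Math.round(neuron.y * pixelFactor);  // depth -> pixel row
    ctx.fillStyle = transferFunction(neuron.f[time]);
    ctx.beginPath();
    ctx.arc(u, v, 4, 0, 2 * Math.PI);            // radius of 4 px is arbitrary
    ctx.fill();
  });
}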

4.3 Heatmap implementation

Two main algorithmic approaches were implemented to calculate the heatmap in the manner described in subsection 3.3. These two approaches were weighted sum (the naive approach) and deconvolution.

Although both approaches aim to visualize the data as the same graphical image, they do so through different methods. In the weighted sum approach, each pixel's frequency value is increased for every data point based on its distance to the data point and its frequency. The other approach is deconvolution. In contrast with convolution, where the color values of neighboring pixels are used to determine the value of each pixel, in deconvolution the values of individual data points are used to determine the color of adjacent pixels.

4.3.1 Weighted sum

The weighted sum is carried out as described by eq. 7 in subsection 3.3, by employing three nested loops. Nevertheless, this approach has an O(u_max · v_max · n) time complexity (where u_max is the visualization's width in pixels and v_max its height) and might be too slow for general usage.

However, this algorithm can be made practically employable through memoization. Since all data points remain at fixed positions through time, and the weight values depend only on the distance to the data point, all weights need only be calculated once for the entire animation. Furthermore, even the weight sums can be pre-calculated. This effectively eliminates all operations inside the divisor sums and replaces them with look-ups. In summary, while this optimization does not reduce the time complexity of any operations, it reduces the number of necessary operations inside the triple loop to a single one: a multiplication between the weight and the frequency of the data point.

An additional heuristic, which synergizes with memoization, is to not perform any calculations if the weights are small enough (w_i < ε). Since the value of the weights decreases exponentially, large portions of the image should be near zero (or zero due to float precision) and can be safely ignored. The reason these regions can be ignored without any loss in quality is that color spaces have a limited resolution.

The exact size of ε_max such that no quality degradation occurs cannot be universally determined. This is because it depends entirely on the resolution of the color space, but is also affected by the parametric color curve. Since how close ε is to ε_max has only a marginal effect on performance, a value of ε = 0.001 was used as a heuristic.
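The following sketch combines the memoization and the cutoff heuristic described above. Variable names and the surrounding data structures are assumptions of the example; note that storing every weight costs O(u_max · v_max · n) memory.

var EPSILON = 0.001;

// Precompute, once per dataset, each data point's weight at every pixel and
// the per-pixel weight sums. Weights below EPSILON are stored as 0 and skipped later.
function precomputeWeights(dataPoints, umax, vmax, sigma) {
  var weights = [], weightSums = [];
  for (var v = 0; v < vmax; v++) {
    weights.push([]); weightSums.push([]);
    for (var u = 0; u < umax; u++) {
      var ws = [], sum = 0;
      for (var i = 0; i < dataPoints.length; i++) {
        var dx = dataPoints[i].x - u, dy = dataPoints[i].y - v;
        var w = Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
        if (w < EPSILON) w = 0;
        ws.push(w); sum += w;
      }
      weights[v].push(ws); weightSums[v].push(sum);
    }
  }
  return { weights: weights, weightSums: weightSums };
}

// Per frame, only a look-up and a multiplication remain inside the triple loop.
function weightedSumFrame(dataPoints, time, memo, umax, vmax) {
  var frame = [];
  for (var v = 0; v < vmax; v++) {
    frame.push([]);
    for (var u = 0; u < umax; u++) {
      var acc = 0, ws = memo.weights[v][u];
      for (var i = 0; i < dataPoints.length; i++) {
        if (ws[i] !== 0) acc += ws[i] * dataPoints[i].f[time];
      }
      frame[v].push(memo.weightSums[v][u] > 0 ? acc / memo.weightSums[v][u] : 0);
    }
  }
  return frame;
}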

4.3.2 Deconvolution

A more sophisticated approach than weighted sum with memoization is to calculate the heatmap by taking advantage of dynamic programming instead of memoization. Since only the pixels close enough to the data points will have their frequency values significantly increased, and all distant values will be near zero or actually zero due to limitations in resolution, there is no need to traverse the entire canvas adding value for every data point.

Instead, an m × m Gaussian kernel K is calculated as follows:

K_{m,m} = \begin{pmatrix} k_{0,0} & k_{1,0} & \cdots & k_{m-1,0} \\ k_{0,1} & k_{1,1} & \cdots & k_{m-1,1} \\ \vdots & \vdots & \ddots & \vdots \\ k_{0,m-1} & k_{1,m-1} & \cdots & k_{m-1,m-1} \end{pmatrix} \quad \text{where} \quad k(i,j) = e^{-\frac{(i - \frac{m}{2})^2 + (j - \frac{m}{2})^2}{2\sigma^2}} \qquad (8)

Afterwards only the aggregate of individual contributions of each data point to neighboring pixels needs to be calculated as follows:

for (var d = 0; d < data.length; d++) {
  var datapoint = data[d];
  // For every number in the kernel matrix
  for (var j = 0; j < m; j++) {
    // Calculate the pixel's v coordinate
    // (data point position rounded to the nearest pixel)
    var v = Math.round(datapoint.y) - Math.floor(m / 2) + j;
    for (var i = 0; i < m; i++) {
      // Calculate the pixel's u coordinate
      var u = Math.round(datapoint.x) - Math.floor(m / 2) + i;
      // If that pixel is outside the image, do nothing
      if (u < 0 || v < 0 || u >= umax || v >= vmax) continue;

      // Otherwise add to that pixel's value
      backBuffer[v][u] += datapoint.f[time] * K[j][i];
    }
  }
}

After this process is complete, it is necessary to normalize all the values. The main challenge of normalizing the values is generating the sum of all the kernel values at intersections between matrices. However, this sum can be achieved through deconvolution by modifying only one line of code in the above algorithm:

// Otherwise add to that pixel's value
weightSum[v][u] += K[j][i];

Afterwards, all that is needed to normalize the back buffer is to divide its frequency values by the sum of all kernel weights at that pixel.
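A sketch of this normalization pass, using the same buffer names as in the listings above:

// Normalize the back buffer: divide each accumulated frequency by the
// corresponding sum of kernel weights (skipping pixels no kernel reached).
function normalize(backBuffer, weightSum, umax, vmax) {
  for (var v = 0; v < vmax; v++) {
    for (var u = 0; u < umax; u++) {
      if (weightSum[v][u] > 0) {
        backBuffer[v][u] /= weightSum[v][u];
      }
    }
  }
}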

The reason the calculation of kernel weights was not included in the pseudocode for deconvolution is that deconvolution benefits from memoization as well. Since the data points do not move, the sum of kernel weights need only be calculated once for the entire animation. Unlike the calculation of all weights in the weighted sum approach, which is slow in comparison to frame rendering in spite of having the same time complexity, calculating the kernel weight sum is equivalent to calculating a single extra frame when the animation is started.
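For completeness, the kernel of equation 8 can be built once at start-up as in the sketch below; m and sigma are parameters whose exact values in Anivis are not specified here.

// Build an m-by-m Gaussian kernel centred on the middle of the matrix (eq. 8).
function gaussianKernel(m, sigma) {
  var K = [], c = m / 2;
  for (var j = 0; j < m; j++) {
    K.push([]);
    for (var i = 0; i < m; i++) {
      var di = i - c, dj = j - c;
      K[j].push(Math.exp(-(di * di + dj * dj) / (2 * sigma * sigma)));
    }
  }
  return K;
}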

Deconvolution has a time complexity of O(m^2 n), since the calculation of pixel frequency values dominates over normalization of the values, which is only O(m^2). This offers much better performance in comparison to weighted sum (O(u_max · v_max · n)), as m is likely to be smaller than the visualization's width (u_max) and height (v_max). In fact, a kernel of dimensions u_max × v_max would be functionally equal to weighted sum.

While a smaller kernel improves performance, a kernel of insufficient size will lead to block artifacts in the heatmap visualization as the weights at the matrix edges will not be near zero (see figure 15).

Figure 15: Block artifacts created by deconvolution with a kernel of insufficient size (50x50).

On the other hand, deconvolution with a kernel of sufficient size is functionally identical to the weighted sum approach (see figure 16 and Duchowski et al., 2012).

Figure 16: A. Heatmap produced by using weighted sum; B. Difference image between A and C; C. Heatmap produced by deconvolution with a large kernel still smaller than the image size.

4.4 Color transfer functions

Four different transfer functions were implemented: grayscale, HSL Rainbow, Cubehelix and HCL Heat. Grayscale and HSL Rainbow are our own implementations of common color schemes; Cubehelix and HCL Heat are custom transfer functions and color schemes. HCL Heat we designed ourselves, while Cubehelix was designed by Green (2011).

Before going into individual transfer functions, it should be mentioned that users are not only able to choose the preferred transfer function but can also customize it by selecting the input domain bounds, where the floor is f_offset and the ceiling is f_ceiling. Any value outside the domain bounds is clamped to the closest domain bound. These bounds will be treated as numerical constants, although in practice they function as parameters. By default, when Anivis is booted, f_offset = 0 and f_ceiling = f_max.
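Clamping to the user-selected domain bounds can be sketched as follows; the function name is our own.

// Clamp an input frequency to the user-selected domain [fOffset, fCeiling]
// before it is passed to the transfer function.
function clampToDomain(f, fOffset, fCeiling) {
  return Math.min(Math.max(f, fOffset), fCeiling);
}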

4.4.1 Grayscale

Grayscale is a straight line from (0, 0, 0) to (1, 1, 1) in the RGB color space. Hence, the parametric color curve function is:

c(f) = \begin{pmatrix} \mathrm{Red} \\ \mathrm{Green} \\ \mathrm{Blue} \end{pmatrix} = \begin{pmatrix} \frac{f}{f_{\mathrm{ceiling}}} + f_{\mathrm{offset}} \\ \frac{f}{f_{\mathrm{ceiling}}} + f_{\mathrm{offset}} \\ \frac{f}{f_{\mathrm{ceiling}}} + f_{\mathrm{offset}} \end{pmatrix} \qquad (9)

While not optimal for visualization of data due to the reasons stated in 2.5.1, there are uses for the grayscale transfer function, such as monochrome printing.
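A sketch of a grayscale transfer function returning a CSS color string; here the clamped frequency is simply normalized to [0, 1] for the illustration, while Anivis' exact normalization is the one given in eq. 9, and all other transfer functions below follow the same pattern with different channel formulas.

// Grayscale transfer function: equal red, green and blue components.
function grayscale(f, fOffset, fCeiling) {
  var clamped = Math.min(Math.max(f, fOffset), fCeiling);
  var t = (clamped - fOffset) / (fCeiling - fOffset);
  var channel = Math.round(255 * t);
  return "rgb(" + channel + ", " + channel + ", " + channel + ")";
}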

Figure 17: Anivis heatmap through the grayscale transfer function.

4.4.2 HSL Rainbow

The HSL Rainbow was implemented as a circle arc around the cylindrical HSL color space (see fig. 5).

The HSL Rainbow arc has a height of lightness_max / 2, a radius of saturation_max and a range of [0, 3π/4]. Hence, the parametric color curve function is as follows:

c(f) = \begin{pmatrix} \mathrm{Hue} \\ \mathrm{Saturation} \\ \mathrm{Lightness} \end{pmatrix} = \begin{pmatrix} \frac{3\pi}{4} - \left(\frac{f}{f_{\mathrm{ceiling}}} + f_{\mathrm{offset}}\right)\frac{3\pi}{4} \\ 1 \\ 0.5 \end{pmatrix} \qquad (10)

While not optimal for visualization of data due to the reasons stated in 2.5.3, this transfer function was implemented in Anivis by request of the research group at the Department of Neuroscience.

Figure 18: Anivis heatmap through the HSL Rainbow transfer function.

4.4.3 Cubehelix

The transfer function of the Cubehelix color scheme is implemented through the cubehelix.js library; a plug-in library for D3.js that implements Cubehelix.

What distinguishes the Cubehelix color scheme from HSL Rainbow is that the Cubehelix color scheme is designed to account for perceptual differences of brightness between the colors. Hence, higher values are assigned brighter colors and lower values darker colors.

25 Figure 19: Anivis heatmap through the Cubehelix transfer function.

4.4.4 HCL Heat

HCL Heat is an ascending arc in the HCL color space. This transfer function was specifically created for Anivis and is meant to be a compromise between the HSL Rainbow and Cubehelix.

c(f) = \begin{pmatrix} \mathrm{Hue} \\ \mathrm{Chroma} \\ \mathrm{Luminance} \end{pmatrix} = \begin{pmatrix} \frac{13\pi}{9} - \left(\frac{f}{f_{\mathrm{ceiling}}} + f_{\mathrm{offset}}\right)\frac{11\pi}{9} \\ 1 \\ 0.05 + 0.8\left(\frac{f}{f_{\mathrm{ceiling}}} + f_{\mathrm{offset}}\right) \end{pmatrix} \qquad (11)

Not only are low values perceptually darker, but they are also cooler colors, in the same manner that high values are both brighter and warmer. Cool colors are those which are perceived to be receding or background, while warm colors are salient instead (Oxford Reference, n.d.).

26 Figure 20: Anivis heatmap through the HCL Heat transfer function.

4.5 Rendering

Anivis uses two buffers for calculating the visualization each frame, a back buffer and a display buffer. The back buffer is used to store calculated pixel frequency values while the display buffer stores pixel color values. Both have a memory complexity of Θ(u_max · v_max), as the number of pixels grows quadratically in relation to canvas size, but canvas width and height need not be equal.

The back buffer is updated through the update call. The update call calculates each pixel's frequency value as described in subsection 4.3 and stores them in the back buffer. Each frame is calculated on request and when calculating a new frame, the back buffer is overwritten. The display buffer is updated through the draw call. A draw call converts every frequency value in the back buffer to color and then stores the colors in the display buffer. When draw terminates, the display buffer contains the computed image data employed by the HTML5 canvas to display the graphics on screen.
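A sketch of such a draw call using the standard HTML5 canvas ImageData API. The helper frequencyToRGB is a hypothetical numeric variant of the transfer function that returns {r, g, b} components in [0, 255].

// Convert the back buffer of frequencies to colors and show them on the canvas.
function drawCall(canvas, backBuffer, frequencyToRGB) {
  var ctx = canvas.getContext("2d");
  var image = ctx.createImageData(canvas.width, canvas.height);  // the display buffer
  for (var v = 0; v < canvas.height; v++) {
    for (var u = 0; u < canvas.width; u++) {
      var color = frequencyToRGB(backBuffer[v][u]);
      var k = (v * canvas.width + u) * 4;   // 4 bytes per pixel: RGBA
      image.data[k] = color.r;
      image.data[k + 1] = color.g;
      image.data[k + 2] = color.b;
      image.data[k + 3] = 255;              // fully opaque
    }
  }
  ctx.putImageData(image, 0, 0);
}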

27 4.6 Animation

4.6.1 Animation loop

Anivis represents changes in firing frequency over time as animation by updating the image data in an HTML5 canvas. To create the animation, static images are displayed in quick succession in the canvas through the processes described in subsection 4.5 rendering; we will refer to these individual static images as frames.

Frames are drawn on a request basis so as to enable the user to freely move to arbitrary points in time both during and outside any playing animation. To draw a frame, the frequencies for each pixel in the frame must be calculated and stored in the back buffer through an update call, and then converted to color and sent to the display buffer through the draw call.

Normally, animation could be implemented by a simple animation loop (as described in subsection 3.4) or by setting up a callback function to execute at regular intervals of time. However, both approaches are flawed, since web page scripts are executed sequentially and in the same thread as the user interface (Caballero, 2013; Resig, 2008). The first approach would permanently lock the user interface, as the scheduler waits for the infinite animation loop to finish. The second approach, implemented through setInterval(), was functional enough to be used in the first working versions of Anivis, even though it is not technically correct.

The reason setInterval() is not correct with regard to the number of frames per second (FPS) is that the frame rate will not match the specified value. This is because setInterval() does not implement the intervals by executing the callback on a separate thread but through scheduling in the current thread, and hence, if blocked from immediate execution, the callback will be postponed until the next possible execution point (Resig, 2008).

Anivis implements the animation loop through requestAnimationFrame(), which takes a callback as argument and sets it up for execution just before repainting the contents of the window (i.e. the browser's draw call). However, requestAnimationFrame() only sets up the callback for the next repaint. Hence, the animation function must repeatedly re-schedule itself to be executed before each frame.
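A sketch of such a self-rescheduling loop, including the elapsed-time check described in subsection 4.6.2 below. The target frame rate and the updateAndDraw callback are assumptions of the example.

// Self-rescheduling animation loop built on requestAnimationFrame, with a
// simple frame rate cap. updateAndDraw is a hypothetical function that
// computes and displays the next frame.
var targetFps = 30;
var lastFrameTime = 0;

function animationLoop(timestamp) {
  if (timestamp - lastFrameTime >= 1000 / targetFps) {
    lastFrameTime = timestamp;
    updateAndDraw();                    // request the next frame of the heatmap
  }
  requestAnimationFrame(animationLoop); // re-schedule before the next repaint
}

requestAnimationFrame(animationLoop);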

4.6.2 Frame rate capping

Since Anivis implements the animation loop on a callback-by-request basis, frame rate capping is handled inside the callback itself: a timer compares the time elapsed since the previous frame was requested with the designated frame interval. If enough time has passed, the animation function will request a new frame to be processed and animated, and save the time at which the new frame was requested. If the designated time has not elapsed, the function will do nothing. Regardless, each call of the animation function will re-schedule itself to be called back with requestAnimationFrame().

4.7 Interactivity

Anivis is an interactive tool that allows the user to control the visualization as well as which data to visualize. Anivis is able to visualize any data in the format described in subsection 3.1 and can change which data to visualize dynamically. The animation can be controlled by either a play-pause feature or by directly selecting the desired frame by number. The frame rate can also be changed dynamically, as can the color scale for both the heatmap and the scatter-plot.

5 Discussion

This section will provide a critical analysis of the design and implementations presented in section 4 Results.

5.1 Heatmaps in Anivis

While this project is partially based upon the heatmap produced by Zelenin et al. (2014), there are fundamental differences between the heatmap visualizations in that article and those produced by Anivis.

Format-wise, Zelenin et al. (2014) groups frequency values into bins and averages them in time through a sliding average. However, Anivis need not aggregate lapses of time as it can display changes in frequency as animation.

Another difference in terms of calculations is that Anivis employs a different kind of Gaussian weighting function (compare eq. 2 to eq. 6), modeled after the Gaussian PSF (point spread function). Nevertheless, both weighting functions still produce indistinguishable heatmap images (see figure 21).

Figure 21: A. Heatmap produced by Anivis; B. Difference image between A and C; C. Heatmap from a modified version of Anivis that employs the same Gaussian weight formula as Zelenin.

5.2 Image quality of deconvolution

It was shown in 4.3.2 that a large kernel will produce identical results to the weighted sum approach (fig. 16) and a small kernel results in visual artifacts (fig. 15). However, what was not mentioned is what happens for kernel sizes in between. Effectively, there is an interesting middle ground where no visual artifacts occur but the image still does not entirely match the weighted sum result (fig. 22).

Figure 22: A. Heatmap produced by using weighted sum; B. Difference image between A and C; C. Heatmap produced by deconvolution with a kernel smaller than the image size (150x150).

This middle ground is, at least perceptually, indistinguishable from larger kernels or the weighted sum approach. This is significant because any reduction in the size of the kernel will significantly increase performance in terms of frame rate, as the number of calculations is directly proportional to the product of the number of data points and the kernel's height and width, but not to the image size.

This means that while it might be possible to calculate the minimum kernel size that still produces images identical to weighted sum, that kernel size might still not be optimal. This is because the only purpose of the heatmap is to visualize data and both images are arguably equally efficient to that end. For that reason we suggest that if an optimal kernel size is to be found, it should be through perceptual studies rather than mathematical calculations.

Ironically, however, a fixed optimal kernel size might not be optimal in itself. This is because while users seeking to extract individual images from the application might require image fidelity, users interested in creating videos or fluid interaction might rather prioritize frame rate over image quality, even if there are perceptible differences in image quality. Instead, we argue that the user should be able to interactively choose the quality level according to their needs.

5.3 Scatter-plot as a complement to heatmap visualization

As mentioned in subsection 4.2, Anivis implements a scatter-plot as a way to complement the heatmap. While heatmaps may be useful for identification of clustering in the data, heatmaps have effectively no way of distinguishing between data points with extreme values and clusters in the data, as shown in 4.2. This raises the question whether or not the heatmap itself is sufficient as a visual structure for the purpose of this kind of data analysis.

The scatter-plot was originally implemented in Anivis in order to compare the data points to the heatmap. As development continued, it became apparent that the scatter-plot was necessary in order to make the images produced by Anivis unambiguous, both for the intended end user and for the purposes of this report.

However, having both visualizations side by side may make it difficult to pinpoint specific locations in both the heatmap and the scatter-plot. A possible solution is to superpose the scatter-plot as an overlay on the heatmap (as was done in figure 14 C) so that any region can be accurately related to the data points in the canvas.

5.4 Future research

Anivis shows it is possible to implement a heatmap through CPU calculations. However, this might not be the best approach, since a GPU (Graphical Processing Unit) approach would likely outperform the current CPU implementation. This is because the calculations are performed pixel-wise and hence can be easily parallelized, and while current processors have multiple cores in the single digits, GPUs may contain up to thousands of cores (What is GPU Computing?, n.d.).

6 Conclusion

Anivis is a successful implementation of animated heatmaps of spatially distributed neural frequency data. A heatmap is a continuous representation of data, whereas the data gathered consists of discrete measurements. Therefore, the processing of the data consists of interpolating and extrapolating frequency values for every pixel in the visualization. Once calculated, these values must additionally be converted to color.

During development it was shown that deconvolution was the most efficient approach for spatial interpolation of frequency values, both theoretically and practically. Benchmarking was deemed unnecessary since the two methods had a large difference in execution time.

The implementation of the scatter-plot was an appropriate design choice as it addresses one of the main weaknesses of the visual structure of heatmaps. Since there is no way to differentiate clustering from extreme values, we would recommend that other heatmaps of weighted data points superimpose a scatter-plot.

In terms of future research, it would probably be of interest to parallelize both suggested approaches, weighted sum and deconvolution, as a GPU implementation and to determine an optimal kernel size through perceptual studies.

References

Akers, W. (2015). Visual resource monitoring for complex multi-project environments. International Journal of System of Systems Engineering, 6(1/2), 112. Retrieved from http://dx.doi.org/10.1504/IJSSE.2015.068814 doi: 10.1504/ijsse.2015.068814

Anscombe, F. J. (1973, feb). Graphs in statistical analysis. The American Statistician, 27(1), 17. Retrieved from http://dx.doi.org/10.2307/2682899 doi: 10.2307/2682899

Ayers, G. R., & Dainty, J. C. (1988, jul). Iterative blind deconvolution method and its applications. Optics Letters, 13(7), 547. Retrieved from http://dx.doi.org/10.1364/OL.13.000547 doi: 10.1364/ol.13.000547

Bostock, M., Ogievetsky, V., & Heer, J. (2011). D3: Data-Driven Documents. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis). Retrieved from http://vis.stanford.edu/papers/d3

Brewer, C. A. (1994, may). Guidelines for use of the perceptual dimensions of color for mapping and visualization. In J. Bares (Ed.), Color hard copy and graphic arts III. SPIE-Intl Soc Optical Eng. Retrieved from http://dx.doi.org/10.1117/12.175328 doi: 10.1117/12.175328

Caballero, L. (2013). Dev.Opera — Better Performance With requestAnimationFrame. Retrieved 2016-4-17, from https://dev.opera.com/articles/better-performance-with-requestanimationframe/

Card, S. K., Mackinlay, J. D., & Shneiderman, B. (1999). Readings in information visualization: using vision to think. Morgan Kaufmann.

Carlis, J. V., & Konstan, J. A. (1998). Interactive visualization of serial periodic data. In Proceedings of the 11th annual ACM symposium on user interface software and technology - UIST '98. Association for Computing Machinery (ACM). Retrieved from http://dx.doi.org/10.1145/288392.288399 doi: 10.1145/288392.288399

DeBoer, M. (2015). Understanding the Heat Map. Cartographic Perspectives(80), 39–43. Retrieved from http://www.cartographicperspectives.org/index.php/journal/article/view/cp80-deboer/1420 doi: 10.14714/CP80.1314

Duchowski, A. T., Price, M. M., Meyer, M., & Orero, P. (2012). Aggregate gaze visualization with real-time heatmaps. In Proceedings of the symposium on eye tracking research and applications - ETRA '12. Association for Computing Machinery (ACM). Retrieved from http://dx.doi.org/10.1145/2168556.2168558 doi: 10.1145/2168556.2168558

Fetcho, J., & O'Malley, D. M. (1995). Visualization of active neural circuitry in the spinal cord of intact zebrafish. Journal of Neurophysiology, 73(1), 399–406.

Goldstein, E. (2014). Sensation and perception. Belmont, CA: Wadsworth, Cengage Learning.

Green, D. (2011). A colour scheme for the display of astronomical intensity images. arXiv preprint arXiv:1108.5083. Retrieved from http://astron-soc.in/bulletin/11June/289392011.pdf

Handbook of laboratory and diagnostic tests. (2013). Farlex and Partners. Retrieved from http://medical-dictionary.thefreedictionary.com/PET+scan+of+the+brain

Herrity, K., Raich, R., & Hero III, A. O. (2008, feb). Blind reconstruction of sparse images with unknown point spread function. In C. A. Bouman, E. L. Miller, & I. Pollak (Eds.), Computational imaging VI. SPIE-Intl Soc Optical Eng. Retrieved from http://dx.doi.org/10.1117/12.779253 doi: 10.1117/12.779253

Honeycutt, C. F., Gottschall, J. S., & Nichols, T. R. (2009, mar). Electromyographic Responses From the Hindlimb Muscles of the Decerebrate Cat to Horizontal Support Surface Perturbations. Journal of Neurophysiology, 101(6), 2751–2761. Retrieved from http://dx.doi.org/10.1152/jn.91040.2008 doi: 10.1152/jn.91040.2008

Hsu, L.-J., Zelenin, P. V., Orlovsky, G. N., & Deliagina, T. G. (2012, apr). Effects of galvanic vestibular stimulation on postural limb reflexes and neurons of spinal postural network. Journal of Neurophysiology, 108(1), 300–313. Retrieved from http://dx.doi.org/10.1152/jn.00041.2012 doi: 10.1152/jn.00041.2012

Huang, M. L. (2013). Innovative Approaches of Data Visualization and Visual Analytics. IGI Global.

Kindlmann, G. L., Weinstein, D. M., Jones, G. M., Johnson, C. R., Capecchi, M. R., & Keller, C. (2005). Practical vessel imaging by computed tomography in live transgenic mouse models for human tumors. Molecular Imaging, 4(4), 417.

Kreit, E., Mathger, L. M., Hanlon, R. T., Dennis, P. B., Naik, R. R., Forsythe, E., & Heikenfeld, J. (2012, sep). Biological versus electronic adaptive coloration: how can one inform the other? Journal of The Royal Society Interface, 10(78), 20120601–20120601. Retrieved from http://dx.doi.org/10.1098/rsif.2012.0601 doi: 10.1098/rsif.2012.0601

Leong, J. (2006). Number of Colors Distinguishable by the Human Eye. Retrieved 2016-04-18, from http://hypertextbook.com/facts/2006/JenniferLeong.shtml

Ludwig, J. (2007). Image Convolution. Retrieved from http://web.pdx.edu/~jduh/courses/Archive/geog481w07/Students/Ludwig ImageConvolution.pdf

MozillaDN. (2016, apr 1). JavaScript. https://developer.mozilla.org/en-US/docs/Web/JavaScript. (Accessed: 2016-04-16)

Muto, A., Ohkura, M., Abe, G., Nakai, J., & Kawakami, K. (2013, feb). Real-Time Visualization of Neuronal Activity during Perception. Current Biology, 23(4), 307–311. Retrieved from http://dx.doi.org/10.1016/j.cub.2012.12.040 doi: 10.1016/j.cub.2012.12.040

Neovision Hypersystems, I. (1998, 03 3). Heatmaps (Trademark No. US 75263259). Retrieved from http://tsdr.uspto.gov/documentviewer?caseId=sn75263259&docId=ORC20060126060905#docIndex=4&page=1

Oxford Reference. (n.d.). colour temperature. Retrieved from //www.oxfordreference.com/10.1093/oi/authority.20110803095625612

Peterson, G. (2009). GIS Cartography. Informa UK Limited. Retrieved from http://dx.doi.org/10.1201/9781420082142 doi: 10.1201/9781420082142

Pfister, H., Lorensen, B., Bajaj, C., Kindlmann, G., Schroeder, W., Avila, L., ... Lee, J. (2001). The transfer function bake-off. IEEE Comput. Grap. Appl., 21(1), 16–22. Retrieved from http://dx.doi.org/10.1109/38.920623 doi: 10.1109/38.920623

PhotonStarTecnology Ltd. (2016). "How LEDs produce white light". Retrieved 2016-5-11, from http://www.photonstartechnology.com/learn/how leds produce white light

Resig, J. (2008). How JavaScript Timers Work. Retrieved 2016-4-17, from http://ejohn.org/blog/how-javascript-timers-work/

RGB color model. (2008). Retrieved 2016-05-08, from http://www.oxfordreference.com/view/10.1093/oi/authority.20110803100418271

Rheingans, P. (1992). Color change, and control of quantitative data display. In Visualization, 1992. Visualization '92, Proceedings., IEEE Conference on (pp. 252–259). Institute of Electrical & Electronics Engineers (IEEE). Retrieved from http://dx.doi.org/10.1109/visual.1992.235201 doi: 10.1109/visual.1992.235201

Sarifuddin, M., & Missaoui, R. (2005). A new perceptually uniform color space with associated color similarity measure for content-based image and video retrieval. In Proc. of ACM SIGIR 2005 Workshop on Multimedia Information Retrieval (MMIR 2005) (pp. 1–8).

Spittler, J. F., Wortmann, D., von During, M., & Gehlen, W. (2000, jun). Phenomenological diversity of spinal reflexes in brain death. Eur J Neurol, 7(3), 315–321. Retrieved from http://dx.doi.org/10.1046/j.1468-1331.2000.00062.x doi: 10.1046/j.1468-1331.2000.00062.x

Tory, M., & Moller, T. (2004, jan). Human factors in visualization research. IEEE Trans. Visual. Comput. Graphics, 10(1), 72–84. Retrieved from http://dx.doi.org/10.1109/tvcg.2004.1260759 doi: 10.1109/tvcg.2004.1260759

What is GPU Computing? | High-Performance Computing | NVIDIA. (n.d.). Retrieved 2016-05-10, from http://www.nvidia.com/object/what-is-gpu-computing.html

Yau, N. (2014). FlowingData.com Data Visualization Set. John Wiley & Sons.

Zeileis, A., Hornik, K., & Murrell, P. (2009). Escaping RGBland: selecting colors for statistical graphics. Computational Statistics & Data Analysis, 53(9), 3259–3270.

Zelenin, P. V., Hsu, L.-J., Lyalka, V. F., Orlovsky, G. N., & Deliagina, T. G. (2014, nov). Putative spinal interneurons mediating postural limb reflexes provide a basis for postural control in different planes. European Journal of Neuroscience, 41(2), 168–181. Retrieved from http://dx.doi.org/10.1111/ejn.12780 doi: 10.1111/ejn.12780