
The Pennsylvania State University

The Graduate School

College of Earth and Mineral Sciences

VISUAL PERSPECTIVES AND DECISION-MAKING

IN GEOSPATIAL UNCERTAINTY

A Dissertation in

Geography

by

Jennifer Smith Mason

© 2018 Jennifer Smith Mason

Submitted in Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy

May 2018

The dissertation of Jennifer Smith Mason was reviewed and approved* by the following:

Alexander Klippel, Dissertation Adviser, Professor of Geography, Chair of Committee

Alan MacEachren, Professor of Geography

Stephen Matthews, Professor of Sociology, Anthropology, and Demography; Courtesy Appointment in Geography

David W. Titley, Professor of Practice, Department of Meteorology

Jason Dykes, Special Member, Professor, School of Mathematics, Computer Science & Engineering, Department of Computer Science at City, University of London

Cynthia Brewer, Professor of Geography, Head of the Department of Geography

*Signatures are on file in the Graduate School.


Abstract

This dissertation addresses visual perspectives and decision-making in geospatial uncertainty and fills a gap in current research by presenting three articles in a holistic approach spanning different areas of the field. The first paper presents an overview of the entire field of geospatial uncertainty through the creation of a typology and its visualization: the visual summary. The typology and its visualization, developed and refined based on feedback from other experts in the field, highlight smaller research domains within the entire realm and show some of the major relationships between them. The overall usefulness of the visual summary lies in its ability to summarize both individual papers at a glance and the field as a whole. This visual summary extends existing approaches to taxonomies by giving users a quick visual overview of the relevant topics in a research area. The second paper, an introductory article for a special issue on visually-supported reasoning under uncertainty, utilizes the visual summary from the first paper to show both the typology and the utility of its graphic representation for quickly summarizing each article within the field as a whole and for showing each article's unique contributions to the different sub-domains. It also allows direct comparison between the three articles published in the special issue and visual identification of research topics in the field not yet covered. The final paper examines various factors, including individual differences and map and scenario characteristics, to identify their relationship to evacuation decisions. This was conducted through an iterative process in which results from the first study informed the two subsequent studies, composing a more synergistic and comprehensive understanding of decision-making under uncertainty for an approaching hurricane.
Study one showed that having more certainty in flooding correlated with higher evacuation rates, with marginal significance. Participants also paid attention to the flood height category, stating they would choose to evacuate most in the highest flood height zone closest to the ocean and least in the lowest (blue) flood height zone found farthest from the ocean. Evacuation rates were also higher overall in a mild flood scenario than in a more severe scenario. This led to further exploration, which revealed that because the maps used real data, in the mild flood scenario a lower flood height zone was adjacent to the ocean instead of the highest flood height zone, potentially influencing the results. In study two, the maps in the mild flood scenario were redrawn to close the gap in the highest flood zone so that only this zone occurred adjacent to the ocean. The data showed that participants again evacuated more in the higher flood zones and mild flood scenarios. Study three attempted to disentangle how distance to the ocean and the flood height zones impacted decisions. The results revealed that participants chose to evacuate more when closer to the flood source (i.e., the ocean), and once farther from the source, they used flood height as a strategy for choosing when to evacuate.


Table of Contents

List of Figures ...... vi
List of Tables ...... viii
Acknowledgments ...... ix
Introduction ...... 1
Chapter 1 ...... 4
    Domains of Uncertainty Visualization Research: A Visual Summary Approach ...... 5
    Introduction ...... 5
    Visual Summaries ...... 6
    Domains of Uncertainty Visualization Research ...... 8
    Example Application and Analysis ...... 18
    Validation of Visual Summaries ...... 20
    Conclusions and Outlook ...... 24
    Appendix A. Papers Classified Under the Uncertainty Domains Visualization ...... 37
Chapter 2 ...... 41
    Approaching Spatial Uncertainty Visualization to Support Reasoning and Decision-Making ...... 42
    Introduction ...... 42
    Outlook ...... 52
    References ...... 53
Chapter 3 ...... 55
    Visualizing Storm Surge: A Holistic Approach for Assessing Factors in Uncertain Storm Surge Evacuation Decisions ...... 56
    Introduction ...... 57
    Previous Research ...... 61
    Study 1 ...... 63
    Results ...... 69
    Study 2 ...... 75
    Results ...... 77
    Study 3 ...... 82
    Results ...... 86
    Discussion ...... 93
    Outlook ...... 95
    References ...... 98
Overall Conclusion and Outlook ...... 101
References ...... 105


List of Figures

1.1 Workflow for the modified Affinity Diagramming Method. 11
1.2 Domains of uncertainty visualization research divided into three main categories. (A) Blue: Visualization Techniques, (B) green: User Effects, and (C) purple: Stimulus Effects. The graphic is split into multiple parts for text legibility. 17
1.3 Uncertainty Domains Process. 18
1.4 Visual summary for Finger and Bisantz (2002). 21
1.5 Summary of research papers that were analyzed and visually summarized, that is, a total of 40 papers. To create this graphic we simply summed over all visual summaries, creating a number for each tertiary domain that reflects how often it has been addressed in research papers. 24
1.6 Visual summaries for Finger and Bisantz (2002), one for each experiment (experiment one on the left and experiment two on the right). 28
2.1 Visual Summary of Salap-Ayca and Jankowski (2016). 49
2.2 Visual Summary of Ruginski et al. (2016). 51
2.3 Visual Summary of Riveiro (2016). 52
3 Visual summary graphic showing the domains covered in the third article of the dissertation. 56
3.1 Template of the National Hurricane Center flood map released in 2014. 60
3.2 Example of storm surge flood map with point A as the location for potential evacuation. 66
3.3 Two maps showing the difference in severity of the flood (left: mild, right: severe). 67
3.4 Four maps displaying the placement of the location within each of the four flood heights. Blue: up to 3 feet above ground, yellow: greater than 3 feet above ground, orange: greater than 6 feet above ground, and red: greater than 9 feet above ground. 68
3.5 Two maps revealing the different coastline orientations (left: horizontal, right: vertical). 68
3.6 Six maps indicating the random placement of 6 points in one flood height condition. 69
3.7 Bar chart of the percent evacuated for each flood zone and severity (mild versus severe) for the 10% survey. 71
3.8 Bar chart of the percent evacuated for each flood zone and severity (mild vs. severe) for the 30% survey. 74
3.9 The mild flood scenario graphic on the left shows the discontinuous band of red, the highest flood height of 9+ feet. The severe scenario on the right displays a red flood height continuous along the coastal area. 76
3.10 Old (left) and newly redrawn flood maps (right), enclosing the red zone to serve as a single continuous band along the ocean and a buffer between the ocean and orange flood zone. 77
3.11 Bar chart of the percent evacuated for each flood zone and severity (mild versus severe) for the 10% survey. 80
3.12 Bar chart of the percent evacuated for each flood zone and severity (mild vs. severe) for the 30% survey. 82
3.13 Revised map with the straighter coastline. 84
3.14 Map showing 6 points equally spaced apart, moving away from the ocean on each of the four map orientations. Actual points used in the study were randomly chosen and placed on the axis parallel to the coastline. 85
3.15 Flood maps showing the mild (top) and more severe (below) scenarios as well as the 6 points placed at equally spaced distances moving farther from the ocean (left to right). Points at distance 1 (closest to ocean) and distance 4 have locations in two different flood height categories (red and orange, and yellow and orange respectively). 86
3.16 Bar chart of the percent evacuated for each distance ranging from 1 (closest to ocean) to 6 (farthest from ocean). 89
3.17 Bar chart of the percent evacuated for each flood height category by color (red, orange, and yellow respectively). 90
3.18 Bar chart of the percent evacuated for each distance ranging from 1 (closest to ocean) to 6 (farthest from ocean) and flood height category by color (red, orange, and yellow respectively). 92
3.19 Maps showing different flood height zones at distance 1 (top, red and orange respectively) and distance 4 (bottom, orange and yellow respectively). 93


List of Tables

1.1 Domains of uncertainty visualization research (User Effects, Visualization Techniques, and Stimulus Effects), their tertiary domains, and brief descriptions of each. 12
3.1 Evacuation results for different flood heights with a 10% probability. Each value indicates the number of times the participants chose to evacuate or not with their respective percentages overall among each scenario, both mild and severe as well as overall. 70
3.2 Evacuation results for different flood heights with a 30% probability. Each value indicates the number of times the participants chose to evacuate or not with their respective percentages overall among each scenario, both mild and severe as well as overall. 73
3.3 Evacuation results for different flood heights with a 10% probability. Each value indicates the number of times the participants chose to evacuate or not with their respective percentages overall among each scenario, both mild and severe as well as overall. 79
3.4 Evacuation results for different flood heights with a 30% probability. Each value indicates the number of times the participants chose to evacuate or not with their respective percentages overall among each scenario, both mild and severe as well as overall. 81
3.5 Breakdown of the placement of the different points including their distance from the ocean, the flood height category they fall within, the number of points randomly placed at each distance, and in which scenario they occur (mild or severe). 86
3.6 Evacuation results for each distance with a 10% probability. Distances ranged from 1 (closest to ocean) to 6 (farthest from ocean). Each value indicates the number of times the participants chose to evacuate or not with their respective percentages. 88
3.7 Evacuation results for different flood heights with a 10% probability. Each value indicates the number of times the participants chose to evacuate or not with their respective percentages. 91


Acknowledgments

Before I began this journey, I don’t think that I ever believed I could actually attain a Ph.D. When I started in the geography program at Penn State, I was intimidated and overwhelmed. From the very beginning, my advisor Alex Klippel spent as much time as I needed to give thoughtful feedback and advice. It was a very long seven years, with a new marriage, two new kids, and a move across the country (twice) in between. Without my advisor, I truly don’t know if I could have finished. I was also incredibly fortunate to have an amazing committee. Thanks to Alan MacEachren for his invaluable feedback, knowledge, and time. Thanks to Stephen Matthews for an incredible summer internship in Chicago, an exciting year of research in IGERT, and having such a warm and welcoming attitude. Thanks to Dave Titley for his great ideas, conversations, and connections in hurricane research. Finally, thanks to Jason Dykes for being an extremely fun, engaged, and thoughtful mentor at City University London. To my lifelong Penn State friends, thank you for helping make the rough times amazing and fun. Finally, thank you to my family. The endless calls to my parents, who gave me so much love and support on this journey, really helped make this a positive experience. Thank you to my husband, Travis Mason, who was there every step of the way to believe in me and help push me to finally finish this dissertation. Finally, thank you to my two daughters, both born during my time as a Ph.D. student. You put everything in perspective, and you’ve taught me to enjoy life more and to not stress about the small things. This dissertation is for you. The following are acknowledgments for the included publications and grant source. Mason, J., Retchless, D., and Klippel, A. (2016). Domains of Uncertainty Visualization Research: A Visual Summary Approach. Cartography and Geographic Information Science. 44(4), 296-309.
This is the author’s accepted manuscript of an article published as the version of record in Cartography and Geographic Information Science on 8th March 2016. http://www.tandfonline.com/ https://doi.org/10.1080/15230406.2016.1154804

Mason, J., Klippel, A., Bleisch, S., Slingsby, A., and Deitrick, S. (2016). Approaching Spatial Uncertainty Visualization to Support Reasoning and Decision-Making. Spatial Cognition and Computation: An Interdisciplinary Journal. Special Issue on Visually-Supported Reasoning with Uncertainty. 16(2), 97-105. This is an Accepted Manuscript of an article published by Taylor & Francis in Spatial Cognition and Computation: An Interdisciplinary Journal on 21st March 2016, available online: http://www.tandfonline.com/10.1080/13875868.2016.1138117

This work was supported by the National Science Foundation under IGERT Award #[DGE- 1144860], Big Data Social Science, and Pennsylvania State University. The findings and conclusions do not necessarily reflect the view of the funding agency.


Introduction

“A map says to you, ‘Read me carefully, follow me closely, doubt me not.’ It says, ‘I am the earth in the palm of your hand. Without me, you are alone and lost.’”

(Markham, 2012)

Maps assist users in comprehending spatial ideas and relationships. As Harley (1989) implies, maps not only frame users' comprehension of the world, but also shape their senses of different places and their constructions of space. Given the power maps hold over their users, mapmakers must take extreme caution in their choice of graphic representation in order to avoid user misinterpretation of geographic information. However, Harley argues that a common misconception among map users is that all maps represent reality. This naivety ignores the mapmaking process and fails to recognize that maps represent only a selection from reality.

Harley further states that “while the map is never the reality, in such ways it helps to create a different reality” (p. 14). Even when map users are aware of the process of mapmaking, there is still a lack of critical inquiry into the representation of map features.

An early map showing world regions from Herbertson (1905) appears as if he inventoried every inch of the world and created a map delineating whole regions with distinct areas.

Herbertson himself acknowledges in his paper that the existing state of knowledge is not yet complete and that the boundaries are approximate. Upon critical reflection, a full survey of the earth in 1905 was clearly impossible. Despite this, Herbertson fails to visually represent any areas of uncertainty in his maps. These maps, the major medium of communication for the underlying spatial information, taken without the accompanying text, have the potential to depict a false level of data confidence through the omission of areas of uncertainty and, in turn, falsely shape map users’ perceptions of the world. One cannot expect all map users to read accompanying text explaining that there is uncertainty. Muehrcke (1974) explains that maps are simply “cartographic caricatures” of the real world and that through generalization we must include and omit many features. He states that boundary lines, for instance, can be related more to decision criteria than to some specific aspect of the geographic distribution.

While nature itself may have transition zones, he asserts that maps have more definitive boundaries (p. 14). In the past few decades, however, many cartographers have shown there are numerous ways to represent uncertainty in visual form. This lack of uncertainty representation spans maps from all time periods, continuing today, and crosses all sub-disciplines of geography and beyond, including many cases where seeing the uncertainty would benefit map users.

Over 80 years after Herbertson’s article was published, Harley (1989) identifies the issues in the traditional assumption of an objective cartography, exemplified by work such as Herbertson’s maps in “The Major Natural Regions: An Essay in Systematic Geography” (1905).

Far from being an objective science, cartography involves subjective choices about which features to represent on a map, how to represent them, and, furthermore, which features to omit. Moreover, a vast majority of cartographers are still representing phenomena without visualizing the uncertainty of the features they do choose to portray on a map. Although Harley is not the first to acknowledge a subjective cartography, his paper moves cartography towards greater cartographic transparency.

Unfortunately, visualizing uncertainty is often only included in research dealing specifically with the topic, whereas its inclusion would be beneficial in many mapmaking endeavors. A failure to adequately represent uncertainty has the potential to create a false sense of ‘truth’ for map users, further exacerbating naïve geographies.

This dissertation addresses visual perspectives and decision-making in geospatial uncertainty and fills a gap in current research by presenting three articles in a holistic approach spanning different areas of the field. The first paper explores the different domains of uncertainty visualization. It details the development of a visual summary to both show the research field as a whole and allow other researchers to summarize their own papers in the field within a single visualization. The second paper, an introductory article for a special issue on visually-supported reasoning under uncertainty, shows the utility of the visual summary by applying it to other research articles in the issue. The final paper presents a case study on decision-making under uncertainty for an approaching hurricane in an applied, real-world setting.


Chapter 1

The first paper in the dissertation presents an overview of the entire field of geospatial uncertainty visualization. A first and important step in any research field is to situate the body of research that has been done and identify potential future research directions. The typology, developed by the authors and refined based on feedback from other experts in the field, highlights smaller research domains within the entire realm and shows some of the major relationships between them. The overall usefulness of the visual summary lies in its ability to summarize both individual papers at a glance and the entire field. As for my contribution, Dr. David Retchless and I met regularly over a couple of years to discuss uncertainty visualization papers as part of a reading group. We regularly discussed a typology of the field as a whole and worked together to reach a consensus on the final typology. I identified the various domains in the typology and presented the findings at two uncertainty visualization conference workshops I organized: COSIT 2013 and GIScience 2014. Using this feedback as well as feedback from various colleagues and my Ph.D. committee, I created and edited a graphic to serve as the visual summary, along with the finalized typology. I wrote nearly the entire paper, with edits and suggestions from both Dr. David Retchless and Dr. Alexander Klippel. This chapter has been published as the following:

Mason, J., Retchless, D., and Klippel, A. (2016). Domains of Uncertainty Visualization Research: A Visual Summary Approach. Cartography and Geographic Information Science. 44(4), 296-309.


Domains of Uncertainty Visualization Research: A Visual Summary Approach

The inherent uncertainty of geospatial data has engendered a critical research agenda addressing all facets of uncertainty visualization due to the communicative efficiency of graphical representation. To organize this broad research area, we have reviewed literature on geospatial uncertainty visualization and systematically and iteratively classified research in this field. Upon creating a classification, we developed several visual summaries over time, refining the classification and subsequent graphic as new relevant topics emerged. This visual summary extends existing approaches to taxonomies by allowing users a quick visual overview of relevant topics in a research area at a glance. For each research paper on uncertainty visualization, this classification can be used to visually represent which domains are covered. In order to ensure that the visual summary approach and the corresponding domains developed in this article can be used reliably, we performed an inter-rater agreement task. The high agreement reveals that the identified domains are intuitive and can lead to objective, reproducible classifications (visual summaries) of research papers. In future research we plan to refine the visual classification/summary approach by providing guided classification via a web interface, to visually classify the entire body of literature on geospatial uncertainty visualization, to visually explore trends in research topics and how they have changed over the years, and to identify sparser topics that still need to be addressed.

Keywords: classification, uncertainty, visualization, visual summary

Introduction

The inherent uncertainty of geospatial data (e.g., Duckham, Mason, Stell, & Worboys, 2001; Fisher, 1999; Hunter & Goodchild, 1993; Zhang & Goodchild, 2002) has engendered a critical research agenda addressing all facets of uncertainty including its identification, measurement, mitigation, and communication. Over the past few decades, a large portion of geospatial uncertainty research has focused on typologies and conceptual models (Duckham et al., 2001; Gahegan & Ehlers, 2000; MacEachren et al., 2005b; Potter, Rosen, & Johnson, 2012; Thomson, Hetzler, MacEachren, Gahegan, & Pavel, 2005) as well as computation of uncertainty (Aerts, Goodchild, & Heuvelink, 2003; Chilès & Delfiner, 2009; Gahegan & Ehlers, 2000; Journel, 1996) in order to organize, describe, and measure these numerous types of uncertainties (e.g., accuracy, completeness, currency, imperfection, subjectivity). This foundational1 research serves as an important predecessor of newer research areas, one of which aims to communicate geospatial uncertainty through various visualization approaches. In recent decades, these visualization techniques have taken a larger role in research as users have begun to adopt geospatial uncertainty visualization due to the communicative efficiency of graphical representation. This paper organizes and classifies the body of research in geospatial uncertainty visualization. Additionally, the results of this approach are represented as visual summaries.

Visual summaries are a deeply conceptual method aimed at leveraging human information processing capacities to provide at-a-glance overviews of foci and trends in high-volume research areas.

Visual Summaries

In their seminal paper “Why a Diagram Is (Sometimes) Worth Ten Thousand Words,” Larkin and Simon (1987) detail their observations on information processing and explain why diagrams are in many situations advantageous for human information processing over verbal descriptions. They posit that diagrams retain the “information about the topological and geometric relations among the components” (p. 66), allowing humans in many cases to search, recognize, and infer more efficiently during information processing. This is because a diagram combines all of the elements together, offering an easier-to-search representation that in turn simplifies recognition and inference. More specifically, it may be easier to detect patterns especially because of the clear presentation of information (frequently found at a single location). The advantages of diagrams over verbal descriptions also stem from the fact that diagrams assist in reaching more effective computational processes (e.g., humans process visual information with high efficiency), thus supporting inferences and problem solving through information processing tasks.

1 While “foundational research” is stated in this article, it should be noted that there has been research done even earlier than those cited in the examples, with several in the early 1990s.

While it is undisputed in many classic areas of cartography (Tufte & Graves-Morris, 1983) as well as in more recent developments in visual analytics (Andrienko, Andrienko, & Wrobel, 2007; Thomas & Cook, 2006) that good visual representation of information is critical for efficient and effective human information processing, certain areas are intriguingly exempt from these efforts. To the best of our knowledge, there has been little work that attempts to more comprehensively summarize a field of research visually2. One exception may be self-organizing maps (SOMs) (Skupin & Agarwal, 2008), which organize a group of elements from large datasets through a computational approach and output their relations to one another in a spatial representation. The authors liken the approach to clustering and dimensionality reduction methods. Like the SOM technique, we also attempt to cluster similar topics into groups based on their similarities. However, unlike the SOM, we developed a hierarchical classification that both represents the entire field and can be applied to every single paper (i.e., to visualize where a paper fits within the entire field). The SOM approach can also be limited in that it relies on the technique for extracting which topics or elements are to be visualized in the final output, possibly resulting in a group of topics that may not necessarily be of equal importance to the classification. Furthermore, the visual output of a SOM can be visually complex, making it harder for humans to process and search through all of the information presented in the representation. Another approach, by Kinkeldey, MacEachren, and Schiewe (2014), identifies three largely binary dimensions and provides a location-based indication of the content of a paper, but does not dive into a deeper classification of a field of research. We briefly discuss Kinkeldey and collaborators’ approach as an example and suggest improvements.

2 This statement applies especially to the geography research field, with which the authors are most familiar. It is difficult to ascertain among the diverse body of research across all domains whether there are other visual summaries in existence. Some work, especially from the information visualization field, shows variations of different types of visual summaries and approaches for visualizing different knowledge domains (e.g., Börner, Chen, & Boyack, 2003; Börner & Theriault, 2012; Cobo, López-Herrera, Herrera-Viedma, & Herrera, 2011; Elmqvist & Tsigas, 2007; Friedman, 2014; Gansner, Hu, & Kobourov, 2009; Skupin, Biberstine, & Börner, 2013).

The authors describe the field of uncertainty visualization through three popular dichotomous topics: coincident/adjacent, intrinsic/extrinsic, and static/dynamic. They further develop a cube where each axis represents one of the dichotomies. Focusing on uncertainty visualization papers that employ a user study, they apply color to one of the legs of the cube, with a total of eight possible combinations. Similar to Kinkeldey’s approach, this paper also identifies three main, albeit different, domains and applies a visual diagram (visual summary) to single research papers. In contrast, the approach presented in this paper identifies numerous sub-domains and does not impose a mutually exclusive classification. There are certainly research papers that can employ, for example, both extrinsic and intrinsic techniques; for these scenarios, we have developed a visual summary that allows multiple domains to be represented for each single paper. The following sections outline the development of our classification and explain the domains that will be applied to create visual summaries.
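To make the contrast concrete, the difference between a mutually exclusive cell in the cube and a multi-label visual summary can be sketched in code (a hypothetical illustration only; the label names below are placeholders, not the exact domain terms used in the article):

```python
# Kinkeldey et al.'s cube: each paper occupies exactly one of 2^3 = 8 cells,
# one value per dichotomy -- a mutually exclusive classification.
cube_cell = {
    "placement": "coincident",  # coincident | adjacent
    "encoding": "intrinsic",    # intrinsic  | extrinsic
    "display": "static",        # static     | dynamic
}

# The visual summary instead treats a paper's classification as a SET of
# (secondary domain, sub-domain) labels, so a paper that employs both
# intrinsic AND extrinsic techniques is directly representable.
paper_domains = {
    ("Visualization Techniques", "intrinsic"),
    ("Visualization Techniques", "extrinsic"),
    ("User Effects", "General individual differences"),
}

# Membership queries are then simple set operations.
uses_extrinsic = ("Visualization Techniques", "extrinsic") in paper_domains
print(uses_extrinsic)  # True
```

The set-based model is what allows a single paper's visual summary to highlight any number of domains at once, rather than forcing the paper into one of eight cells.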

Domains of Uncertainty Visualization Research

To organize the broad research topic of uncertainty visualization, we have reviewed literature on geospatial uncertainty visualization and systematically and iteratively classified research in this field. Upon creating a classification, we developed several visual summaries over time, refining the classification and subsequent graphic as new relevant topics emerged and revising the graphic so it could visually summarize the various topics in the field as well as support the hierarchical nature of the classification. This visual summary extends existing approaches to taxonomies, which often take an entire paper to describe textually, by allowing users a quick visual overview of all relevant topics in a research area at a glance. The following sections outline the development of the classification, explanations of the various domains within the classification, the creation of a graphic (visual summary) to communicate this classification, an example of the visual summary in use, validation, analysis, future work, limitations, and conclusions.

Geospatial Uncertainty Visualization Classification

Through a systematic review, we developed an unstructured list of the popular topics mentioned throughout the varied research on geospatial uncertainty visualization. Over the course of several months, we employed a modified affinity diagramming method as utilized by Skeels, Bongshin, Smith, and Robertson (2010) in order to assemble the content into meaningful groups. This process entails classifying individual topics into distinct groups, developing short descriptions of the groups and using them to name proto-domains for our classification system, and determining the relationships among these proto-domains to form a final hierarchy of main, secondary, tertiary, and quaternary domains. As we came across new topics and received feedback from researchers in this field, we continuously iterated on the domains to address weak points and key missing topics. Within the community we are engaging with, the classification seems to have reached a level of maturity sufficient to capture the conceptual essence of geospatial uncertainty visualization (see Figure 1.1 for the workflow).


Figure 1.1. Workflow for the modified Affinity Diagramming Method.

At later stages in its development, we presented this classification at two conferences, eliciting feedback from various researchers in this field, and further revised the classification. The final version presented here is the product of a thorough evaluation of the topics and, we argue, covers important aspects of uncertainty visualization research in the GIScience field. We do not contend this is a completely exhaustive list, but rather an account of prominent and important concepts widely discussed in geospatial uncertainty visualization. Our classification has a different and, we would argue, new component compared to existing taxonomies. The majority of existing taxonomies in the field focus more on different types of uncertainties with a heavy emphasis on types of data (Duckham et al., 2001; MacEachren et al., 2005b; Thomson et al., 2005) or on different visualization techniques (Pang, Wittenbrink, & Lodha, 1997; Sanyal, Zhang, Bhattacharya, Amburn, & Moorhead, 2009) rather than on a classification organizing the entire field of geospatial uncertainty visualization.

Domains Defined

The final classification presented here comprises three secondary domains: user effects, visualization techniques, and stimulus effects. These secondary domains are related across two main domains: the domain of the human user (comprising the user effects and stimulus effects secondary domains) and the domain of the computer-generated visualization (the visualization techniques secondary domain). There are two elements in the user domain because the first describes the user's state prior to viewing and interacting with the visualization, and the second describes the user's actions and state after viewing and interacting with the visualization. The prior element (user effects) can be thought of as answering: what knowledge, heuristics, etc. does the user bring to the reading, interpretation, and/or analysis task? The posterior element (stimulus effects) can be thought of as answering: given these prior user characteristics and the characteristics of the visualization, how will the user respond to, understand, and utilize the visualization? We can describe this model as user centered, since it places the visualization in the context of the user's existing characteristics and subsequent response. The visualization techniques secondary domain refers to the various components of and approaches to visualizing or organizing uncertainty visualizations. Several relevant tertiary domains are included within each secondary domain, describing related sub-topics within each. Table 1.1 outlines the different domains and gives a brief description of their tertiary domains. The rest of this section describes each domain in more detail.

Table 1.1. Domains of uncertainty visualization research (User Effects, Visualization Techniques, and Stimulus Effects), their tertiary domains, and brief descriptions of each.

User Effects
  General individual differences: General individual differences of the user (e.g., personality, abilities, and heuristics).
  Contextual individual differences: Individual differences related to the context/phenomena being visualized (e.g., heuristics and prior experience).

Visualization Techniques
  Data Type (Point, Line, Polygon, Network, Field): Visualization of uncertainty through a point, line, polygon, network, and/or field.*
  Taxonomy or Typology: Overview or description of uncertainty through a formal or informal taxonomy, typology, or list.**


  Representation: Visualization of uncertainty using an intrinsic and/or extrinsic technique.*
  Evaluation: Evaluation (formal or informal) in the research paper of an uncertainty visualization technique.
  Interactivity: Interactive and/or non-interactive visualizations.*
  Animation: Animated and/or static visualization.*
  Display: Adjacent and/or coincident displays showing uncertainty.*

Stimulus Effects
  User Comprehension (Map or Actual Data Uncertainty): A behavioral evaluation assessing whether a user comprehends either the map (through various simple map reading tasks) or, more deeply, the actual uncertainties associated with the data.*
  Affect: Uncertainty visualization eliciting some type of affect or emotional response, including trust, confidence, worry, anxiety, etc.
  Decision-Making: Research that assesses the impact of the uncertainty visualization on user decisions.

* Multiple tertiary domains within the same topic may be used in a single research paper when a researcher employs them in the same visualization, a comparison of visualizations, or a multi-stage research project (e.g., Viard, Caumon, & Lévy (2011) display uncertainty by comparing both adjacent and coincident approaches).

** When this is included in a research paper, all other tertiary domains besides evaluation are ignored in the visualization techniques domain section as it is expected that taxonomies/typologies will cover various visualization techniques and data types.
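The hierarchy in Table 1.1 lends itself to a simple data structure. The sketch below (in Python, with hypothetical identifiers that are not part of this dissertation) encodes the secondary and tertiary domains as a nested dict and derives an empty binary "visual summary" in which no domain is yet shaded:

```python
# A sketch (hypothetical identifiers) of the Table 1.1 hierarchy:
# each secondary domain maps to its tertiary domains.
CLASSIFICATION = {
    "user_effects": [
        "general_individual_differences",
        "contextual_individual_differences",
    ],
    "visualization_techniques": [
        "data_type", "taxonomy_or_typology", "representation",
        "evaluation", "interactivity", "animation", "display",
    ],
    "stimulus_effects": [
        "user_comprehension", "affect", "decision_making",
    ],
}

def empty_summary():
    """A blank visual summary: every tertiary domain starts unshaded (0)."""
    return {tertiary: 0
            for tertiaries in CLASSIFICATION.values()
            for tertiary in tertiaries}
```

A paper would then be summarized by flipping the covered domains to 1; quaternary distinctions (e.g., point vs. polygon, intrinsic vs. extrinsic) could be nested in the same manner.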

The user effects secondary domain distinguishes two types of individual differences that a user may embody (prior to interaction) which ultimately affect how they interact with an uncertainty visualization. General refers to individual differences such as abilities, heuristics, and personality that are not directly related to the phenomena being mapped. These differences are sometimes assessed using standardized tests in a study and can be employed in research to determine whether they influence how a user interacts with the visualization. More specifically, these individual differences do not necessarily arise from exposure to or experience with the mapped phenomena, but rather are relevant for how individuals will interact and behave in that experience. For example, numeracy abilities may allow an individual to better grasp probabilities in general, thus allowing them to better estimate the likelihood and risks of an approaching hurricane (the actual phenomenon being mapped in this example). On the other hand, contextually relevant individual differences (the other tertiary domain) are those directly related to the phenomena being mapped. Thus, prior experience with a hurricane (an experience directly connected to the mapped phenomenon) may affect the way a user interacts with and responds to a map of hurricane storm surge flooding probabilities, as well as the decisions he or she makes.

Within the visualization techniques3 section, seven tertiary domains are identified: data type, taxonomy/typology, representation, evaluation, interactivity, animation, and display. Five data types are further distinguished (i.e., point, line, polygon, field, and network), each representing one type of data that a research article may utilize for uncertainty visualization. Taxonomy refers to any article that presents some form of taxonomy, typology, or list of uncertainty visualization approaches. The representation section specifies two ways to explicitly represent data uncertainty, as proposed by Gershon (1998): extrinsic or intrinsic. Extrinsic techniques incorporate new geometric objects to represent the uncertainty (e.g., adding arrows, bars, or noise annotation lines), while intrinsic techniques incorporate the uncertainty within the existing object (e.g., altering brightness, color, blur, or transparency) (Deitrick, 2013). The evaluation tertiary domain indicates whether an article has evaluated a specific visualization approach. For example, a study may apply a new method, or an existing method to a new area, and the researchers may evaluate the effectiveness of the technique or its impacts on the user. Zuk and Carpendale (2006) also employ evaluation by assessing several uncertainty visualizations against principles outlined by Bertin (1973), Tufte (2001), and Ware (2004), without undertaking an actual behavioral study. Interactivity and animation refer to whether the method of visualizing uncertainty is interactive or non-interactive and animated or static, respectively. Finally, display identifies whether the visualization utilized a coincident or adjacent (side-by-side) display to present the data and its underlying uncertainties.

3 Note that non-visual methods are not included in this approach.

The final secondary domain describes effects the stimulus (i.e., the uncertainty visualization) has on the user, including user comprehension (of the map or data), affect, and decision-making. We contend that many evaluation methods of uncertainty visualization assess only basic map-reading skills (comprehension of the map), with only a select few evaluating whether users truly understand the deeper uncertainties inherent in the data (actual comprehension of the data). Smith, Retchless, Kinkeldey, and Klippel (2013) describe these differences as a surface or deeper comprehension of uncertainty attained by a map user. Many studies evaluate only participants' surface understanding of uncertainty. A deeper comprehension of uncertainty may include understanding the distribution or source of the uncertainty, how uncertainty is introduced in data (propagating from data collection, to transformations, to visualization) (e.g., Pang et al., 1997), thinking about multiple sources of uncertainty, and combining multiple uncertainties beyond a simple probability. For example, Aerts, Clarke, and Keuper (2003) asked participants to identify areas of urban growth by matching colors from the uncertainty map to the legend to find the associated uncertainty, a task requiring only basic map-reading skills without ascertaining whether users actually comprehend the intricacies of the uncertainty. The second tertiary domain specifies research assessing the affective impact of a visualization on a user, that is, the elicitation of an emotional response (e.g., trust, confidence, worry, anxiety). The final tertiary domain focuses on the impact an uncertainty visualization may have on the decision-making or reasoning processes a user employs upon interaction. Decision-making4 includes studies that evaluate how a visualization impacts actionable decisions rather than simple map-reading choices such as color preferences.

Developing a Visual Summary from the Classification

A graphic, the visual summary, was iteratively developed (Figure 1.2) to reflect this classification, both to organize and conceptualize the research field in a new way and to effectively and efficiently help readers grasp the topics within an uncertainty visualization research paper at a glance.

4 Another article published while this paper was under review is a relevant overview on decision-making under uncertainty: Kinkeldey, C., et al. (2015). "Evaluating the effect of visually represented geodata uncertainty on decision-making: systematic review, lessons learned, and recommendations." Cartography and Geographic Information Science: 1-21.


Figure 1.2. Domains of uncertainty visualization research divided into three main categories. (A) Blue: Visualization Techniques, (B) green: User Effects, and (C) purple: Stimulus Effects. The graphic is split into multiple parts for text legibility.

Figure 1.3 presents the process of a user (see section: Domains Defined) interacting with an uncertainty visualization, starting from the user's prior knowledge and skills (user effects), through the interaction with the actual visualization, including the specification of the visualization techniques employed (visualization techniques), to the impact that the stimulus has on the user, such as comprehension, decision-making, and affect (stimulus effects). As this can be a learning process by which the stimulus affects understanding, the stimulus effects domain is further linked back to the user effects domain: the visualization can change and develop the knowledge and heuristics that the user will bring to similar visualizations in the future.

Figure 1.3. Uncertainty Domains Process

To develop this graphic, we sought a visual that preserves the cyclical nature of this process by placing the visualization techniques between the user effects and stimulus effects secondary domains, and that separates the tertiary domains within each secondary domain in the outer rings to preserve the hierarchy. While a treemap (e.g., Shneiderman & Plaisant, 1998) visualization can offer an overview of a research area including its hierarchies, a circular shape was employed to maintain the aforementioned process and the relationships among the secondary domains5. Also note that the representation we developed to visually summarize classified articles allows for some flexibility in case the classification needs refinement in the future. Currently, the graphic (Figure 1.2) shades all sections with the corresponding domain hue. However, when applying the classification to an existing research paper, the graphic shades only those sections that the paper actually covers, providing a quick overview of the contents of a paper at a glance. The following section further elucidates this concept.

Example Application and Analysis

For each research paper on uncertainty visualization, this classification can be used to visually represent which domains are covered. The selected tertiary domains are then colored in the visualization, one graphic per research article, providing readers with a quick overview of the contents of an article before reading it in further detail. It is important to note, however, that some tertiary domains (e.g., intrinsic vs. extrinsic) may not be mutually exclusive within a research project; both may be used in different stages of a larger research process or compared in an article, resulting in both sections being highlighted for a single article. The visual summary also offers a different way to conceptually organize the field of uncertainty visualization research through a systematic organization of the literature. To date we have analyzed and visually summarized over 50 articles, with a subset of 18 in the appendix.

5 The final visualization was inspired by the design of a sunburst found on the https://d3js.org/ webpage. Sunbursts are excellent for visualizing hierarchical data; Stasko et al. (2000) compared treemaps to sunbursts and found overall better task performance and preference for the sunburst visualization, though treemap performance improved over time.

Stasko, J., Catrambone, R., Guzdial, M., & McDonald, K. (2000). An evaluation of space-filling information visualizations for depicting hierarchical structures. International Journal of Human-Computer Studies, 53(5), 663-694.

As an example application of the approach laid out in this paper, a publication by Finger and Bisantz (2002) has been assessed using the uncertainty domains classification, and a visual summary was created (see Figure 1.4). At a quick glance, the shaded regions identify which topics their paper covers and which it does not. As no information was collected on the prior experience or individual differences of the users, the User Effects section was left unshaded. The research evaluates intrinsic point symbols (glyphs) on a single coincident display, representing uncertainty through icons ranging from hostile to friendly. The first experiment attempted to determine whether users could distinguish differences among various static icon representations. Participants sorted (into piles), ordered (least to most friendly or hostile), and rated (on a continuous scale) icons to see if they could identify the various uncertainties and where they lie on a scale relative to one another. This task allowed the researchers to ascertain the ability of participants to distinguish and separate icons on a scale; however, whether users actually understand the uncertainty remains undetermined. The second experiment asked participants to identify point objects as hostile or friendly in an animated and interactive environment under time constraints. Each participant was presented with one of four conditions: a degraded image, a degraded image with probability (given either in the instructions or in the actual experiment), or a probability with a non-degraded icon. Over time, the probability of the objects changed and became more certain, and participants decided whether the object was hostile or friendly. These two experiments focused more on the intuition of icon meaning and the ordinal sorting of icons regardless of what they actually symbolized (uncertainty). It seems that users would still be able to rank the icons in a similar way given a different variable (other than uncertainty); thus the research focused more on understanding the map and graphical components rather than the actual uncertainties of the data. Therefore, only the map comprehension region has been shaded rather than comprehension of the uncertainty (the data tertiary domain). The decision-making region is not shaded because, while the experiment asked users to decide whether an icon on a screen was friendly or hostile, these are map-reading decisions rather than actionable decisions. Finally, taxonomy and affect were also left unshaded, as they were not covered in this research.

Figure 1.4. Visual summary for Finger and Bisantz (2002).

Validation of Visual Summaries

Summarizing articles can have a subjective element. To ensure that the visual summary approach and the corresponding domains developed in this article can be used reliably, we performed an inter-rater agreement task. Two of the authors summarized 20 articles independently of each other. Afterwards, the summaries of both raters were compared by calculating Cohen's kappa (Cohen, 1960). This measure takes into account both the agreement between the raters and the probability of chance agreement, ranging from 0 (agreement no better than chance) to 1 (complete agreement). The resulting Cohen's kappa of 0.89 indicates very high inter-rater agreement; the two authors nearly replicated each other's classifications. To examine usability further, the exercise was expanded to 40 articles, with an outside researcher undertaking the classification. The resulting Cohen's kappa of 0.62 still indicates substantial agreement between the outside researcher and the authors, suggesting that the domains identified above are intuitive and can lead to objective, reproducible classifications

(visual summaries) of research papers. We also looked into sources of disagreement in more detail. One of the more frequent disagreements occurred in the data comprehension domain; the confusion lay in deciding whether the research focuses on a user actually comprehending the data uncertainty versus only reading a map in a behavioral study. Taxonomy was another domain that frequently caused disagreement, which also influences subdomain classifications. We identified the reason for this disagreement as twofold. First, some articles do not produce formal taxonomies but rather lists and reviews of uncertainty visualization approaches (Riveiro, 2007; Stephens, Edwards, & Demeritt, 2012b). Second, though we state in the descriptions to include papers with more informal taxonomies and lists, when a rater does not follow this rule it can lead to numerous mismatched domains. For example, if an article is not classified as having a taxonomy (even if it should be), a rater may then include all of the visualization techniques, etc., even when the paper is describing articles and approaches not undertaken in the article of interest. Our approach to providing more reliable definitions is to stress the nature of taxonomies in our definition and to provide examples of both formal and informal taxonomies. Another misunderstanding occurred in the evaluation tertiary domain and which types of evaluations should be considered. This domain includes evaluations of other uncertainty visualization approaches or studies, as well as behavioral evaluations undertaken in the article of interest; it does not include papers that merely mention other studies containing evaluations.

Reflecting upon the findings from the inter-rater agreement exercise, we addressed these issues by clarifying the definitions and providing examples of each, so that future raters are less likely to encounter these problems.
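As a concrete illustration of the agreement measure, Cohen's kappa can be computed in a few lines. The sketch below uses hypothetical ratings, not the study's actual data; `rater_a` and `rater_b` hold each rater's shaded (1) / unshaded (0) decisions across a set of tertiary domains:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's (1960) kappa for two equal-length lists of ratings."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Probability that both raters choose the same category by chance.
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical shaded/unshaded decisions over ten tertiary domains.
rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
```

Here the raw agreement is 0.8, but correcting for chance shrinks kappa to roughly 0.58; the 0.89 and 0.62 values reported above would be computed the same way over the authors' domain-by-domain decisions.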

More Analysis

To demonstrate the value of this approach, we also provide a summary graphic (see Figure 1.5). After visually summarizing 40 articles, including the papers in the Appendix, we assessed how often each tertiary domain has been addressed by summing over all entries for each tertiary domain in the corresponding database. Figure 1.5 uses transparency to indicate frequencies: all sections are shaded with a black hue, and an opacity is then applied that is proportional to the percentage of articles covering that tertiary domain. This figure makes it easy to assess topical foci in uncertainty visualization research, especially as we incorporate more articles into the final classification. While we did not quantitatively document and compare the change in domains over time, several important patterns emerged. As seen by the darker shades, the majority of research has focused on visualization techniques, coincident displays, evaluations, and map comprehension. Several areas show a notable lack of research saturation: network-based representations are largely absent, only a select few research projects actually focus on user effects, a small number focus on affect, and a large majority of projects measure only "map" comprehension rather than data comprehension. While actually evaluating data comprehension is difficult, this should inspire future researchers to develop and refine current visualizations to better help users comprehend deeper uncertainty, and to devise newer evaluation methods to better measure it. Furthermore, it is apparent that more attention should be paid to individual differences and users' prior knowledge and experiences to understand how these may impact the way users interact with uncertainty visualizations.

Figure 1.5. A summary of the 40 research papers that were analyzed and visually summarized. To create this graphic we summed over all visual summaries, producing for each tertiary domain a number that reflects how often it has been addressed in research papers.
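The aggregation behind Figure 1.5 amounts to averaging the binary summaries. A minimal sketch (with hypothetical domain names and data) that turns a set of visual summaries into the per-domain fractions used as opacities:

```python
def domain_frequencies(summaries):
    """Fraction of articles covering each tertiary domain; this fraction
    can be applied directly as the opacity of the black shading."""
    n = len(summaries)
    totals = {}
    for summary in summaries:
        for domain, shaded in summary.items():
            totals[domain] = totals.get(domain, 0) + shaded
    return {domain: count / n for domain, count in totals.items()}

# Four hypothetical visual summaries reduced to three domains.
summaries = [
    {"evaluation": 1, "map_comprehension": 1, "affect": 0},
    {"evaluation": 1, "map_comprehension": 0, "affect": 0},
    {"evaluation": 0, "map_comprehension": 1, "affect": 1},
    {"evaluation": 1, "map_comprehension": 1, "affect": 0},
]
```

In this toy set, evaluation and map comprehension would be shaded at 75% opacity and affect at 25%, mirroring how darker regions in Figure 1.5 indicate more heavily researched domains.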


Conclusions and Outlook

In this article we proposed a conceptual approach to address the challenge of coping with the proliferation of research papers. First, we developed a methodological approach to gain deep insights into a field of research: a modified iterative affinity diagramming method. Second, we visually represented the results of classifying research papers using graphics based on principles of good design and an iterative design process (see discussion in section: Developing a Visual Summary from the Classification). This combination allows for effective and efficient access to the content of individual articles as well as an understanding of a field of research. In other, larger research fields, applying this methodology can certainly be challenging, as the number of domains may prove too numerous for a graphic like this. However, other visualizations and methods can be employed, such as adding interactivity so that users can click on domains and explore sub-domains on demand. Furthermore, depending on the intentions of the researchers, it is entirely possible to group a large number of categories into one encompassing category to prevent an overabundance of domains (e.g., rather than list data types such as point, line, polygon, and field, one could simply create vector and raster categories, compressing the four data types into two if deemed logical).

This classification may not be completely exhaustive, but it reflects the topics that we, as researchers in uncertainty visualization, view as prominent and important in the field. The research has been presented (not published) at several workshop venues and reflects intensive discussion among ourselves as well as feedback from other researchers on how to organize the field of uncertainty visualization research. Clearly, ever smaller topics could be included until the classification encompasses dozens of topics; however, doing so would not serve the purpose of this classification: to organize the field in a meaningful way and to help users identify the major and important topics in a single research paper. Furthermore, this approach does not take into account different types of uncertainties, which traditional classifications have previously concentrated upon. We purposely sought to create a new taxonomy focused on the process of interaction between the user and the visualization rather than on specific types of uncertainties, which are often still debated in research today. In future research we hope to visually classify (i.e., recreate the visual summary for) all of the geospatial uncertainty visualization papers we can find, visually explore trends in research topics and how they have changed over the years, and potentially identify sparser topics that still need to be addressed.

We are currently building a website that will allow researchers in geospatial uncertainty visualization to generate a graphic from this classification for their own research papers, as well as to add their papers and information to the database we have created. The current prototype allows users to interactively select which tertiary domains their research paper covers and automatically generates a graphic for them to use for their own purposes. We plan to later build in a search function so that visitors can identify articles that match whichever domains interest them.

Design and conceptual aspects

We incorporated various aspects of good design and went through an iterative design process with feedback from professional designers. There are some aspects that we did not include in the current design that may become important or desirable in the future.

Our top level includes three secondary domains: visualization techniques, user effects, and stimulus effects. For the time being, each secondary domain has been given the same space in the visual summary, resulting in some unintentional visual side effects. Visualization techniques contains the largest number of tertiary and quaternary domains; the result is that individual distinctions in its tertiary and quaternary domains receive less visual real estate. Two alternative approaches are conceivable. The first would assume that all distinctions on the finest level of granularity are equally important. In this case the design process, that is, the allocation of visual real estate, would start with the finest level of granularity (here, the quaternary domains), calculate the required space for each assuming equal size, and allocate the size of superordinate domains accordingly. The second approach would assume that not all domains and associated subordinate domains are equally important; a difference in importance could be expressed as a difference in the size of domains and their subordinate domains. While both approaches are possible, we decided for the time being to allocate the three secondary domains the same weight, but we do not exclude other options for future research6.

One intriguing aspect that we would like to discuss briefly is the question of how many visual summaries each article should receive. Especially in articles with behavioral experiments, it is not uncommon, for example, that two experiments with different foci are reported. To demonstrate this challenge, consider the article by Finger and Bisantz (2002), which discusses two experiments with different foci. Figure 1.4 shows how the article would be classified using one visual summary, while Figure 1.6 shows two separate visual summaries, one for each experiment in the article. For the time being we have selected the one-summary-per-article approach, but once the scientific community is using our online tool and the collection of data is distributed to interested individuals, a finer level of analysis is conceivable.

6 Stasko et al. (2000) also note that two "pie slices" can be directly compared only when they are on the same level, unless one examines the respective angles they subtend (p. 2).



Figure 1.6. Visual summaries for Finger and Bisantz (2002), one for each experiment (experiment one on the left and experiment two on the right).

A final conceptual aspect is the question of binary versus ordered versus continuous domains. For the time being we have focused on a binary classification approach because most of the domains we identified fall naturally into an either-or distinction, such as dynamic versus static, adjacent versus coincident, or extrinsic versus intrinsic. There are a few sub-domains for which it could be argued that finer distinctions are possible and/or desirable, for example, interactivity: we could distinguish between no, low, medium, and high interactivity. At this point in time we do not see the need for such a distinction; the added complexity of the classification would risk different summarizers not arriving at the same classifications.


Extended analysis

This article provides some first insights into the analysis possibilities that visual summaries offer. We discussed the advantage of being able to grasp the main content of an article in an instant (see Figure 1.4), and we briefly demonstrated that comparing many articles in a particular field (here, uncertainty visualization) allows for an understanding of developments in the field and the identification of research gaps specific to this classification. This analysis can be supported by simple summary graphics (see Figure 1.5) that provide an overview and quantification of topical foci.

There are several avenues for more detailed analyses, some of which we briefly discuss here. One potential advantage of cube-like visualizations (e.g., Kinkeldey, MacEachren, et al. (2014)) in comparison to visual summaries is that the similarity of articles is directly accessible through the distance-similarity metaphor, that is, the closer the articles are in a cube, the more similar they should be. While this is not directly possible with visual summaries, the classification data used to create visual summaries can also be used to calculate the similarity of research papers. This can be achieved through un-weighted approaches or by weighting the data according to, for example, domains/sub-domains. In an un-weighted approach, we could use the Hamming distance (Gusfield, 1997) to calculate the dissimilarity between papers. The Hamming distance would treat the entries in the database as a string and, by comparing individual entries ('0' or '1'), count the number of same versus different entries. Once a dissimilarity value has been established, a variety of approaches are available to further process the data, such as cluster analysis or multidimensional scaling.
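The un-weighted dissimilarity described above is straightforward to sketch. Assuming each paper's classification is stored as a binary string with one character per tertiary domain (the rows below are hypothetical, not entries from the actual database), the Hamming distance is simply the count of differing positions:

```python
def hamming(entry_a, entry_b):
    """Hamming distance between two equal-length '0'/'1' strings."""
    assert len(entry_a) == len(entry_b)
    return sum(x != y for x, y in zip(entry_a, entry_b))

# Hypothetical database rows: one character per tertiary domain.
paper_1 = "110100101010"
paper_2 = "110100111000"  # differs from paper_1 in two domains
paper_3 = "001011010101"  # the complement of paper_1: maximally dissimilar
```

The resulting pairwise distances could then feed a clustering or multidimensional scaling routine; a weighted variant would multiply each position's mismatch by an importance factor for its domain.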

There are opportunities for more sophisticated solutions, such as Skupin and Agarwal's (2008) self-organizing map method, as discussed earlier in the article. Aside from attempting to synthesize articles with a single summary or visual output, we hope to begin analyzing change over time across these different domains once the database has been built. It would be interesting to identify how various areas have proliferated and which have been the foci at certain points in time. Additionally, we hope to identify areas that still need attention within this classification, such as the lack of geospatial network uncertainty visualization approaches identified in the analysis section.

Analyses, both basic and advanced, could be enhanced through more details collected during the visual summary process as well as resources found on the web. It might be desirable to collect information regarding the affiliations of authors to better understand regional differences and collaborations across disciplinary boundaries. It would also be possible to include information extracted from websites such as Google Scholar, adding citation information for articles to enable further analyses.

Refinement of the research methodology

In the section Geospatial Uncertainty Visualization Classification we described the research methodology we applied to collect domains as well as sub-domains. The process so far has relied exclusively on natural cognitive agents; that is, through reading, summarizing, analyzing, and extensively discussing articles and preliminary classifications, lists of terms and concepts have been identified and refined. Other research efforts that aim at organizing a field of research have recently adopted approaches from text processing and machine learning. It is conceivable that a combination of approaches, that is, human analysis guided by machine learning and text processing, would allow for more rapid agreement on the most important and popular terms and concepts in a field as well as their hierarchical organization. While there is room for enhancement, we believe that the approach we have taken, both in the development of a classification and in the visual summary output, will provide an excellent approach for other researchers attempting to summarize other research fields and will help users digest literature at a conceptual level.
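As one hedged illustration of what such machine support could look like (this is not the method discussed in the article), a simple TF-IDF ranking can surface candidate terms from abstracts for human review; the mini-corpus below is invented:

```python
import math
from collections import Counter

# Invented mini-corpus of abstracts.
docs = [
    "uncertainty visualization for decision making under spatial uncertainty",
    "evaluating intrinsic and extrinsic uncertainty visualization techniques",
    "heuristics and biases in reasoning about uncertainty in spatial information",
]

tokenized = [d.split() for d in docs]
n_docs = len(tokenized)
# Document frequency: in how many abstracts does each term occur?
df = Counter(t for doc in tokenized for t in set(doc))

def tfidf(term, doc):
    """Term frequency * inverse document frequency for one document."""
    tf = doc.count(term) / len(doc)
    idf = math.log(n_docs / df[term])
    return tf * idf

# Terms that appear in every abstract (e.g. "uncertainty") score zero;
# distinctive terms rise to the top as candidates for the classification.
for doc in tokenized:
    ranked = sorted(set(doc), key=lambda t: tfidf(t, doc), reverse=True)
    print(ranked[:3])
```

A human analyst would then vet and hierarchically organize the surfaced terms, in line with the combined human/machine workflow suggested above.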


References

Aerts, Jeroen C. J. H., Keith Clarke, and Alex Keuper. 2003. "Testing Popular Visualization Techniques for Representing Model Uncertainty." Cartography and Geographic Information Science 30 (3): 249-61.

Aerts, Jeroen C. J. H., Michael F. Goodchild, and Gerard B. M. Heuvelink. 2003. "Accounting for Spatial Uncertainty in Optimization with Spatial Decision Support Systems." Transactions in GIS 7 (2): 211-30.

Andrienko, Gennady, Natalia Andrienko, and Stefan Wrobel. 2007. "Visual analytics tools for analysis of movement data." SIGKDD Explorations Newsletter 9 (2): 38-46. doi:10.1145/1345448.1345455.

Bertin, Jacques. 1973. Sémiologie graphique: Les diagrammes - Les réseaux - Les cartes.

Bisantz, Ann M., Richard T. Stone, Jonathan Pfauta, Adam Fouse, Michael Farry, Emilie Roth, Allen L. Nagy, and Gina Thomas. 2009. "Visual Representations of Meta-Information." Journal of Cognitive Engineering and Decision Making 3 (1): 67-91.

Chilès, Jean-Paul, and Pierre Delfiner. 2009. Geostatistics: Modeling Spatial Uncertainty. Vol. 497. John Wiley & Sons, Inc.

Cohen, Jacob. 1960. "A coefficient of agreement for nominal scales." Educational and Psychological Measurement 20 (1): 37-46.

Deitrick, Stephanie. 2012. "Evaluating implicit visualization of geographic uncertainty for public policy decision support." Proceedings AutoCarto 2012: 16-8.

———. 2013. "Uncertain Decisions and Continuous Spaces: Outcomes Spaces and Uncertainty Visualization." In Understanding Different Geographies, 117-34. Springer Berlin Heidelberg.

Deitrick, Stephanie, and Robert Edsall. 2006. "The Influence of Uncertainty Visualization on Decision Making: An Empirical Evaluation." In Progress in Spatial Data Handling, edited by Andreas Riedl, Wolfgang Kainz, and Gregory A. Elmes, 719-38. Netherlands: Springer Berlin Heidelberg.

Duckham, Matt, Keith Mason, John Stell, and Mike Worboys. 2001. "A formal approach to imperfection in geographic information." Futurescapes 25 (1): 89-103.

Finger, Richard, and Ann M. Bisantz. 2002. "Utilizing graphical formats to convey uncertainty in a decision-making task." Theoretical Issues in Ergonomics Science 3 (1): 1-25.

Fisher, Peter F. 1999. "Models of uncertainty in spatial data." Geographical Information Systems 1: 191-205.

Gahegan, Mark, and Manfred Ehlers. 2000. "A Framework for the Modelling of Uncertainty Between Remote Sensing and Geographic Information Systems." ISPRS Journal of Photogrammetry and Remote Sensing 55 (3): 176-88.

Gershon, Nahum. 1998. "Visualization of an Imperfect World." IEEE Computer Graphics and Applications 18 (4): 43-5.

Gusfield, Dan. 1997. Algorithms on strings, trees and sequences: computer science and computational biology. Cambridge University Press.

Hope, Sue, and Gary J. Hunter. 2007. "Testing the effects of positional uncertainty on spatial decision-making." International Journal of Geographical Information Science 21 (6): 645-65.

Hunter, Gary J., and Michael F. Goodchild. 1993. "Managing Uncertainty in Spatial Databases: Putting Theory into Practice." Journal of Urban and Regional Information Systems Association 5 (2): 52-62.

Journel, A. G. 1996. "Modelling uncertainty and spatial dependence: Stochastic ." International Journal of Geographical Information Systems 10 (5): 517-22.

Kinkeldey, Christoph, Alan M. MacEachren, and Jochen Schiewe. 2014. "How to Assess Visual Communication of Uncertainty? A Systematic Review of Geospatial Uncertainty Visualisation User Studies." The Cartographic Journal 51 (4): 372-86. doi:10.1179/1743277414Y.0000000099.

Larkin, Jill H., and Herbert A. Simon. 1987. "Why a Diagram is (Sometimes) Worth Ten Thousand Words." Cognitive Science 11 (1): 65-100.

Leitner, Michael, and Barbara P. Buttenfield. 2000. "Guidelines for the Display of Attribute Certainty." Cartography and Geographic Information Science 27 (1): 3-14.

MacEachren, Alan M. 1992. "Visualizing Uncertain Information." Cartographic Perspectives 13 (Fall): 10-9.

MacEachren, Alan M., Anthony Robinson, Susan Hopper, Steven Gardner, Robert Murray, Mark Gahegan, and Elisabeth Hetzler. 2005. "Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know." Cartography and Geographic Information Science 32 (3): 139-60.

Pang, Alex T., Craig M. Wittenbrink, and Suresh K. Lodha. 1997. "Approaches to Uncertainty Visualization." The Visual Computer 13 (8): 370-90.

Potter, Kristin, Paul Rosen, and Chris R. Johnson. 2012. "From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches." In Uncertainty Quantification in Scientific Computing, edited by Andrew M. Dienstfrey and Ronald F. Boisvert, 226-49. Springer Berlin Heidelberg.

Retchless, D. 2012. "Mapping Climate Change Uncertainty: Effects on Risk Perceptions and Decision Making." Paper presented at the AGU Fall Meeting.

Riveiro, Maria. 2007. "Evaluation of Uncertainty Visualization Techniques for Information Fusion." In 10th International Conference on Information Fusion, 1-8. Quebec, Canada: IEEE Press.

Roth, Robert E. 2009. "The Impact of User Expertise on Geographic Risk Assessment under Uncertain Conditions." Cartography and Geographic Information Science 36 (1): 29-43.

Sanyal, Jibonananda, Song Zhang, Gargi Bhattacharya, Phil Amburn, and Robert J. Moorhead. 2009. "A user study to compare four uncertainty visualization methods for 1d and 2d datasets." IEEE Transactions on Visualization and Computer Graphics 15 (6): 1209-18.

Sanyal, Jibonananda, Song Zhang, Jamie Dyer, Andrew Mercer, Philip Amburn, and Robert J. Moorhead. 2010. "Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty." IEEE Transactions on Visualization and Computer Graphics 16 (6): 1421-30.

Senaratne, Hansi, Lydia Gerharz, Edzer Pebesma, and Angela Schwering. 2012. "Usability of Spatio-Temporal Uncertainty Visualisation Methods." In Bridging the Geographic Information Sciences, edited by Jerome Gensel, Didier Josselin, and Danny Vandenbroucke, 3-23. Springer Berlin Heidelberg.

Shneiderman, Ben, and . 1998. "Treemaps for space-constrained visualization of hierarchies."

Skeels, Meredith, Bongshin Lee, Greg Smith, and George G. Robertson. 2010. "Revealing Uncertainty for Information Visualization." Information Visualization 9 (1): 70-81.

Skupin, André, and Pragya Agarwal. 2008. "Introduction: What is a Self-Organizing Map?" In Self-Organising Maps, 1-20. John Wiley & Sons, Ltd.

Smith, Jennifer, David Retchless, Christoph Kinkeldey, and Alexander Klippel. 2013. "Beyond the Surface: Current Issues and Future Directions in Uncertainty Visualization Research." Paper presented at the International Cartographic Conference, Dresden, Germany.

Spiegelhalter, David, Mike Pearson, and Ian Short. 2011. "Visualizing Uncertainty About the Future." Science 333 (6048): 1393-400.

Stephens, Elisabeth M., Tamsin L. Edwards, and David Demeritt. 2012. "Communicating probabilistic information from climate model ensembles—lessons from numerical weather prediction." Wiley Interdisciplinary Reviews: Climate Change 3 (5): 409-26. doi:10.1002/wcc.187.

Thomas, James J., and Kristin A. Cook. 2006. "A visual analytics agenda." IEEE Computer Graphics and Applications 26 (1): 10-3.

Thomson, Judi R., Elizabeth G. Hetzler, Alan M. MacEachren, Mark N. Gahegan, and Misha Pavel. 2005. "A Typology for Visualizing Uncertainty." Paper presented at the Conference on Visualization and Data Analysis, San Jose, California.

Tufte, Edward R. 2001. The visual display of quantitative information. Cheshire, CT: Graphics Press.

Tufte, Edward R., and P. R. Graves-Morris. 1983. The visual display of quantitative information. Vol. 2. Cheshire, CT: Graphics Press.

Viard, Thomas, Guillaume Caumon, and Bruno Lévy. 2011. "Adjacent versus coincident representations of geospatial uncertainty: Which promote better decisions?" Computers and Geosciences 37 (4): 511-20.

Ware, Colin. 2004. Information Visualization: Perception for Design. Morgan Kaufmann Publishers Inc.

Wittenbrink, Craig M., Alex T. Pang, and Suresh K. Lodha. 1996. "Glyphs for visualizing uncertainty in vector fields." IEEE Transactions on Visualization and Computer Graphics 2 (3).

Zhang, Jingxiong, and Michael F. Goodchild. 2002. Uncertainty in geographical information. CRC Press.

Zuk, Torre, and Sheelagh Carpendale. 2006. "Theoretical analysis of uncertainty visualizations." Paper presented at SPIE, San Jose, CA.

Appendix A. Papers Classified Under the Uncertainty Visualization Domains7

[The visual summary graphic for each paper is not reproduced here; the classified papers, sorted by year from oldest to most recent, are:]

(MacEachren, 1992); (Howard & MacEachren, 1996); (Wittenbrink, Pang, & Lodha, 1996); (Pang et al., 1997); (Gershon, 1998); (Leitner & Buttenfield, 2000); (Finger & Bisantz, 2002); (Lodha, Charaniya, Faaland, & Ramalingam, 2002); (Aerts, Clarke, et al., 2003); (Johnson & Sanderson, 2003); (MacEachren et al., 2005a); (Thomson et al., 2005); (Deitrick & Edsall, 2006); (Zuk & Carpendale, 2006); (Dooley & Lavin, 2007); (Hope & Hunter, 2007); (Riveiro, 2007); (Allendes Osorio & Brodlie, 2008); (Bisantz et al., 2009); (Gerharz & Pebesma, 2009); (Roth, 2009); (Sanyal et al., 2009); (Boller, Braun, Miles, & Laidlaw, 2010); (Sanyal et al., 2010); (Bisantz, Cao, Jenkins, & Pennathur, 2011); (Kubíček & Šašinka, 2011); (Spiegelhalter, Pearson, & Short, 2011); (Viard et al., 2011); (Boukhelifa, Bezerianos, Isenberg, & Fekete, 2012); (Brodlie, Osorio, & Lopes, 2012); (Deitrick, 2012); (Potter et al., 2012); (Retchless, 2012); (Senaratne, Gerharz, Pebesma, & Schwering, 2012); (E. M. Stephens, T. L. Edwards, & D. Demeritt, 2012a); (Brus, Voženílek, & Popelka, 2013); (Smith et al., 2013); (Stoll, Krüger, Ertl, & Bruhn, 2013); (Vullings, Blok, Wessels, & Bulens, 2013); (Kinkeldey, Mason, Klippel, & Schiewe, 2014); (Kinkeldey, MacEachren, et al., 2014); (Kinkeldey, MacEachren, Riveiro, & Schiewe, 2015); (Ruginski et al., 2016); (Şalap-Ayça & Jankowski, 2016)

7 Additional visual summaries are now included beyond the 18 published in the original article. These graphics are sorted by year from oldest to most recent. At least in this cursory classification and temporal sorting, there appear to be no distinct patterns over the past couple of decades of research. Keeping in mind that this is not an exhaustive classification of all geospatial uncertainty visualization articles, there may be patterns present over the years that are not immediately clear.

Chapter 2

This second paper is an introductory article to a special issue on uncertainty visualization to support reasoning and decision-making in Spatial Cognition and Computation. In response to two successful workshops on uncertainty visualization that I organized at COSIT 2013 and GIScience 2014, I led a special issue to gather novel research from the field. Three articles were chosen for inclusion, and the editors, myself included, wrote the introductory article. The article first outlines current research and issues related to using uncertainty visualization to support reasoning and decision-making and then describes the three articles in the issue. The paper utilizes the visual summary from paper one to show both the typology and the utility of its graphic representation for quickly situating each article within the field as a whole while also showing their unique contributions to the different sub-domains. It also allows for direct comparison between the three articles and visual identification of research topics in the field that remain uncovered. As for my contribution, I wrote the entire introduction and then requested edits from the other editors. This chapter has been published as the following:

Mason, J., Klippel, A., Bleisch, S., Slingsby, A., and Deitrick, S. (2016). Approaching Spatial Uncertainty Visualization to Support Reasoning and Decision-Making. Spatial Cognition and Computation: An Interdisciplinary Journal. Special Issue on Visually-Supported Reasoning with Uncertainty. 16(2), 97-105.


Approaching Spatial Uncertainty Visualization to Support Reasoning and Decision-Making

While research on uncertainty and decision-making has a long history across several disciplines, recent technological developments compel researchers to rethink how to best address and advance the understanding of how humans reason and make decisions under spatial uncertainty. This introduction presents a visual summary graphic to provide an overview of each article in this special issue. Upon viewing these visual summaries, the reader will find that each of these articles covers different topics in the uncertainty visualization domain, offering complementary research in this field. Extending this body of research and finding new ways to explore how these visualizations may help or hinder the analytical and reasoning process of humans continues to be a necessary step towards designing more effective uncertainty visualizations to support reasoning and decision-making.

Keywords: decision-making, reasoning, uncertainty visualization

Introduction

Uncertainty is ubiquitous in spatial data (Couclelis, 2003; Hope & Hunter, 2007). With more data and sophisticated tools available for exploring and analyzing them, additional research is imperative to advance the understanding of how humans reason and make decisions under spatial uncertainty. While research on uncertainty and decision-making has a long history across several disciplines, recent technological developments producing new open data sources compel researchers to rethink how to best address and understand uncertainty inherent in data and models. One such approach is to use visualization techniques proposed by the geographic visualization and information visualization communities.

When decisions are made from visualized geospatial data without the uncertainty explicitly mentioned or depicted with the dataset, it can lead to an inaccurate or misleading understanding of spatial patterns and processes. Hunter and Goodchild (1993) state that without proper attention to uncertainty, outcomes can result in the "use of wrong data, in the wrong way, to arrive at the wrong decision" (p. 55). Thus, recent efforts have attempted to support the decision-maker through the integration of uncertainty into visualizations. For a comprehensive overview of research in this area, see Kinkeldey et al. (2015).

The large number of current uncertainty visualization techniques draws mostly upon existing cartographic methods using standard visual variables (e.g., MacEachren, 1992; MacEachren et al., 2012); however, less research focuses on the impact these techniques have on reasoning and decision-making (Kinkeldey et al., 2015). This is largely due to the lack of comprehensive and generalizable empirical studies across the entire domain of uncertainty visualization. Additionally, while progress has been made, results are scattered across different disciplines (MacEachren et al., 2005a) and various contexts without enough communication and interdisciplinary work. This lack of comprehensive and generalizable empirical testing may partially be due to the numerous and conflicting definitions of uncertainty (Aerts, Clarke, et al., 2003; Pang et al., 1997). For instance, Deitrick and Edsall (2008) find that the terminology of uncertainty varies widely, with different terms being used across disciplines. This disagreement makes it hard to formulate generic theories and techniques across disciplines and domains.

Beyond the mixed terminology among researchers focusing on uncertainty, Aerts, Clarke, et al. (2003) point out that there is still only a small amount of literature and research addressing perceptual and cognitive questions as well as the effectiveness of various approaches to visualizing uncertainty. In many instances, we may be prematurely attempting to create uncertainty visualizations that do not appropriately take users and their heuristics, biases, experiences, and abilities into account. For example, researchers have identified that heuristics, or experience-based approaches aiding in reasoning, play a key role in reasoning under uncertainty (e.g., Tversky & Kahneman, 1974).

Heuristics are strategies that people use to simplify a difficult judgment or decision, such as understanding probabilities through a rule of thumb or common sense. In this special issue, Ruginski et al. used a think-aloud exercise to disentangle their results and found several heuristics that users employ to reason about potential damage to an oil rig. A potential issue arising from the use of heuristics and prior experience is that individuals are subject to several types of systematic errors, or biases, that affect their judgment capabilities (Tversky & Kahneman, 1974).

This presents a unique problem in the case of uncertainty visualization. Since both experts and non-experts apply heuristics and biases when they have only partial or uncertain information, it is likely that this characteristic of human decision-making will also apply when interacting with spatial visualizations like maps containing uncertainty. Thus, uncertainty visualization researchers must understand and face this issue and develop methodologies that will help users overcome these biases and make better-informed decisions8. Additionally, researching how other differences, including prior experience, knowledge, and abilities relevant to the context and uncertainty visualization, may impact the outcome (including decisions, comprehension, etc.) is important to supporting data visualization users.

8 MacEachren (2015) argues that rather than focusing simply on a visualization perspective, visual analytics methodologies can help people better reason and make decisions under uncertainty.

MacEachren, A. M. (2015). Visual analytics and uncertainty: It's not about the data. EuroVis Workshop on Visual Analytics. E. Bertini and J. C. Roberts. Cagliari, Italy.


Understanding how and when users deal with uncertainty to assist reasoning and decision-making is of central importance in research. To begin with, quantifying uncertainty so that the result is relevant to the decision maker is a necessary first step. In this special issue, Salap-Ayca and Jankowski calculate uncertainty to aid in the selection of agricultural lands to be placed in conservation and crop reserve. Aside from quantifying uncertainty, one must also question whether uncertainty should always be presented to the data visualization user. In the empirical work of Aerts, Clarke, et al. (2003), more than 70% of participants agreed that the visualization of uncertainty enhanced their analysis and decisions. The feedback from participants in their study, and in another by Leitner and Buttenfield (2000), was mostly positive: users felt that the incorporation of uncertainty visualization clarified the geospatial data rather than making it more complicated.

Similarly, Bisantz, Marsiglio, and Munch (2005) found that visualizing uncertainty enhanced the decisions of users, with decisions most impacted during times of greater uncertainty. In this special issue, Riveiro found that even with uncertainty present, experts reported high levels of confidence, significantly higher than novices. It should be noted, however, that the inclusion of uncertainty can in some cases decrease user confidence and make the data appear less reliable and unfavorable (Van Oort & Bregt, 2005). Moreover, some research finds that users may explicitly attempt to ignore uncertainty (Hope & Hunter, 2007) because, for example, they are not aware of it, they do not understand it, they do not know what to do with it, it makes the data appear less reliable or valid (Slingsby, Dykes, & Wood, 2011), it is too difficult to investigate, or it makes a decision too complicated. Based on these findings, successful visualization of uncertainty appears to be highly dependent on the context, the task, and the individuals or groups interacting with it.


This special issue arises from the continuing need and support for more research on Visually-Supported Spatial Reasoning with Uncertainty. While research on uncertainty visualization to support spatial reasoning and decision-making has been a prominent and important topic over the past few decades, calls for papers, workshops, research groups, and grants continue to appear in GIScience. The following research avenues show the prominent role uncertainty continues to play. At the 2016 AAG conference, several sessions sent out calls for papers on uncertainty and its visualization. A workshop on "visualization for decision making under uncertainty" and several papers on uncertainty visualization were presented at the VIS 2015 conference.

The National Center for Geographic Information and Analysis (NCGIA, 2015) is a consortium established in 1988, funded mainly by the National Science Foundation, with members from the University of California, Santa Barbara, the University at Buffalo, and the University of Maine. The first research area it undertook was accuracy and uncertainty in spatial data, and it continues to research this important topic. Another collaborative project (NCGIA, 2015) is underway between groups from the University of Utah, the University of Washington, Clemson University, and the University of California, Santa Barbara. The National Institutes of Health has an open funding opportunity, Spatial Uncertainty: Data, Modeling, and Communication, beginning in 2015, which includes ways to visualize and communicate spatial uncertainty. With this continued demand for research on the topic, this special issue responds to this need and is the logical extension of two successful and well-received workshops some of the editors held at the Conference on Spatial Information Theory in 2013 and GIScience in 2014.

The articles in this special issue present research in the area of visually-supported spatial reasoning with uncertainty. We have used a visual summary graphic developed by Mason, Retchless, and Klippel (2017) to provide an overview of each paper. This visual summary applies a graphic typology with various domains of uncertainty visualization research. Shaded regions show the domains that each paper addresses in its research. This typology frames uncertainty visualization research as comprising three major domains: user effects, visualization techniques, and stimulus effects.

User effects are characteristics of an individual user that will ultimately affect the way they interact with a visualization of uncertainty. These include individual differences, prior knowledge, and experience. Visualization techniques refer to the various ways in which uncertainty can be visualized, organized, and evaluated. This includes the type of data used, intrinsic or extrinsic representations, coincident or adjacent displays, etc. The final domain, stimulus effects, encompasses the various effects that the stimulus, or an uncertainty visualization, can have on the user. For instance, a visualization may impact the decisions a user makes or how they comprehend uncertainty, or it may elicit some sort of emotional response. There are numerous other sub-domains, which we discuss as each paper covers them.

In the article by Salap-Ayca and Jankowski, the authors explore the uncertainty in land allocation criteria weights from multi-criteria evaluation models to assist in identifying agricultural land that should be placed in land conservation and crop reserve. After running Monte Carlo simulations, they created maps of average suitability and uncertainty and further ran a sensitivity analysis. The authors employ both global and local methods to ascertain how local spatial heterogeneity impacts the criteria weights. To visualize the uncertainty of the suitability map, sensitivity maps were created focusing on the average and standard deviation of the weights.

This decision-making model offers a look into the uncertainty for each of the watersheds in Southwest Michigan, showing both the average suitability and the standard deviation (or uncertainty) in each. Combinations of high and low suitability with high and low standard deviation show how the uncertainty in the suitable areas can affect final decisions. The approach taken by the authors offers alternative options to support decision makers, providing them with multiple scenarios and their associated uncertainty. Figure 2.1 presents the visual summary of the aforementioned article. This article presents visualization techniques utilizing polygon and field (raster) data, a coincident display method, and intrinsic visualization of uncertainty.
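As a generic sketch of the underlying idea (not the authors' actual model), a Monte Carlo loop over perturbed criteria weights yields a per-cell mean suitability and a standard deviation that can be mapped as uncertainty; the criteria values and weight distributions below are invented for illustration:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# Two criteria scored 0..1 for each of four hypothetical map cells.
cells = [(0.9, 0.2), (0.5, 0.5), (0.1, 0.8), (0.7, 0.7)]

def sample_weights():
    """Draw perturbed criteria weights around 0.6/0.4 and re-normalize."""
    w1 = max(0.0, random.gauss(0.6, 0.1))
    w2 = max(0.0, random.gauss(0.4, 0.1))
    total = w1 + w2
    return w1 / total, w2 / total

# Each run scores every cell with a weighted sum under sampled weights.
runs = [[w1 * c1 + w2 * c2 for (c1, c2) in cells]
        for w1, w2 in (sample_weights() for _ in range(1000))]

# Per-cell mean = average suitability map; per-cell stdev = uncertainty map.
means = [statistics.mean(r[i] for r in runs) for i in range(len(cells))]
stdevs = [statistics.stdev(r[i] for r in runs) for i in range(len(cells))]
print([round(m, 3) for m in means])
print([round(s, 3) for s in stdevs])
```

Note how the cell whose two criteria agree (0.7, 0.7) gets a near-zero standard deviation: its score does not depend on how the weight is split, mirroring how low uncertainty in a suitability map signals decisions that are robust to the criteria weights.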

Figure 2.1. Visual Summary of Salap-Ayca and Jankowski (2016)

Ruginski et al. compare how different visualizations of the uncertainty of a hurricane track impact decision-making for non-experts in a controlled experiment. The five visualizations include: the traditional "cone of uncertainty" map as often presented by the National Hurricane Center, a cone without the center track line, a center line without the outer cone, a fuzzy boundary without the center track, and an ensemble of potential tracks. Other factors included varying the hurricane, temporal points, and oil rig locations. The study asked participants to estimate damage to the oil rig and, in a later task, to follow a think-aloud protocol and discuss their reasoning and decision-making strategies, which were coded into various heuristics. The coding of the think-aloud protocol yielded high inter-rater reliability (Cohen's kappa), revealing that participants used similar heuristics when reasoning and making decisions about the hurricane.
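The agreement statistic mentioned here can be computed directly; the sketch below implements Cohen's kappa for two raters (the heuristic category codes are invented, not taken from the study):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Expected agreement if both raters coded independently at random
    # according to their own marginal category frequencies.
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented think-aloud segments coded into heuristic categories.
r1 = ["distance", "size", "distance", "containment", "size", "distance"]
r2 = ["distance", "size", "distance", "containment", "distance", "distance"]
print(round(cohens_kappa(r1, r2), 3))  # 0.714
```

Kappa corrects raw percent agreement for the agreement two raters would reach by chance alone, which is why it is the conventional reliability measure for coded qualitative data such as think-aloud transcripts.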

Figure 2.2 reveals the different topics that the presented research covers: intrinsic and extrinsic visualization methods, line and polygon uncertainty data, a coincident display method, and an evaluation of the visualization techniques that implicitly ascertains whether users comprehend the various visual components presented on the maps. This research contributes both quantitative and qualitative methods to better understand how reasoning and decision-making heuristics interplay while users interact with various visualizations under different factor conditions (i.e., the hurricane forecast, temporal points, and oil rig locations). Importantly, the authors have focused on non-experts, the major consumers of these visual products, who are potentially affected by hurricanes and must reason from a variety of sources, including maps with uncertainty like those presented in the study, to ultimately make decisions about their well-being and property.


Figure 2.2. Visual Summary of Ruginski et al. (2016)

The study by Riveiro focuses on user expertise in assessing the threat of targets in an air traffic control simulation. Both groups of participants (all military officers) had some training in this area; however, their domain expertise varied in the length of practical experience (a maximum of 3 years for novices, more than 10 years for experts in air surveillance and risk assessment). Each participant was tasked with protecting a radar station from various targets; using an interactive system and map, they examined various information (e.g., altitude, distance, speed) and its uncertainties to make decisions about each target's potential threat and its priority (low, medium, or high) to be passed to the next in command. Overall, experts had more confidence with the additional uncertainty information than the novice users and performed better at correctly identifying targets in the simulations.


Figure 2.3 shows the comprehensive nature of this research in the uncertainty visualization field. As reflected in the visual summary, Riveiro evaluates how context (background related to the mapping context, i.e., novice versus expert domain expertise) affects decisions and comprehension of intrinsic and extrinsic uncertainty in an animated and interactive coincident display of both points and lines. By covering all three domains (user effects, visualization techniques, and stimulus effects), Riveiro obtains a comprehensive picture of the entire process of a user interacting with a visual display of uncertainty.

Figure 2.3. Visual Summary of Riveiro (2016)


Outlook

Upon viewing these visual summaries, the reader will find that each of these articles covers different topics in the uncertainty visualization domain, offering complementary research in this field. Understanding this uncertainty and its impact on users is a puzzle that researchers are now actively trying to solve. Uncertainty visualization will continue to be an important field of research as uncertainty plays an increasing role in data analysis and practical human decisions, driven by the increasing amount of data and the combination of various data sources. Despite the large and expanding utilization of geospatial data and its visualization, a largely ignored component in many analyses and visualizations is the uncertainty interwoven throughout the data. In many cases, this may cause user misinterpretation and poor reasoning and decision-making because users do not fully grasp the complexity of the different uncertainties arising from data collection, manipulation, analyses, and visualizations, and from our human cognitive capacities and biases.

Decision-making under uncertainty is a process that users across numerous domains must face. Visualization of uncertainty for geospatial data is a promising mode for presenting this attribute to support the many researchers and practitioners who reason about and make decisions from their data. The large knowledge gap in this area is the application and extension of research on individual differences, prior experience, and conceptualization of uncertainty in other research areas, and how these might apply in designing effective uncertainty visualizations to support reasoning and decision-making. Furthermore, extending this body of research and finding new ways to explore how these visualizations may help or hinder the analytical and reasoning process of humans continues to be a necessary step towards better knowledge and decisions that take all available evidence into account.


References

Aerts, J. C. J. H., Clarke, K., & Keuper, A. (2003). Testing Popular Visualization Techniques for Representing Model Uncertainty. Cartography and Geographic Information Science, 30(3), 249-261.

Bisantz, A. M., Marsiglio, S. S., & Munch, J. (2005). Displaying Uncertainty: Investigating the Effects of Display Format and Specificity. Human Factors: The Journal of the Human Factors and Ergonomics Society, 47(4), 777-796.

Couclelis, H. (2003). The certainty of uncertainty: GIS and the limits of geographic knowledge. Transactions in GIS, 7(2), 165-175.

Deitrick, S., & Edsall, R. (2008). Making Uncertainty Usable: Approaches for Visualizing Uncertainty Information. In M. Dodge, M. McDerby, & M. Turner (Eds.), Geographic Visualization: Concepts, Tools and Applications (pp. 277-291). Chichester, UK: John Wiley & Sons, Ltd.

Hope, S., & Hunter, G. J. (2007). Testing the effects of positional uncertainty on spatial decision-making. International Journal of Geographical Information Science, 21(6), 645-665.

Hunter, G. J., & Goodchild, M. F. (1993). Managing Uncertainty in Spatial Databases: Putting Theory into Practice. Journal of Urban and Regional Information Systems Association, 5(2), 52-62.

Kinkeldey, C., MacEachren, A. M., Riveiro, M., & Schiewe, J. (2015). Evaluating the effect of visually represented geodata uncertainty on decision-making: systematic review, lessons learned, and recommendations. Cartography and Geographic Information Science, 1-21.

Leitner, M., & Buttenfield, B. P. (2000). Guidelines for the Display of Attribute Certainty. Cartography and Geographic Information Science, 27(1), 3-14.

MacEachren, A. M., Robinson, A., Hopper, S., Gardner, S., Murray, R., Gahegan, M., & Hetzler, E. (2005). Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know. Cartography and Geographic Information Science, 32(3), 139-160.

MacEachren, A. M., Roth, R. E., O'Brien, J., Li, B., Swingley, D., & Gahegan, M. (2012). Visual Semiotics & Uncertainty Visualization: An Empirical Study. IEEE Transactions on Visualization and Computer Graphics, 18(12), 2496-2505.

Mason, J. S., Retchless, D., & Klippel, A. (in revision). Domains of Uncertainty Visualization Research: A Visual Summary Approach. Cartography and Geographic Information Science.

Modeling, Display, and Understanding Uncertainty in Simulations for Policy Decision Making. (2015). Retrieved from http://visunc.sci.utah.edu

NCGIA. (2015). NCGIA Overview. Retrieved from http://www.ncgia.ucsb.edu/about/overview.php

Pang, A. T., Wittenbrink, C. M., & Lodha, S. K. (1997). Approaches to Uncertainty Visualization. The Visual Computer, 13(8), 370-390.

Slingsby, A., Dykes, J., & Wood, J. (2011). Exploring Uncertainty in Geodemographics with Interactive Graphics. IEEE Transactions on Visualization and Computer Graphics, 17(12), 2545-2554.

Tversky, A., & Kahneman, D. (1974). Judgment Under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124-1131.

Van Oort, P., & Bregt, A. (2005). Do Users Ignore Spatial Data Quality? A Decision-Theoretic Perspective. Risk Analysis, 25(6), 1599-1610.

Chapter 3

The third paper provides a more substantial contribution in terms of the research undertaken. The case study presented examines how people make evacuation decisions under uncertainty (i.e., probability) for an approaching hurricane after viewing maps of potential flooding modeled on the National Hurricane Center storm surge flood maps released in 2014.

The research examines both individual differences (the user effects from the first paper) and how the flood maps impact the evacuation decisions users make. Employing the visual summary developed in the first article, the following figure (see Figure 3) shows the various domains covered in the subsequent article. This study delivers a more holistic research approach by including aspects from all three major domains outlined in the visual summary.

Figure 3. Visual summary graphic showing the domains covered in the third article of the dissertation.


Visualizing Storm Surge: A Holistic Approach for Assessing Factors in Uncertain Storm Surge Evacuation Decisions

In 2014, new storm surge flood maps were released by the National Hurricane Center. With widespread dissemination of National Hurricane Center graphics and data, it is important to evaluate the effect that the new storm surge flood maps may have on risk perception and behavioral intention. It is equally important to examine how individual differences, such as prior experience and abilities (e.g., numeracy), may shape the way the public interacts with the visualizations. The research in this paper examined various factors, including individual differences and map characteristics, to identify their relationship to evacuation decisions. This was conducted through an iterative process in which results from the first study informed the two subsequent studies, composing a more comprehensive understanding of decision-making under uncertainty for an approaching hurricane. Study one showed that greater certainty of flooding correlated with higher evacuation rates at marginal significance. Participants also paid attention to the flood height category: they chose to evacuate most often in the highest flood height zone closest to the ocean and least often in the lowest (blue) flood height zone farthest from the ocean. Evacuation rates were also higher overall in the mild flood scenario than in the more severe scenario. Exploring the maps further revealed that, because the maps were based on real data, in the mild flood scenario a lower flood height zone lay directly adjacent to the ocean instead of the highest flood height zone, potentially influencing the results. In study two, the maps in the mild flood scenario were redrawn to close the gap in the highest flood zone so that only this zone occurred directly adjacent to the ocean. The data showed that participants again evacuated more in the higher flood zones and mild flood scenarios.
Study three attempted to disentangle how distance to the ocean and the flood height zones impacted decisions. The results revealed that participants chose to evacuate more at locations closer to the flood source (i.e., the ocean); once farther from the source, they used flood height as a strategy for choosing when to evacuate.

Keywords: uncertainty visualization, decision-making, flood, individual differences


Introduction

Uncertainty arises the moment we attempt to measure and observe the real world. Pang et al. (1997) identify three phases in which uncertainty is introduced: data acquisition through models or measurements; data transformation (e.g., rescaling, resampling); and visualization, which further abstracts the original dataset. While these uncertainties are inherent in geospatial data, Hirschberg et al. (2011) note that both the National Research Council and the American Meteorological Society identified uncertainty as a crucial component of hydrometeorological forecasts. Identifying these uncertainties and communicating them to users is an important focal point of a large body of research.

Hurricane graphics are one mode of communication that offers the public access to various uncertainties (e.g., uncertainty of wind speeds or of the central track) for different upcoming points in time. Several studies have evaluated current hurricane visualizations and found common misinterpretations among the general public. One of the focal graphics for hurricane forecasts is the track forecast cone, or as the public and media coined it, the cone of uncertainty (Broad, Leiserowitz, Weinkle, & Steketee, 2007). Per the National Weather Service (2013), the cone visualizes the likely central track of a hurricane (not its actual extent or magnitude). This region combines circles at different temporal intervals whose radii represent two-thirds of forecast errors over the past five years. During Hurricane Wilma in 2005, peak traffic reached 1.1 billion hits a day on the National Hurricane Center website (Rappaport et al., 2009). With such popularity, it is essential that forecasters evaluate the different graphics and ensure the information they provide is communicated and received accurately. Many users, especially the lay public, misinterpret the cone of uncertainty as representing the actual size of the hurricane (Broad et al., 2007) and misunderstand the variance around the seemingly certain solid black track line (Orlove, Broad, & Meyer, 2010; Stephens et al., 2012a). Broad et al. (2007) describe the destruction caused in part by misinterpretation of the cone of uncertainty for Hurricane Charley in 2004. Although residents of Charlotte County were clearly within the cone of uncertainty for four days, many people focused too heavily on the forecasted solid black track line, which did not cross their town, and assumed they were safe. Criticism of the graphic focused heavily on how it reinforced the public's confidence in the solid black line: "Ironically, a graphic intended to convey uncertainty may have had the opposite effect" (p. 665).

With the widespread dissemination of National Hurricane Center graphics and data, it is important to evaluate the effect these visualizations may have on risk perception and behavioral intention. It is equally important to examine how individual differences, such as prior experience and abilities (e.g., numeracy), may shape the way the public interacts with these graphics. In 2014, the National Hurricane Center (NHC) disseminated new storm surge flood maps (Figure 3.1) to the public, which warrant evaluation given the impact they may have on the general public. Recently, the NHC revised the maps to add levee areas and intertidal zones or wetlands, and to change the lowest flood range from simply up to 3 feet to greater than 1 foot and up to 3 feet above ground. This research evaluates the former map design; future evaluations should also examine how changing the range of the lowest flood height category may impact decision-making.


Figure 3.1. Template of the National Hurricane Center flood map released in 2014

The maps were released as an experimental product "to show the extent and depth of possible storm surge flooding for a given storm. It represents a reasonable estimate of worst-case scenario flooding of normally dry land at particular locations due to storm surge" (National Hurricane Center, 2016, p. 1). These maps are especially important to assess because "storm surge is often the greatest threat to life and property from a hurricane" (National Hurricane Center, 2014, p. 1). In the past, even low-category hurricanes have led to catastrophic property damage and loss of life. With this knowledge, this work evaluates how individual differences among users affect risk belief and behavioral intention when interacting with maps built from the National Hurricane Center storm surge GIS data and replicating the appearance of the NHC maps.


Beyond the map itself, the prior experience and abilities of the user are often overlooked in uncertainty visualization research in the spatial sciences. Several researchers have recognized that heuristics help people reason under uncertainty (Doswell III, 2004; Kahneman, Slovic, & Tversky, 1982; Tversky & Kahneman, 1973). Heuristics are experience-based strategies that assist in making often-difficult judgments or decisions; they can also lead to systematic errors or biases that affect those judgments. Kahneman et al. (1982) discuss one heuristic relevant to processing the uncertain information in hurricane maps: the availability heuristic. Under this heuristic, people subjectively judge the probability of an event by how easily they can recall similar or prior instances, an effect strengthened by repeated exposure to such instances (Tversky & Kahneman, 1973). People thus tend to overestimate the probability of events whose instances are easy to recall, regardless of actual likelihoods and probabilities. In terms of hurricanes, these heuristics can have devastating consequences for people who, for instance, falsely underestimate the likelihood of a hurricane because they cannot easily recall a previous storm of a similar nature. With the rising complexity of choices under uncertainty, people often rely on heuristics to make sense of uncertain situations, and it is vital to understand the impact they may have when lives are at risk.

Similar to the availability heuristic, prior experience may weigh heavily in the choices people make for future hazards. Keller, Siegrist, and Gutscher (2006) cite research by Weinstein (1989) finding that prior experience plays a role in the perception of a hazard. Keller et al. (2006) also cite numerous studies finding an influence of prior experience with earthquakes on the adoption of hazard precautions (Jackson, 1981), the purchase of flood insurance following previous damage (Baumann & Sims, 1978; Zaleskiewicz, Piskorz, & Borkowska, 2002), recency of floods leading to higher perceived risk of future threats (O'Connor, Yarnal, Dow, Jocoy, & Carbone, 2005), and experience with floods as a predictor of perceived risk (Siegrist & Gutscher, 2006). However, disentangling positive and negative prior experiences will be important in associating their potential impact on individuals' future behaviors.

Beyond experience, individual skills, such as the capability to work with and comprehend numbers, have a strong potential to impact how individuals understand and reason about uncertain situations tied to, for example, numerical forecasts (e.g., probability of rainfall). The Subjective Numeracy Scale, one of multiple ways to gauge numeracy skills, is an 8-item self-assessment of one's ability and comfort working with numbers (Fagerlin et al., 2007). Fagerlin et al. (2007) validated the scale, finding a correlation between one's subjective assessment and objective numeracy test measures. Research by Zikmund-Fisher et al. (2008) found that "stronger perceived numeracy was related to weaker (and ultimately more accurate) risk beliefs" (Severtson & Myers, 2013).

The final individual difference measured in this study is gender. Severtson and Myers (2013) discuss gender differences in risk perception and cite research by Slovic (1999) finding that females tend to judge risk higher than males.

Previous Research

With the release of the National Hurricane Center (NHC) storm surge flood maps in 2014 and the large number of people who use these graphics during a hurricane, evaluating their potential impact on evacuation decisions is a priority. Sherman-Morris, Antonelli, and Williams (2015) analyzed the effectiveness of different colors and legend values for the NHC storm surge maps. Despite no statistically significant results for the legend conditions, eye-tracking revealed that participants had more fixations on the legend when feet were used (i.e., <3, 3-6, 6-9, 9-12+) rather than categorical text (i.e., low, med, high, extreme), suggesting the numeric legend may be more difficult to interpret, though it did not impact accuracy. Using questions on perceived risk, perceived helpfulness, and accuracy, the researchers found that a sequential blue color scheme was the hardest to decipher and that a green-to-red condition was preferred among participants. The green-to-red color scheme was also perceived as most helpful and led to somewhat higher accuracy, though not significantly higher than the other color schemes. It is important to note, however, that many people are red-green colorblind and may find it difficult or even impossible to distinguish some of the colors on such a map, which could lead to significant populations making decisions based on false interpretations. The NHC may have settled on the final color scheme shown in Figure 3.1 based on this knowledge. The study by Sherman-Morris et al. (2015) was a first step toward evaluating the NHC maps; however, it did not attempt to understand how map users make evacuation decisions.

The research in this paper uses three sequential studies to evaluate how both individual differences and various map characteristics may impact users' evacuation decisions when using storm surge flood maps built from NHC GIS data in the NHC map style. The first study measures various individual differences to ascertain whether they impact participants' decisions to evacuate. It also explores visual properties of the map (i.e., severity of the flooding and flood height category) as well as the probability of flooding to identify their effect on evacuation decisions. Based on the findings from the first study, the second study employs the same setup with small alterations to the map to determine whether those map differences had any bearing on participants' decisions. The final study attempts to disentangle the findings from the first two studies and examine how, and whether, distance and flood height categories play a role in evacuation decisions.


Study 1

While Sherman-Morris et al. (2015) evaluated the color schemes for the NHC flood maps, this research attempts to understand decision-making under uncertainty more holistically by examining both individual differences of the map users and the evacuation decisions people make when interacting with the recently released NHC hurricane flood maps in an experimental scenario. More specifically, how do users from states affected by hurricanes (i.e., the eastern and southern coastal states of the United States) respond to storm surge flood maps for an approaching hurricane? The following research questions and their respective hypotheses drive the setup and analyses:

1. Does flood height impact evacuation decisions of participants?

It is hypothesized that at higher flood heights, people are more likely to choose to evacuate due to the severity of the scenario.

2. Are there differences in decisions made by each gender?

Based on the aforementioned research by Severtson and Myers (2013), females are expected to evacuate sooner than males because they tend to judge risk higher. Thus, females will evacuate at lower flood probabilities (i.e., at 10% rather than 30%) and lower flood heights than males.

3. Does a higher subjective numeracy score correlate with the decisions participants make?

The higher a participant's subjective numeracy, the longer they are expected to wait to evacuate. This is based on research reported by Severtson and Myers (2013) identifying that "stronger perceived numeracy was related to weaker (and ultimately more accurate) risk beliefs" (p. 5), and is anticipated as higher evacuation thresholds for both flood probability and flood height.

4. How does a higher availability heuristic correlate with decision-making?

Tversky and Kahneman (1973) point out that people can overestimate the likelihood of events, specifically their probability, when they can more readily recall similar or prior instances of those events. Based on this availability heuristic, it is hypothesized that the higher a participant's measured availability heuristic, the lower the flood heights and flood probabilities at which they will choose to evacuate, due to their overestimation of the likelihood of flooding.

5. How do users respond to a mild versus a more severe flooding scenario?

When presented with a more severe flood scenario, it is expected that participants will evacuate more than in mild flood scenarios.

6. How does the probability of flooding (10% versus 30%) impact evacuation decisions?

With more certainty (i.e. higher probability of flooding) participants are more likely to evacuate.

Methods

Materials and Design

On each storm surge flood map, point A marks the location for which participants decided whether or not to evacuate. Figure 3.2 shows an example of the map.


Figure 3.2. Example of storm surge flood map with point A as the location for potential evacuation.

To prevent external factors from influencing user decisions, highways and roads were removed from the original maps and the complex coastlines were simplified (smoothed). These steps help prevent potential biases, including any familiarity with the storm surge location. The simplification and modifications anonymized the maps sufficiently; no participant reported recognizing the mapped location.

The study is divided into two surveys, with participants randomly assigned to one of the two conditions: one containing maps with a 10% probability of flooding and the other with 30%. These two probabilities were chosen for their high degree of uncertainty, to explore decision-making under very uncertain conditions. Aside from the different probability of flooding, the maps were otherwise identical.

In total there are 96 maps, resulting in 96 evacuation decisions for each user within the mixed-design study. The maps are altered to cover each of the following factors: severity of the flood (mild versus severe) and flood height (blue, yellow, orange, or red). This results in a 2 x 2 x 4 mixed design with one between-participants factor (10% versus 30% probability) and a 2 x 4 within-participants design for each probability survey. The following additional modifications are made to ensure experimental repetition and avoid potential bias: coastline orientation (vertical versus horizontal) and six randomly placed points in each flood range (i.e., color), one per question. All maps are identical between the two surveys except for the stated probability, 10% and 30% respectively. The maps also replicate the colors, placement, and text of the original NHC storm surge flood maps released in 2014. The following figures show the difference in severity (Figure 3.3), flood heights (Figure 3.4), coastline orientation (Figure 3.5), and an example of the placement of six points within one flood height (Figure 3.6).
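As a sanity check on the design described above, the factorial structure can be enumerated directly; the labels below are hypothetical names for the factor levels, not taken from the study materials.

```python
from itertools import product

# Hypothetical labels for the within-participant factors described above:
# 2 severities x 4 flood-height zones x 2 coastline orientations
# x 6 randomly placed points per zone = 96 maps per survey.
severities = ["mild", "severe"]
flood_heights = ["blue", "yellow", "orange", "red"]
orientations = ["horizontal", "vertical"]
point_ids = range(1, 7)

conditions = list(product(severities, flood_heights, orientations, point_ids))
print(len(conditions))  # 96 evacuation decisions per participant
```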

Figure 3.3. Two maps showing the difference in severity of the flood (left: mild, right: severe).


Figure 3.4. Four maps displaying the placement of the location within each of the four flood heights. Blue: up to 3 feet above ground, yellow: greater than 3 feet above ground, orange: greater than 6 feet above ground, and red: greater than 9 feet above ground.

Figure 3.5. Two maps revealing the different coastline orientations (left: horizontal, right: vertical)


Figure 3.6. Six maps indicating the random placement of 6 points in one flood height condition.

Procedure

Each survey consisted of the same questions and format: a demographic section, subjective numeracy scale questions, prior experience with hurricanes and evacuation, questions to measure the availability heuristic of each participant, a map reading section to ascertain that participants can both read the maps correctly and identify the correct probability of the maps, and the evacuation decision questions for each map rendition. For each of the 96 maps, participants answered the following question, randomly ordered, one per page:

If you lived in a single level house or on the ground level of an apartment building at point A, would you evacuate the area to a safer location?

Participants

Participants were recruited from Amazon Mechanical Turk (AMT) and limited to people with approval ratings of 95% or greater on previous tasks to obtain higher-quality responses. Once a participant accepted the task (also known as a Human Intelligence Task, or HIT), they were redirected to the Qualtrics survey and randomly placed in the 10% or 30% survey condition. Each participant then began the survey, or was notified that they were not eligible to participate if they were not located in an eastern or southern coastal state impacted by hurricanes. Upon completion, each person was paid $2.00; the 10% survey took an average of 19 minutes to complete and the 30% survey 18 minutes. In total, there were 40 participants in the 10% survey and 40 in the 30% survey. Upon inspecting the map reading questions, two participants were omitted, one from each survey: one could not identify the correct probability of flooding or which flood height color was higher, and another responded yes to every question in the evacuation section, rushing through the survey. This resulted in 39 participants for the 10% survey (24 males and 15 females) and 39 participants for the 30% survey (14 males and 25 females). The average age of participants was 36 (range 22 to 61) in the 10% survey and 35 (range 19 to 66) in the 30% survey.

Results

10% Probability Survey

The following table outlines the different evacuation decisions for each flood height color in the mild and severe flood scenarios as well as overall.

Table 3.1. Evacuation results for different flood heights with a 10% probability. Each value indicates the number of times participants chose to evacuate or not, with the corresponding percentages, for the mild and severe scenarios as well as overall.

                 Evacuated (% Yes)    Didn't Evacuate (% No)
Mild Blue        51  (10.90%)         417 (89.10%)
Severe Blue      47  (10.04%)         421 (89.96%)
All Blue         98  (10.47%)         838 (89.53%)
Mild Yellow      247 (52.78%)         221 (47.22%)
Severe Yellow    224 (47.86%)         244 (52.14%)
All Yellow       471 (50.32%)         465 (49.68%)
Mild Orange      439 (93.80%)         29  (6.20%)
Severe Orange    376 (80.34%)         92  (19.66%)
All Orange       815 (87.07%)         121 (12.93%)
Mild Red         446 (95.30%)         22  (4.70%)
Severe Red       455 (97.22%)         13  (2.78%)
All Red          901 (96.26%)         35  (3.74%)
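The percentages in Table 3.1 are simple proportions of the evacuation counts; a minimal sketch of that computation follows (the helper name `evac_pct` is mine, not the study's).

```python
def evac_pct(evacuated, stayed):
    """Percent of decisions that were 'evacuate', rounded to two decimals."""
    return round(100 * evacuated / (evacuated + stayed), 2)

# Mild Blue row of Table 3.1: 51 evacuated, 417 did not.
print(evac_pct(51, 417))   # 10.9, reported as 10.90%
print(evac_pct(446, 22))   # 95.3, the Mild Red row
```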

Figure 3.7. Bar chart of the percent evacuated for each flood zone and severity (mild versus severe) for the 10% survey.

Table 3.1 and Figure 3.7 both reveal that evacuation rates in the 10% survey increase from the lowest to the highest flood heights, consistent with the original hypothesis. Interestingly, more participants chose to evacuate in the mild hurricane scenarios than in the severe scenarios in all flood zones except the highest flood zone, red, with greater than nine feet of flooding. This will be revisited in the discussion section for further exploration. An independent samples t-test comparing male responses (N = 24, M = 0.63, SD = 0.48) to female responses (N = 15, M = 0.58, SD = 0.49) shows no statistically significant difference in evacuation decisions (t(26) = 0.73, p = 0.47, d = 0.10), providing no support for the original hypothesis that females judge risk higher than males and would therefore be more likely to evacuate. Alternatively, the maps may not have conveyed risk strongly enough for a gender difference to emerge.
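For readers who want to reproduce this style of analysis, the sketch below implements Welch's independent-samples t statistic and Cohen's d from summary statistics in plain Python. Plugging in the rounded group summaries reproduces the reported effect size (d = 0.10); a t value computed this way from rounded summaries will not necessarily match the reported t(26) = 0.73, which was presumably derived from the raw responses.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and its degrees of freedom from summary stats."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Rounded summaries from the 10% survey: males vs. females.
d = cohens_d(0.63, 0.48, 24, 0.58, 0.49, 15)
print(round(d, 2))  # 0.1, matching the reported d = 0.10
```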

Serving as a proxy for the lengthy objective measure of numerical skills, the subjective numeracy scale (SNS) was used to measure perceived numeracy. This 8-item test was found by Fagerlin et al. (2007) to correlate with objective numeracy tests, with possible scores ranging from 1 to 8 points per question. The average SNS score indicates an individual's perceived ability to comprehend and work with numbers, with higher scores indicating higher numerical ability. In the 10% survey, SNS scores ranged from 3.125 to 6 (M = 4.72, SD = 0.70). The SNS can be further divided into numeracy ability (the "ability to interpret numerical information" (Fagerlin et al., 2007, p. 4)) and numeracy preference "for the presentation of numerical information" (Fagerlin et al., 2007, p. 4). For the four questions measuring numeracy ability, values ranged from 2.25 to 6 (M = 4.49, SD = 1.03). Numeracy preference yielded slightly higher values of 3.25 to 6 (M = 4.9, SD = 0.3). A Pearson correlation between overall SNS scores and choosing to evacuate shows no relationship (r = -0.1, N = 39, p = 0.5), providing no support for the hypothesis that participants with higher numeracy skills would have higher evacuation thresholds for flood probability and flood height.

To measure the availability heuristic, participants were given one minute to type in as many details (e.g., heavy winds, damage to property) as they could remember about the most recent hurricane with flooding that they experienced. Each detail counted as a single response, yielding a range of 3 to 11 responses among participants. Contrary to the hypothesis, a higher availability heuristic does not appear to influence perceived probability of an event occurring, as there is no correlation between the availability heuristic and evacuation decisions (r = -0.1, N = 10, p = 0.8).

30% Probability Survey

Table 3.2 reveals the evacuation results among the different flood heights in the mild and severe scenarios as well as overall.

Table 3.2. Evacuation results for different flood heights with a 30% probability. Each value indicates the number of times participants chose to evacuate or not, with the corresponding percentages, for the mild and severe scenarios as well as overall.

                 Evacuated (% Yes)    Didn't Evacuate (% No)
Mild Blue        94  (20.09%)         374 (79.91%)
Severe Blue      112 (23.93%)         356 (76.07%)
All Blue         206 (22.01%)         730 (77.99%)
Mild Yellow      289 (61.75%)         179 (38.25%)
Severe Yellow    284 (60.68%)         184 (39.32%)
All Yellow       573 (61.22%)         363 (38.78%)
Mild Orange      439 (93.80%)         29  (6.20%)
Severe Orange    417 (89.10%)         51  (10.90%)
All Orange       856 (91.45%)         80  (8.55%)
Mild Red         463 (98.93%)         5   (1.07%)
Severe Red       459 (98.08%)         9   (1.92%)
All Red          922 (98.50%)         14  (1.50%)


Figure 3.8. Bar chart of the percent evacuated for each flood zone and severity (mild vs. severe) for 30% survey.

Inspection of Table 3.2 and Figure 3.8 shows that participants chose to evacuate at higher rates as the flood heights increased. Additionally, evacuation rates were again higher in the mild scenario for three of the four flood zones, the exception being the lowest blue zone (up to three feet of flooding).

As in the 10% survey, an independent samples t-test showed no difference between female (N = 25, M = 0.70, SD = 0.46) and male (N = 14, M = 0.66, SD = 0.48) evacuation decisions (t(28) = -0.68, p = 0.50, d = 0.09), again providing no support for the hypothesis that females would evacuate more than males. Subjective numeracy scale scores ranged from 2.25 to 5.75 of a possible 1 to 8 (M = 4.61, SD = 0.77). There is no correlation between a participant's subjective numeracy score and the decision to evacuate (N = 39, r = 0.07, p = 0.7), again providing no support for the hypothesis that participants with higher numeracy skills would have higher evacuation thresholds for flood probability and flood height.


Among the 12 participants with hurricane experience, the measured availability heuristic shows a marginally significant positive correlation with decisions to evacuate (N = 12, r = 0.56, p = 0.06). Perhaps those with a higher availability heuristic are more willing to evacuate once they reach slightly higher levels of certainty (i.e., 30%), as 10% is a very low probability on which to base such definite decisions. A low probability may also reinforce the sense that a storm poses little risk for those who experienced and remember a previous storm that also had a low probability and did not impact them.

Comparing Both Surveys

An independent samples t-test comparing the 10% survey (N = 39, M = 0.61, SD = 0.17) to the 30% survey (N = 39, M = 0.68, SD = 0.19) reveals a marginally significant difference, suggesting participants are more likely to evacuate at higher probabilities (t(75) = -1.79, p = 0.08, d = -0.39). Perhaps with a wider gap between probabilities, participants would evacuate even more as certainty of flooding increased. A paired samples t-test comparing the mild (N = 78, M = 0.66, SD = 0.17) and severe (N = 78, M = 0.63, SD = 0.20) hurricane scenarios shows a statistically significant difference, though contrary to the original hypothesis (t(77) = 2.83, p = 0.01, d = 0.41): evacuation was higher in the mild scenarios.

With most of the original hypotheses rejected, this interesting finding warrants further analysis to explore why participants would want to evacuate in a milder flood scenario. Upon inspecting the graphics, the mild scenarios do not have a band of red, the highest flood height of 9+ feet, continuous along the coast as a “buffer” between the ocean and the next flood height of 6-9 feet (orange), as shown in Figure 3.9.


Figure 3.9. The mild flood scenario graphic on the left shows the discontinuous band of red, the highest flood height of 9+ feet. The severe scenario on the right displays a red flood height continuous along the coastal area.

This break in the highest flood height results in the orange flood height “touching” the ocean. A paired samples t-test reveals that there is a statistically significant difference in evacuation (in percent) between the orange flood heights in the mild scenario (N = 78, M = 0.94, SD = 0.17) and the severe scenario (N = 78, M = 0.85, SD = 0.26); (t(77) = 3.997, p = 0.01, d = 0.49).
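Paired comparisons like the one above (the same participants rating mild vs. severe versions of a zone) can be sketched as follows. This is a minimal reconstruction assuming per-participant evacuation rates as arrays; computing Cohen's d on the per-participant differences is one common convention, and the text does not say which variant was used:

```python
import numpy as np
from scipy import stats

def paired_comparison(mild, severe):
    """Paired-samples t-test between each participant's evacuation rate
    in the mild vs. severe condition, with Cohen's d computed on the
    per-participant differences (an assumed convention)."""
    mild = np.asarray(mild, dtype=float)
    severe = np.asarray(severe, dtype=float)
    t, p = stats.ttest_rel(mild, severe)
    diff = mild - severe
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d
```

Because the test operates on within-participant differences, it controls for each participant's overall willingness to evacuate.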

Study 2

Reflecting on the results from the first study, why wouldn’t someone want to evacuate in a more severe hurricane flood scenario? It is possible that participants don’t take into account the entire context such as the strength of the hurricane as a whole, instead focusing on their specific location. This second study endeavors to disentangle the potential factors driving these evacuation decisions, specifically focusing on the possibility of bias due to the artifact of the orange flood height touching (and thereby being closer to) the ocean in the mild scenario.

Systematically examining these potential factors in a series of studies allows us to know what characteristics of the maps influence decision-making and eventually anticipate how people may respond in real-life scenarios of a similar nature.


Methods

Materials and Design

The same setup as the first study was used for the second, substituting updated graphics for the mild scenario maps. The red flood height category was redrawn in these maps to cover the entire expanse of the coast, serving as a sort of “buffer” between the ocean and the orange flood height. See Figure 3.10 below.

Figure 3.10. Old (left) and newly redrawn flood maps (right), enclosing the red zone to serve as a single continuous band along the ocean and buffer between the ocean and orange flood zone.


Participants

Akin to study one, participants were recruited from Amazon Mechanical Turk (AMT) with the same qualifications and limitations on location. Each person was again paid $2.00, and any participants who responded to the first study were removed from further analysis. On average, it took 15 minutes to complete the 10% survey and 13 minutes to complete the 30% survey. In total, there were 50 participants for the 10% survey and 45 participants for the 30% survey. Upon inspecting the map reading questions of each survey, nine participants were omitted from the 10% survey and five participants from the 30% survey. Some participants could not answer which flood color was associated with a specific range (4 from the 10% survey; 2 from the 30% survey), which flood color was higher (1 from the 10% survey), which flood color had the highest flood height (2 from the 10% survey), or what the flooding would be at a specific point (1 from the 10% survey; 2 from the 30% survey); some said yes to every question in the evacuation section (1 in each survey), and a handful said yes to all evacuation questions but 1 or 2 (3 from the 10% survey; 2 from the 30% survey). After omitting these participants due to a potential bias in their data, this resulted in 41 participants for the 10% survey (22 males and 19 females) and 40 participants for the 30% survey (21 males and 19 females). The average age of the participants was 34 (ranging from 20 to 59 years old) for the 10% survey and 32 (ranging from 20 to 54 years old) for the 30% survey.

Results

10% Probability Survey

The table below shows the participants’ evacuation decisions for the four flood height zones in both the mild and severe flood scenarios as well as overall.


Table 3.3. Evacuation results for different flood heights with a 10% probability. Each value indicates the number of times the participants chose to evacuate or not, with their respective percentages, for the mild and severe scenarios as well as overall.

                 Evacuated (% Yes)    Didn't Evacuate (% No)
Mild Blue        65 (13.21%)          427 (86.79%)
Severe Blue      75 (15.24%)          417 (84.76%)
All Blue         140 (14.23%)         844 (85.77%)
Mild Yellow      300 (60.98%)         192 (39.02%)
Severe Yellow    284 (57.72%)         208 (42.28%)
All Yellow       584 (59.35%)         400 (40.65%)
Mild Orange      466 (94.72%)         26 (5.28%)
Severe Orange    429 (87.20%)         63 (12.80%)
All Orange       895 (90.96%)         89 (9.04%)
Mild Red         482 (97.97%)         10 (2.03%)
Severe Red       480 (97.56%)         12 (2.44%)
All Red          962 (97.76%)         22 (2.24%)


Figure 3.11. Bar chart of the percent evacuated for each flood zone and severity (mild versus severe) for the 10% survey.

Both Table 3.3 and Figure 3.11 show similar results and patterns as the first study: evacuation rates increase moving to higher flood heights. Even with the updated maps, participants are still evacuating more in the mild orange flood height than in the severe orange flood height. In fact, in every flood height except blue, the farthest and lowest flood height, participants are evacuating more in the mild scenario than in the severe scenario.

With regards to gender, an independent samples t-test found again that there is no statistically significant difference between female (N = 19, M = 0.687, SD = 0.15) and male (N = 22, M = 0.63, SD = 0.17) responses for evacuation decisions in percent (t(38.98) = -1.17, p = 0.25, d = 0.38). The subjective numeracy scale results ranged from a value of 2.125 to 6 of a possible range of 1 to 8, with a mean of 4.85. As with the first study, there was no correlation between the SNS and evacuation responses (N = 41, r = 0.11, p = 0.5). Finally, only 8 of the 41 participants had previous hurricane experience, among which the availability heuristic ranged from 5 to 11 instances remembered, with a mean of 7.5. No correlation was found between the measured availability heuristic and the evacuation responses (N = 8, r = -0.36, p = 0.39).

30% Probability Survey

Similar to the 10% survey, Table 3.4 shows that participants are more likely to evacuate in the higher flood heights. As found in all other cases, overall participants are still more likely to evacuate in the mild scenario over the severe scenario (see Figure 3.12) within the same flood height categories.

Table 3.4. Evacuation results for different flood heights with a 30% probability. Each value indicates the number of times the participants chose to evacuate or not, with their respective percentages, for the mild and severe scenarios as well as overall.

                 Evacuated (% Yes)    Didn't Evacuate (% No)
Mild Blue        72 (15.00%)          408 (85.00%)
Severe Blue      65 (13.54%)          415 (86.46%)
All Blue         137 (14.27%)         823 (85.73%)
Mild Yellow      277 (57.71%)         203 (42.29%)
Severe Yellow    256 (53.33%)         224 (46.67%)
All Yellow       533 (55.52%)         427 (44.48%)
Mild Orange      440 (91.67%)         40 (8.33%)
Severe Orange    418 (87.08%)         62 (12.92%)
All Orange       858 (89.38%)         102 (10.63%)
Mild Red         462 (96.25%)         18 (3.75%)
Severe Red       471 (98.13%)         9 (1.88%)
All Red          933 (97.19%)         27 (2.81%)


Figure 3.12. Bar chart of the percent evacuated for each flood zone and severity (mild vs. severe) for 30% survey.

When comparing gender, as with the 10% survey, there is no statistically significant difference found in the independent samples t-test between female (N = 19, M = 0.65, SD = 0.2) and male (N = 21, M = 0.63, SD = 0.17) participant responses in percent (t(35.44) = -0.38, p = 0.71, d = 0.11). The subjective numeracy scale shows no correlation with decisions (N = 40, r = 0.14, p = 0.39), with SNS values ranging from 2.75 to 6 of a possible 1 to 8 and an average score of 5. Participants with hurricane flood experience had availability heuristic scores from 5 to 12, with a mean of 8.2. It doesn’t appear that having more certainty (i.e. 30% versus the 10% survey) impacted decisions to evacuate, as there is still no correlation between higher availability heuristics and evacuation (N = 11, r = 0.08, p = 0.81).

Comparing Both Surveys

In the first study, there was only a marginally significant difference between evacuation responses in the 10% and 30% surveys. In this study, the independent samples t-test resulted in no statistically significant difference between the 10% (N = 41, M = 0.66, SD = 0.16) and 30% (N = 40, M = 0.64, SD = 0.18) surveys (t(77.6) = 0.39, p = 0.7, d = 0.12). While the original focus of the research was on making decisions under uncertainty, the results show that in this case, the probabilities do not appear to make a difference. In an effort to explore what the data are actually showing, the study focuses on the similar results for the mild and severe flood scenarios. A paired samples t-test shows a statistically significant difference between the mild (N = 81, M = 0.66, SD = 0.16) and severe (N = 81, M = 0.64, SD = 0.18) flood scenarios (t(80) = 3.02, p = 0.01, d = 0.34). Despite changing the graphics, these results show no change in evacuation decisions between study 1 and study 2, with participants still evacuating more in mild flood scenarios than in more severe scenarios.

Study 3

In order to make sense of why participants still want to evacuate more often in mild flood conditions than in more severe scenarios, this third study looks at another potential factor driving their decision-making: whether distance to the flood source is the strategy used. In the mild flood scenario, the start (or edge) of every flood height zone is actually pushed closer to the ocean than in the more severe scenario, possibly influencing decisions.

Methods

Materials and Design

As the results from the first two studies revealed that the different probabilities did not strongly affect the decisions participants made about evacuating, this study used only one probability survey (10% chance), focusing instead on the distance to the flood source and the different flood height zones. Due to the angled nature and more complex curves of the original coastlines, the maps were redrawn to be straighter both vertically and horizontally (see Figure 3.13). When placing points on each map, having fairly straight coastlines allows the distance (the focus of this study) to be more standardized across all locations along the coast.

Figure 3.13. Revised map with the straighter coastline.

Points were placed at 6 equally spaced intervals moving farther away from the ocean to explore how distance may impact the results. The points were randomly placed along the axis parallel to the coastline at their specified distance. For example, maps with the vertical coastline had points placed randomly along the Y, or vertical, axis (thus parallel to the coast) at each specified distance from the ocean. For experimental repetition, the different maps were created with the ocean at all four orientations: north, west, east, and south (see Figure 3.14).
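The placement procedure can be sketched as follows for the vertical-coastline case. The coordinate frame, spacing, and map extent are illustrative assumptions, not values taken from the study:

```python
import numpy as np

def place_points(n_distances=6, spacing=1.0, extent=10.0, seed=0):
    """Sketch of the point-placement procedure for a vertical coastline
    at x = 0: one point per distance band, stepped away from the ocean,
    with the along-coast (y) coordinate drawn at random. `spacing`,
    `extent`, and the frame itself are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    xs = spacing * np.arange(1, n_distances + 1)     # distance from the ocean
    ys = rng.uniform(0.0, extent, size=n_distances)  # random along the coast
    return list(zip(xs, ys))
```

For the other three orientations, the same idea applies with the axes swapped or reflected.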


Figure 3.14. Map showing 6 points equally spaced apart, moving away from the ocean on each of the four map orientations. Actual points used in the study were randomly chosen and placed on the axis parallel to the coastline.

To test whether distance or the flood height category influenced decisions, two map conditions were created so that some of the map points are at the same distance from the ocean but placed in a different flood height category. This occurred at distances 1 and 4, with distance 1 being the closest to the ocean.


Figure 3.15. Flood maps showing the mild (top) and more severe (below) scenarios as well as the 6 points placed in equally spaced distances moving farther from the ocean (left to right). Points at distance 1 (closest to ocean) and distance 4 have locations in two different flood height categories (red and orange, and yellow and orange respectively).

The following table (see Table 3.5) shows the breakdown of points at each distance, their flood height category, how many points were placed for experimental repetition, and whether they occur in the mild or severe scenario. For each of the four map orientations, this resulted in 20 maps, for a final count of 80 maps in the study.

Table 3.5. Breakdown of the placement of the different points, including their distance from the ocean, the flood height category they fall within, the number of points randomly placed at each distance, and in which scenario they occur (mild or severe).

Distance   Flood Height Category   Number of Points for Repetition   Mild or Severe Scenario
1          Red                     3                                 3 Severe
1          Orange                  3                                 3 Mild
2          Orange                  2                                 1 Mild, 1 Severe
3          Orange                  2                                 1 Mild, 1 Severe
4          Orange                  3                                 3 Severe
4          Yellow                  3                                 3 Mild
5          Yellow                  1                                 1 Mild, 1 Severe
6          Yellow                  1                                 1 Mild, 1 Severe


Study 3 followed the same format as the first two studies, however using only one probability survey (10%) and changing the graphics to the revised maps for all evacuation questions.

Participants

Participants were again gathered from Amazon Mechanical Turk (AMT) with the same qualifications and limitations on location. Each person was again paid $2.00, and any participants who responded to study one or two were removed from further analysis. On average, it took 12 minutes to complete the 10% survey. In total, there were 57 participants. Upon inspecting the map reading questions of the survey, eighteen participants were omitted. Keeping in mind some individual overlap among the following participants, four participants could not answer which flood color was associated with a specific range, three could not tell which flood color was higher, one could not answer which flood color had the highest flood height, one incorrectly guessed what the flooding would be at a specific point, ten said yes to every question in the evacuation section, one said no to all, and two said yes to all evacuation questions but 1 or 2. These omissions resulted in 39 participants for the third study (23 males and 16 females). The average age of the participants was 32 years old, ranging from 19 to 55 years old.

Results

Gender again showed no difference between female (N = 16, M = 0.63, SD = 0.20) and male (N = 23, M = 0.69, SD = 0.23) evacuation decisions in percent in an independent samples t-test (t(35.12) = 0.77, p = 0.44, d = -0.28). Subjective numeracy skills ranged from a value of 2.5 to 6 of a possible range of 1 to 8, with a mean of 4.64. Following the first two studies, there was no correlation between the SNS and evacuation responses (N = 39, r = -0.25, p = 0.13). Twenty of the thirty-nine participants had previous hurricane experience. Of those twenty, their availability heuristics ranged from 4 to 16 details remembered, with a mean score of 8.

Contrary to the original hypothesis, the availability heuristic had a moderate negative correlation with the responses to evacuate (N = 20, r = -0.47, p = 0.04). While participants are somewhat less likely to evacuate the more they can remember from previous hurricanes, it is worth exploring in future studies, with more participants, how prior experiences, both positive and negative, may impact decisions. For example, people who previously chose not to evacuate for an approaching storm and had a negative experience such as severe flooding and threat to life may want to evacuate more often in future hurricanes. Conversely, someone who chose to evacuate and felt they made the right choice (i.e. a positive experience) may be likely to do so again in future instances.

Table 3.6 and Figure 3.16 show that participants did use distance from the flood source (the ocean) as a strategy for making evacuation decisions. The farther the point was from the ocean, the less they chose to evacuate.

Table 3.6. Evacuation results for each distance with a 10% probability. Distances ranged from 1 (closest to ocean) to 6 (farthest from ocean). Each value indicates the number of times the participants chose to evacuate or not with their respective percentages.

Distance   Evacuated (% Yes)   Didn't Evacuate (% No)
1          891 (95.19%)        45 (4.81%)
2          278 (89.10%)        34 (10.90%)
3          236 (75.64%)        76 (24.36%)
4          502 (53.63%)        434 (46.37%)
5          104 (33.33%)        208 (66.67%)
6          61 (19.55%)         251 (80.45%)

Figure 3.16. Bar chart of the percent evacuated for each distance ranging from 1 (closest to ocean) to 6 (farthest from ocean).

A repeated measures ANOVA with a Greenhouse-Geisser correction (applied due to a violation of the sphericity assumption) showed that the mean scores for percent evacuated based on distance were statistically significantly different (F(2.513, 95.510) = 72.894, p < 0.01). Thus, there is an overall significant difference in means. Post hoc tests with the Bonferroni correction show that all the differences between mean evacuation percentages are statistically significant (p < 0.01) except between distances 1 and 2 closest to the ocean (0.95 and 0.89 respectively, with a mean difference of 0.06, p = 0.35). Therefore, at distances 3 and beyond there is a statistically significant reduction in the percent evacuated, while at distances 1 and 2 rates remain high.
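The Bonferroni post hoc step can be sketched as follows, assuming the per-participant evacuation rates at each distance are available as arrays. This is an illustrative reconstruction, not the original analysis pipeline:

```python
from itertools import combinations
import numpy as np
from scipy import stats

def bonferroni_pairwise(data):
    """Post hoc pairwise paired t-tests with a Bonferroni correction on
    the p-values. `data` maps each distance (e.g. 1-6) to an array of
    per-participant evacuation rates at that distance."""
    pairs = list(combinations(sorted(data), 2))
    m = len(pairs)  # number of comparisons being corrected for
    adjusted = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(data[a], data[b])
        adjusted[(a, b)] = min(float(p) * m, 1.0)  # Bonferroni-adjusted p
    return adjusted
```

Dividing the significance threshold by the number of comparisons (alpha/m) is equivalent to multiplying each p-value by m, as done here.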

Participants clearly evacuate more the closer they are to the flood source. It is worth exploring how the different flood height categories, or colors, may impact their decisions. The following figure (see Figure 3.17) reveals the results for the participants’ evacuation decisions in each of the three tested flood height categories.

Figure 3.17. Bar chart of the percent evacuated for each flood height category by color (red, orange, and yellow respectively).

A repeated measures ANOVA with a Greenhouse-Geisser correction (applied due to a violation of the sphericity assumption) shows that the mean scores for percent evacuated based on flood height category were statistically significantly different (F(1.573, 59.767) = 98.047, p < 0.01). Thus, there is an overall significant difference in means. Post hoc tests with the Bonferroni correction show that all the differences between mean evacuation percentages are statistically significant (p < 0.01).


Therefore, there is a statistically significant reduction in the percent evacuated at each subsequent flood height category (i.e. red to orange to yellow).

Table 3.7 and Figure 3.18 explore how the different flood height category results relate with regards to distance.

Table 3.7. Evacuation results for different flood heights with a 10% probability. Each value indicates the number of times the participants chose to evacuate or not with their respective percentages.

Point Locations               Evacuated (% Yes)    Didn't Evacuate (% No)
Yellow   Overall              357 (32.69%)         735 (67.31%)
         Distance 4           192 (41.03%)         276 (58.97%)
         Distance 5           104 (33.33%)         208 (66.67%)
         Distance 6           61 (19.55%)          251 (80.45%)
Orange   Overall              1282 (82.18%)        278 (17.82%)
         Distance 1           445 (95.09%)         23 (4.91%)
         Distance 2           278 (89.10%)         34 (10.90%)
         Distance 3           236 (75.65%)         76 (24.36%)
         Distance 4           310 (66.24%)         158 (33.76%)
Red      Overall (Distance 1) 446 (95.30%)         22 (4.70%)


Figure 3.18. Bar chart of the percent evacuated for each distance ranging from 1 (closest to ocean) to 6 (farthest from ocean) and flood height category by color (red, orange, and yellow respectively).

Generally, participants appear to use distance as the major strategy for making decisions; even in Figure 3.18, there are differences within each flood height category as distance increases. To ascertain whether there is a difference between the different flood height colors at the same distance (this occurs at both distance 1 and distance 4), two paired samples t-tests were run.

When comparing the red flood height at distance 1 (N = 39, M = 0.95, SD = 0.10) to the orange flood height at distance 1 (N = 39, M = 0.95, SD = 0.17), there is no statistically significant difference in the paired t-test output (t(38) = -0.09, p = 0.9, d = 0.00). No matter the flood height, participants evacuate at the same percentage (95%) when close to the flood source. Looking at distance 4, the orange flood height color (N = 39, M = 0.66, SD = 0.39) shows a statistically significant difference from the yellow flood height color (N = 39, M = 0.41, SD = 0.41) in their evacuation results in a paired samples t-test (t(38) = 4.83, p = 0.01, d = 0.77). See Figure 3.19 for an example of the red and orange flood height zones at distance 1 and the orange and yellow flood height zones at distance 4.

Figure 3.19. Maps showing different flood height zones at distance 1 (top, red and orange respectively) and distance 4 (bottom, orange and yellow respectively).


Discussion

Study one found that gender and numeracy skills do not impact decisions to evacuate. The measured availability heuristics for prior hurricane experience with flooding only show a marginally significant, moderately positive correlation with evacuation in the 30% survey. It may be possible that these participants with a higher availability heuristic only tend to overestimate the probability, or likelihood, of the flooding when the probability itself is more certain (i.e. 30%, not 10%). Future work should investigate the potential connection between availability heuristics and the probability of flooding, as well as looking into both positive and negative prior experiences. Regarding the probability itself, there was only a marginally significant difference in an independent samples t-test between the 10% and 30% surveys, with an average evacuation of 61% and 68%, respectively. It may be that participants would respond differently with a larger gap between the probabilities, or that they simply overlook or purposely ignore uncertainty altogether and focus more on other aspects to make decisions, such as the flood height category or distance to the flood source. Alternatively, in a flooding scenario some participants may believe that even a small risk is worth evacuating for despite any inconvenience, thus disregarding the differences between the probabilities. In future work, explicitly visualizing uncertainty on the map so that it is not as easy to overlook or ignore can help researchers better understand how uncertainty influences the decision-making process.

Beyond uncertainty, the results indicate that participants do pay attention to the flood height category, evacuating most in the red flood zone closest to the ocean and least in the blue zone found farthest from the ocean in both the 10% and 30% probability surveys. A noteworthy finding in the first study is that there were higher evacuation rates overall in the mild flood scenario than in the severe scenario in both of the first two surveys (10% and 30% probability). A paired samples t-test confirmed this finding, with the orange flood height zone in particular having higher rates in the mild scenario. This led to the exploration of the maps and the discovery of an artifact of using real data, albeit slightly altered in range for each flood height zone to create artificial mild and severe scenarios: the orange zone touched the ocean, breaking apart the nearest red flood height zone.

In study two, the maps in the mild scenario were redrawn to close the gap in the highest, red flood zone so that only the highest flood zone occurred next to the ocean. With the same setup, the results revealed that again gender, numeracy skills, and in this instance, the availability heuristics and the probability of the flooding did not impact decisions to evacuate. With the original intention of this research being an exploration into how uncertainty may impact evacuation, the main focus shifted, as the probability simply wasn’t a factor based on the statistical results. The data did in fact show that participants again evacuated more in the higher flood zones. Additionally, despite changing the maps, evacuation was still higher overall in the mild scenarios. These findings motivated the setup for the final study, attempting to disentangle how flood height zone and distance may be correlated to the evacuation decisions participants make.

The final study utilized new maps that examined the impact of equally spaced points at distances moving farther from the ocean. With each of these points falling within the three major flood zones that showed differences in decisions (red, orange, and yellow respectively), the results explored how distance or flood height may change the outcomes. Again, gender and numeracy skills proved to have no influence on decisions. Unlike studies 1 and 2, the availability heuristic had the opposite finding, with a moderate negative correlation to evacuation. As there are potentially many factors in this study that may interact with the availability heuristic, it is worth exploring in the future what impact, if any, the availability heuristic may have on evacuation. The statistically significant results from the study do in fact show that distance and, to some degree, the flood height zone impact decision-making. Each distance moving farther from the ocean resulted in fewer participants choosing to evacuate. The results from a repeated-measures ANOVA showed a statistically significant difference between all the means except between distances 1 and 2, where evacuation percentages were both very high. While the flood height categories tested (red, orange, and yellow) do produce the same outcome as the distance results, this is potentially because these flood heights also occur in progressively farther locations from the ocean (red to orange to yellow). Interestingly, however, there are differences in evacuation within each flood height category, showing that the zone may be a secondary decision-making factor. For instance, some points at distances 1, 2, 3, and 4 all fall within the orange 6-9 foot flood zone, yet points at each successive distance within this zone have progressively lower evacuation rates. This is the same outcome for distances 4, 5, and 6, all within the yellow zone. Furthermore, at distance 1, there is no difference between the red and orange zone results. This indicates that distance supersedes flood height when directly next to the flood source (i.e. distance 1). However, at distance 4, the orange zone (6-9 feet of flooding) does in fact have significantly higher evacuations than the yellow flood zone (3-6 feet of flooding).

Looking across all these studies, the results appear to show that participants make evacuation decisions first based on the distance to the flood source; once farther from this source, they next use the flood height zone as a strategy in choosing whether to evacuate.

Outlook

These three studies have yielded some findings; however, it is important to note a potential limitation in the research. Traditional laboratory experiments (e.g. studies run within classrooms and other offline recruiting approaches) can be difficult, ineffective, and inefficient with respect to recruitment, scheduling, and costs, both monetary and time (Schmidt, 2010). Certainly, not using students exclusively (who may not be homeowners nor have experience with hurricane flooding) is a nice step away from the often-used traditional pool in cartographic experiments.

Crowdsourcing can help human subjects researchers access the massive online population more easily, avoiding many of the difficulties of these traditional studies. As Schmidt notes, “These factors combine to create an environment where human subjects research can potentially occur orders of magnitude more quickly and cheaply than has previously been possible” (p. 1). However, Schmidt outlines some potential issues in data quality due to motivational factors. Participants may have different motivations for participating, resulting in some variation in how they answer questions or the attention they give to a particular task. Crump, McDonnell, and Gureckis (2013) discuss issues that are hard or impossible to control online. These include the “presence of distractions, problems with display, pausing for long periods in the middle of the task, and misreading or misunderstanding of instructions” (p. 1). Furthermore, there is still a digital divide in the world, with large populations who do not have access to a computer or do not know how to use one well. Knowing the type of population willing to serve as participants on Amazon Mechanical Turk (AMT) is therefore important. Research from Ipeirotis (2010) compared AMT workers (turkers) to the general population and found that turkers were generally younger, more often female, of lower income, and from smaller families than the general population. Follow-up studies are important to ascertain how this may impact the results of this research.

Another limitation in this research lies in the number of participants who actually have previous experience with hurricane flooding. Of those with experience, there were 22 people of a total 78 in study one (approximately 10 per survey), 19 of a total 81 in study two (approximately 10 per survey), and 20 of a total 39 in study three, resulting in 61 people of 198 overall. With such small numbers in each study, it is hard to find any statistical significance for this small sub-population. Future research can better explore only participants with experience for more realistic results.

Though the original intent of this research was foremost to explore how people make decisions under uncertainty when interacting with hurricane flood maps, the findings of the studies showed that, at least in this scenario, uncertainty did not drive the strategy that participants used to make their evacuation decisions. Rather than continuing to pursue uncertainty, studies two and three began to follow and examine the results as they unfolded. In this sense, another interesting aspect was found: participants focus mainly on the distance to the flood source, or ocean, ignoring the severity of the storm and the probability of flooding (or uncertainty). Once farther from the shore, participants start to make decisions based on the flood height levels. This outcome leads to follow-up research questions that appear across many areas of uncertainty research. Do people simply ignore uncertainty in the current National Hurricane Center storm surge maps? Is uncertainty hard to comprehend, leading people to avoid using it to aid their decision-making? Would better visualizations that explicitly show uncertainty help users overcome this issue? Or is it simply not an important factor when people make decisions?

Disentangling the role uncertainty plays, if any, in decision-making for hurricane flooding will continue to be an important research agenda so that people such as forecasters, emergency managers, and broadcast meteorologists can anticipate likely evacuation responses to hurricanes.


References

Baumann, D. D., & Sims, J. H. (1978). Flood Insurance: Some Determinants of Adoption. Economic Geography, 54(3), 189-196.
Broad, K., Leiserowitz, A., Weinkle, J., & Steketee, M. (2007). Misinterpretations of the "Cone of Uncertainty" in Florida during the 2004 Hurricane Season. Bulletin of the American Meteorological Society, 88(5), 651-667. doi:10.1175/BAMS-88-5-651
Center, N. H. (2014). National Hurricane Center to issue new storm surge map. Retrieved from http://www.nhc.noaa.gov/news/20140131_pa_stormSurgeGraphic.pdf
Center, N. H. (2016). Potential Storm Surge Flooding Tips for Emergency Managers. Retrieved from http://www.nhc.noaa.gov/surge/PotentialStormSurgeTips-em.pdf
Crump, M. J. C., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. PLoS ONE, 8(3), 1-18.
Doswell III, C. A. (2004). Weather Forecasting by Humans-Heuristics and Decision Making. Weather and Forecasting, 19(6), 1115-1126. doi:10.1175/WAF-821.1
Fagerlin, A., Zikmund-Fisher, B. J., Ubel, P. A., Jankovic, A., Derry, H. A., & Smith, D. M. (2007). Measuring Numeracy without a Math Test: Development of the Subjective Numeracy Scale. Medical Decision Making, 27(5), 672-680.
Hirschberg, P. A., Abrams, E., Bleistein, A., Bua, W., Monache, L. D., Dulong, T. W., . . . Stuart, N. (2011). A Weather and Climate Enterprise Strategic Implementation Plan for Generating and Communicating Forecast Uncertainty Information. Bulletin of the American Meteorological Society, 92(12), 1651-1666.
Ipeirotis, P. (2010). Demographics of Mechanical Turk. New York University Working Paper.
Jackson, E. L. (1981). Response to earthquake hazard. Environment and Behavior, 13(4), 387-416.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, United Kingdom: Cambridge University Press.
Keller, C., Siegrist, M., & Gutscher, H. (2006). The Role of the Affect and Availability Heuristics in Risk Communication. Risk Analysis, 26(3), 631-639.
O'Connor, R. E., Yarnal, B., Dow, K., Jocoy, C. L., & Carbone, G. J. (2005). Feeling at Risk Matters: Water Managers and the Decision to Use Forecasts. Risk Analysis, 25(5), 1265-1275.
Orlove, B. S., Broad, K., & Meyer, R. (2010). Assessing the Effectiveness of the Cone of Probability as a Visual Means of Communicating Scientific Forecasts. Paper presented at the American Geophysical Union, San Francisco, California.
Pang, A. T., Wittenbrink, C. M., & Lodha, S. K. (1997). Approaches to Uncertainty Visualization. The Visual Computer, 13(8), 370-390.
Rappaport, E. N., Franklin, J. L., Avila, L. A., Baig, S. R., Beven II, J. L., Blake, E. S., . . . Tribble, A. N. (2009). Advances and Challenges at the National Hurricane Center. Weather and Forecasting, 24(2), 395-419. doi:10.1175/2008WAF2222128.1
Schmidt, L. A. (2010). Crowdsourcing for Human Subjects Research. Paper presented at CrowdConf 2010, San Francisco, CA.
Service, N. W. (2013). Definition of the NHC Track Forecast Cone. Retrieved from http://www.nhc.noaa.gov/aboutcone.shtml
Severtson, D. J., & Myers, J. D. (2013). The Influence of Uncertain Map Features on Risk Beliefs and Perceived Ambiguity for Maps of Modeled Cancer Risk from Air Pollution. Risk Analysis, 33(5), 818-837.
Sherman-Morris, K., Antonelli, K. B., & Williams, C. C. (2015). Measuring the Effectiveness of the Graphical Communication of Hurricane Storm Surge Threat. Weather, Climate, and Society, 7(1), 69-82. doi:10.1175/wcas-d-13-00073.1
Siegrist, M., & Gutscher, H. (2006). Flooding Risks: A Comparison of Lay People's Perception and Expert's Assessments in Switzerland. Risk Analysis, 26(4), 971-979.
Slovic, P. (1999). Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield. Risk Analysis, 19(4), 689-701.
Stephens, E. M., Edwards, T. L., & Demeritt, D. (2012). Communicating probabilistic information from climate model ensembles - lessons from numerical weather prediction. Wiley Interdisciplinary Reviews: Climate Change, 3(5), 409-426.
Tversky, A., & Kahneman, D. (1973). Availability: A Heuristic for Judging Frequency and Probability. Cognitive Psychology, 5(2), 207-232. doi:10.1016/0010-0285(73)90033-9
Weinstein, N. D. (1989). Effects of Personal Experience on Self-Protective Behavior. Psychological Bulletin, 105, 31-50.
Zaleskiewicz, T., Piskorz, Z., & Borkowska, A. (2002). Fear or Money? Decisions on Insuring Oneself Against Flood. Risk Decision and Policy, 7(3), 221-233.
Zikmund-Fisher, B. J., Ubel, P. A., Smith, D. M., Derry, H. A., McClure, J. B., Stark, A., . . . Fagerlin, A. (2008). Communicating side effect risks in a tamoxifen prophylaxis decision aid: The debiasing influence of pictographs. Patient Education and Counseling, 73, 209-214.

Overall Conclusion and Outlook

Deitrick and Edsall (2008) observe that the term uncertainty is contested across multiple disciplines and is often defined through related terms: data quality, accuracy, precision, error, vagueness, ambiguity, and so on. Rather than attempting to further define the various types of geospatial uncertainty, as many researchers have done, the first paper in this dissertation focuses on the domains that comprise the research field of visualizing these geospatial uncertainties. The relationships among the domains are explored, and a visual summary of this typology is presented. When applied to a single research article, the visual summary depicts at a glance which research domains are covered. The second paper shows the utility of the visual summary by incorporating it in an introductory article to summarize three research articles in a special issue on visually-supported reasoning under uncertainty. The final paper employs a systematic approach to evaluate storm surge flood maps and how map characteristics, prior experience, and individual differences may impact evacuation decisions made under uncertainty.

While these three papers fill current gaps in uncertainty visualization research, another major area not discussed in them is the recent surge of big data research. With major advances in technology, we are amassing digital data faster than we can determine what to do with it, including analyzing it to derive new knowledge, information, and meaning about the world in which we live. The data is often too complex or too large to make initial sense of. To cope with the sheer volume of big data, pre-processing (e.g., through data reduction, rounding, or aggregation) may add further uncertainty and quality problems on top of the uncertainties already present in the raw data (Keim, Mansmann, Schneidewind, & Ziegler, 2006). Given the other characteristics of big data (variety, velocity, complexity), it is difficult to compute or encode data uncertainty because of its complexity or real-time nature. Furthermore, uncertainty may even change during the various stages of the data-visualization process (data collection, pre-processing, visualization, etc.). To visually represent uncertainty, one must be able to measure or assess it and encode it (MacEachren et al., 2005a); if research cannot identify these complex uncertainties, we cannot accurately inform users of the uncertainty in their data. The following points outline some challenges and open research areas in uncertainty visualization in light of big data:

1. Big data can be extremely complex in its uncertainty. Consider research by Vieweg, Hughes, Starbird, and Palin (2010) that found uncertainty in the location of tweets because of relative references (e.g. "we are on the western central edge of town, so we are a fair distance from any water for now" (p. 1084)). This type of uncertainty across massive amounts of data poses an extreme challenge for researchers attempting to mitigate some of the uncertainty and find ways to visualize it. In analyzing large numbers of tweets or other social text, individual experiences, meanings of words, and contextual scenarios may all play a role in the use of language, further complicating how researchers must deal with the uncertain nature of this type of big data.

2. Methodological challenges in both uncertainty and visualization arise due to the immense scale and complexity of big data. Uncertainty is often multifaceted, and it arises and changes at different points in the process from raw data to visualization. As MacEachren et al. (2005a) outline, methods must be developed to visually represent multivariate uncertainty. However, due to the difficulty of visualizing multivariate uncertainty, most visualizations depict only a single type of uncertainty. Consider a scenario described by Watkins (2000) in which a researcher is 90% confident (or 10% uncertain) of the latitude and longitude coordinates of an object but only 70% confident (or 30% uncertain) of the object's elevation. A cartographer who decided to represent only one element of uncertainty on a map could average the confidences (i.e. [90 + 90 + 70]/[100 + 100 + 100]), resulting in 83% confidence, or 17% uncertainty, in the location. Such an average masks the fact that the elevation is far less certain than the horizontal coordinates; representing all aspects of geospatial data uncertainty can therefore provide users with a better understanding of the underlying data.

3. The majority of current uncertainty visualizations employ techniques for discrete objects in space. However, many types of geospatial data are continuous in nature, and new techniques must be researched to effectively display uncertainty for continuous phenomena. Continuous visualization of uncertainty may have been largely ignored to this point because these types of visualizations are much more challenging than discrete representations (Pang et al., 1997). For example, the National Hurricane Center maps from the third paper show hurricanes as discrete objects with distinct edges despite their continuous nature. Finding better ways to represent varying geospatial uncertainty beyond the current approaches will help people make more informed decisions and better understand the data.

4. Computation of big data is a major challenge for visual analytics (Choo & Park, 2013). Especially in real-time visualization, some datasets are simply too large to run analyses on, even with high-performance computing alleviating some of the issue. Users therefore cannot readily identify uncertainty in data that cannot be processed in a reasonable amount of time. Fisher, Popov, Drucker, and Schraefel (2012) tackle this issue by performing incremental analyses and visualizing the uncertainty of estimates computed on the incremental data.

5. With the emergence of fields like visual analytics, geovisual analytics, and especially big data, research has slowly shifted away from a purely cartographic paradigm of communication (i.e., the cartographer communicating geographic information to the map user) toward one of data exploration. There is often no single message to be communicated, but rather a large dataset to be explored by analysts and users to find new patterns or insights in the data. Current visualizations should therefore facilitate exploratory learning. However, if data patterns are not transparent (because they need to be explored), how will this impact the communication (or lack thereof) of uncertainty to users?

6. New evaluation methodologies need to be developed, extended, and employed to ascertain the level of understanding users have when interacting with big data uncertainty visualizations. Much current research on uncertainty visualization relies on evaluations that, for example, have users perform a simple decision-making task without support for more exploratory learning within an uncertainty visualization, and that do not identify whether users truly comprehend the uncertainty.
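The averaging scenario in point 2 can be sketched in a few lines of Python. This is a hypothetical illustration: the confidence values mirror Watkins's example, but the data structure and variable names are assumptions for the sketch.

```python
# Per-dimension confidence (as fractions) for an object's position,
# following the Watkins (2000) scenario: horizontal coordinates are
# well constrained, elevation much less so.
confidences = {"latitude": 0.90, "longitude": 0.90, "elevation": 0.70}

# Collapsing to a single number averages the confidences ...
overall = sum(confidences.values()) / len(confidences)
print(f"overall confidence: {overall:.0%}")  # prints "overall confidence: 83%"

# ... which hides that elevation is 20 percentage points less certain
# than the horizontal coordinates.
spread = max(confidences.values()) - min(confidences.values())
print(f"masked spread: {spread:.0%}")        # prints "masked spread: 20%"
```

The point of the sketch is the information loss: the single 83% figure gives no hint that one dimension is substantially less reliable than the others.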
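The incremental-estimate idea in point 4 can also be sketched. This is a toy illustration in the spirit of Fisher et al. (2012), not their actual system: the synthetic data, the chunk size, and the use of Welford's online accumulation are all assumptions made for the sketch.

```python
import math
import random

# Process a large dataset in chunks; after each chunk, report the running
# mean together with an approximate 95% confidence interval, so an analyst
# sees an estimate (and its uncertainty) long before all data is processed.
random.seed(42)
data = [random.gauss(10.0, 2.0) for _ in range(10_000)]  # stand-in "big" data

n, mean, m2 = 0, 0.0, 0.0  # Welford's online accumulators
widths = []                # interval half-width after each chunk
for start in range(0, len(data), 2_000):
    for x in data[start:start + 2_000]:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    stderr = math.sqrt(m2 / (n - 1)) / math.sqrt(n)
    half_width = 1.96 * stderr  # normal approximation
    widths.append(half_width)
    print(f"n={n:6d}  mean={mean:6.3f} +/- {half_width:.3f}")
```

Because the interval narrows as more data arrives, an analyst can stop the computation early once the reported uncertainty is acceptable, rather than waiting for the full dataset.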

Despite geospatial uncertainty visualization having been a research area for decades, maps and research in GIScience still largely ignore uncertainty and its visualization. Thankfully, uncertainty has in recent years been revitalized as a major research agenda, and more researchers are acknowledging it. Much work remains in understanding its role in visualization and the best ways to present it to map users so that it is both understandable and useful.

104

References

Aerts, J. C. J. H., Clarke, K., & Keuper, A. (2003). Testing Popular Visualization Techniques for Representing Model Uncertainty. Cartography and Geographic Information Science, 30(3), 249-261.
Aerts, J. C. J. H., Goodchild, M. F., & Heuvelink, G. B. M. (2003). Accounting for Spatial Uncertainty in Optimization with Spatial Decision Support Systems. Transactions in GIS, 7(2), 211-230.
Allendes Osorio, R., & Brodlie, K. W. (2008). Contouring with uncertainty. Paper presented at Theory and Practice of Computer Graphics 2008.
Andrienko, G., Andrienko, N., & Wrobel, S. (2007). Visual analytics tools for analysis of movement data. SIGKDD Explorations Newsletter, 9(2), 38-46. doi:10.1145/1345448.1345455
Baumann, D. D., & Sims, J. H. (1978). Flood Insurance: Some Determinants of Adoption. Economic Geography, 54(3), 189-196.
Bertin, J. (1973). Sémiologie graphique: Les diagrammes - Les réseaux - Les cartes.
Bisantz, A. M., Cao, D., Jenkins, M., & Pennathur, P. R. (2011). Comparing Uncertainty Visualizations for a Dynamic Decision-Making Task. Journal of Cognitive Engineering and Decision Making, 5(3), 277-293.
Bisantz, A. M., Marsiglio, S. S., & Munch, J. (2005). Displaying Uncertainty: Investigating the Effects of Display Format and Specificity. Human Factors: The Journal of the Human Factors and Ergonomics Society, 47(4), 777-796.
Bisantz, A. M., Stone, R. T., Pfauta, J., Fouse, A., Farry, M., Roth, E., . . . Thomas, G. (2009). Visual Representations of Meta-Information. Journal of Cognitive Engineering and Decision Making, 3(1), 67-91.
Boller, R. A., Braun, S. A., Miles, J., & Laidlaw, D. H. (2010). Application of uncertainty visualization methods to meteorological trajectories. Earth Science Informatics, 3(1-2), 119-126.
Börner, K., Chen, C., & Boyack, K. W. (2003). Visualizing knowledge domains. Annual Review of Information Science and Technology, 37(1), 179-255.
Börner, K., & Theriault, T. (2012). Places and spaces: Mapping science: Ind.
Boukhelifa, N., Bezerianos, A., Isenberg, T., & Fekete, J.-D. (2012). Evaluating sketchiness as a visual variable for the depiction of qualitative uncertainty. IEEE Transactions on Visualization and Computer Graphics, 18(12), 2769-2778.
Broad, K., Leiserowitz, A., Weinkle, J., & Steketee, M. (2007). Misinterpretations of the "Cone of Uncertainty" in Florida during the 2004 Hurricane Season. Bulletin of the American Meteorological Society, 88(5), 651-667. doi:10.1175/BAMS-88-5-651
Brodlie, K., Osorio, R. A., & Lopes, A. (2012). A review of uncertainty in data visualization. Expanding the Frontiers of Visual Analytics and Visualization (pp. 81-109): Springer.
Brus, J., Voženílek, V., & Popelka, S. (2013). An Assessment of Quantitative Uncertainty Visualization Methods for Interpolated Meteorological Data. Computational Science and Its Applications - ICCSA 2013 (Vol. 7974, pp. 166-178): Springer Berlin Heidelberg.
Center, N. H. (2014). National Hurricane Center to issue new storm surge map. Retrieved from http://www.nhc.noaa.gov/news/20140131_pa_stormSurgeGraphic.pdf
Center, N. H. (2016). Potential Storm Surge Flooding Tips for Emergency Managers. Retrieved from http://www.nhc.noaa.gov/surge/PotentialStormSurgeTips-em.pdf
Chilès, J.-P., & Delfiner, P. (2009). Geostatistics: Modeling Spatial Uncertainty (Vol. 497): John Wiley & Sons, Inc.
Choo, J., & Park, H. (2013). Customizing Computational Methods for Visual Analytics with Big Data. IEEE Computer Graphics and Applications, 33(4), 22-28.
Cobo, M. J., López-Herrera, A. G., Herrera-Viedma, E., & Herrera, F. (2011). An approach for detecting, quantifying, and visualizing the evolution of a research field: A practical application to the fuzzy sets theory field. Journal of Informetrics, 5(1), 146-166.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.
Couclelis, H. (2003). The certainty of uncertainty: GIS and the limits of geographic knowledge. Transactions in GIS, 7(2), 165-175.
Crump, M. J. C., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. PLoS ONE, 8(3), 1-18.
Deitrick, S. (2012). Evaluating implicit visualization of geographic uncertainty for public policy decision support. Proceedings AutoCarto 2012, 16-18.
Deitrick, S. (2013). Uncertain Decisions and Continuous Spaces: Outcomes Spaces and Uncertainty Visualization. Understanding Different Geographies (pp. 117-134): Springer Berlin Heidelberg.
Deitrick, S., & Edsall, R. (2006). The Influence of Uncertainty Visualization on Decision Making: An Empirical Evaluation. In A. Riedl, W. Kainz, & G. A. Elmes (Eds.), Progress in Spatial Data Handling: 12th International Symposium on Spatial Data Handling (pp. 719-738). Berlin, Heidelberg: Springer Berlin Heidelberg.
Deitrick, S., & Edsall, R. (2008). Making Uncertainty Usable: Approaches for Visualizing Uncertainty Information. In M. Dodge, M. McDerby, & M. Turner (Eds.), Geographic Visualization: Concepts, Tools and Applications (pp. 277-291). Chichester, UK: John Wiley & Sons, Ltd.

Dooley, M. A., & Lavin, S. J. (2007). Visualizing method-produced uncertainty in isometric mapping. Cartographic Perspectives, (56), 17-36.
Doswell III, C. A. (2004). Weather Forecasting by Humans-Heuristics and Decision Making. Weather and Forecasting, 19(6), 1115-1126. doi:10.1175/WAF-821.1
Duckham, M., Mason, K., Stell, J., & Worboys, M. (2001). A formal approach to imperfection in geographic information. Futurescapes, 25(1), 89-103.
Elmqvist, N., & Tsigas, P. (2007). CiteWiz: a tool for the visualization of scientific citation networks. Information Visualization, 6(3), 215-232.
Fagerlin, A., Zikmund-Fisher, B. J., Ubel, P. A., Jankovic, A., Derry, H. A., & Smith, D. M. (2007). Measuring Numeracy without a Math Test: Development of the Subjective Numeracy Scale. Medical Decision Making, 27(5), 672-680.
Finger, R., & Bisantz, A. M. (2002). Utilizing graphical formats to convey uncertainty in a decision-making task. Theoretical Issues in Ergonomics Science, 3(1), 1-25.
Fisher, D., Popov, I., Drucker, S. M., & Schraefel, M. (2012). Trust Me, I'm Partially Right: Incremental Visualization Lets Analysts Explore Large Datasets Faster. Paper presented at the Conference on Human Factors in Computing Systems, Austin, Texas.
Fisher, P. F. (1999). Models of uncertainty in spatial data. Geographical Information Systems, 1, 191-205.
Friedman, A. (2014). The relationship between research method and visual display: a study of conference proceedings in the field of knowledge organization. Information Research: An International Electronic Journal, 19(4), n4.
Gahegan, M., & Ehlers, M. (2000). A Framework for the Modelling of Uncertainty Between Remote Sensing and Geographic Information Systems. ISPRS Journal of Photogrammetry and Remote Sensing, 55(3), 176-188.
Gansner, E. R., Hu, Y., & Kobourov, S. G. (2009). Gmap: Drawing graphs as maps. Paper presented at the International Symposium on Graph Drawing.
Gerharz, L. E., & Pebesma, E. J. (2009). Usability of Interactive and Non-Interactive. Geoinformatik 2009 Konferenzband, 223-230.
Gershon, N. (1998). Visualization of an Imperfect World. IEEE Computer Graphics and Applications, 18(4), 43-45.
Gusfield, D. (1997). Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology: Cambridge University Press.
Harley, J. B. (1989). Deconstructing the Map. Cartographica, 26(2), 1-20.
Herbertson, A. J. (1905). The Major Natural Regions: An Essay in Systematic Geography. The Geographical Journal, 25(3), 300-310.
Hirschberg, P. A., Abrams, E., Bleistein, A., Bua, W., Monache, L. D., Dulong, T. W., . . . Stuart, N. (2011). A Weather and Climate Enterprise Strategic Implementation Plan for Generating and Communicating Forecast Uncertainty Information. Bulletin of the American Meteorological Society, 92(12), 1651-1666.
Hope, S., & Hunter, G. J. (2007). Testing the effects of positional uncertainty on spatial decision-making. International Journal of Geographical Information Science, 21(6), 645-665.
Howard, D., & MacEachren, A. M. (1996). Interface design for geographic visualization: Tools for representing reliability. Cartography and Geographic Information Systems, 23(2), 59-77.
Hunter, G. J., & Goodchild, M. F. (1993). Managing Uncertainty in Spatial Databases: Putting Theory into Practice. Journal of Urban and Regional Information Systems Association, 5(2), 52-62.

Ipeirotis, P. (2010). Demographics of Mechanical Turk. New York University Working Paper.
Jackson, E. L. (1981). Response to earthquake hazard. Environment and Behavior, 13(4), 387-416.
Johnson, C. R., & Sanderson, A. R. (2003). A Next Step: Visualizing Errors and Uncertainty. IEEE Computer Graphics and Applications, 23(5), 6-10.
Journel, A. G. (1996). Modelling uncertainty and spatial dependence: Stochastic imaging. International Journal of Geographical Information Systems, 10(5), 517-522.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, United Kingdom: Cambridge University Press.
Keim, D. A., Mansmann, F., Schneidewind, J., & Ziegler, H. (2006). Challenges in Visual Data Analysis. Paper presented at Information Visualization, London, England.
Keller, C., Siegrist, M., & Gutscher, H. (2006). The Role of the Affect and Availability Heuristics in Risk Communication. Risk Analysis, 26(3), 631-639.
Kinkeldey, C., MacEachren, A. M., Riveiro, M., & Schiewe, J. (2015). Evaluating the effect of visually represented geodata uncertainty on decision-making: systematic review, lessons learned, and recommendations. Cartography and Geographic Information Science, 1-21.
Kinkeldey, C., MacEachren, A. M., & Schiewe, J. (2014). How to assess visual communication of uncertainty? A systematic review of geospatial uncertainty visualisation user studies. The Cartographic Journal, 51(4), 372-386.
Kinkeldey, C., Mason, J., Klippel, A., & Schiewe, J. (2014). Evaluation of noise annotation lines: using noise to represent thematic uncertainty in maps. Cartography and Geographic Information Science, 41(5), 430-439.
Kubíček, P., & Šašinka, Č. (2011). Thematic uncertainty visualization usability - comparison of basic methods. Annals of GIS, 17(4), 253-263.
Larkin, J. H., & Simon, H. A. (1987). Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cognitive Science, 11(1), 65-100.
Leitner, M., & Buttenfield, B. P. (2000). Guidelines for the Display of Attribute Certainty. Cartography and Geographic Information Science, 27(1), 3-14.
Lodha, S. K., Charaniya, A. P., Faaland, N. M., & Ramalingam, S. (2002). Visualization of Spatio-Temporal GPS Uncertainty within a GIS Environment. Paper presented at the SPIE Conference, Orlando, Florida.
MacEachren, A. M. (1992). Visualizing uncertain information. Cartographic Perspectives, 13(3), 10-19.
MacEachren, A. M. (2015). Visual analytics and uncertainty: It's not about the data. Paper presented at the EuroVis Workshop on Visual Analytics, Cagliari, Italy.
MacEachren, A. M., Robinson, A., Hopper, S., Gardner, S., Murray, R., Gahegan, M., & Hetzler, E. (2005a). Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know. Cartography and Geographic Information Science, 32(3), 139-160.
MacEachren, A. M., Robinson, A., Hopper, S., Gardner, S., Murray, R., Gahegan, M., & Hetzler, E. (2005b). Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know. Cartography and Geographic Information Science, 32(3), 139-160.
MacEachren, A. M., Roth, R. E., O'Brien, J., Li, B., Swingley, D., & Gahegan, M. (2012). Visual Semiotics & Uncertainty Visualization: An Empirical Study. IEEE Transactions on Visualization and Computer Graphics, 18(12), 2496-2505.
Markham, B. (2012). West with the Night: Open Road Media.

Mason, J. S., Retchless, D., & Klippel, A. (2017). Domains of Uncertainty Visualization Research: A Visual Summary Approach. Cartography and Geographic Information Science, 44(4), 296-309.
Muehrcke, P. C. (1974). Map Reading and Abuse. Journal of Geography, 73(5), 11-23.
NCGIA. (2015). NCGIA Overview. Retrieved from http://www.ncgia.ucsb.edu/about/overview.php
O'Connor, R. E., Yarnal, B., Dow, K., Jocoy, C. L., & Carbone, G. J. (2005). Feeling at Risk Matters: Water Managers and the Decision to Use Forecasts. Risk Analysis, 25(5), 1265-1275.
Orlove, B. S., Broad, K., & Meyer, R. (2010). Assessing the Effectiveness of the Cone of Probability as a Visual Means of Communicating Scientific Forecasts. Paper presented at the American Geophysical Union, San Francisco, California.
Pang, A. T., Wittenbrink, C. M., & Lodha, S. K. (1997). Approaches to Uncertainty Visualization. The Visual Computer, 13(8), 370-390.
Potter, K., Rosen, P., & Johnson, C. R. (2012). From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches. In A. M. Dienstfrey & R. F. Boisvert (Eds.), Uncertainty Quantification in Scientific Computing (pp. 226-249): Springer Berlin Heidelberg.
Rappaport, E. N., Franklin, J. L., Avila, L. A., Baig, S. R., Beven II, J. L., Blake, E. S., . . . Tribble, A. N. (2009). Advances and Challenges at the National Hurricane Center. Weather and Forecasting, 24(2), 395-419. doi:10.1175/2008WAF2222128.1
Retchless, D. (2012). Mapping Climate Change Uncertainty: Effects on Risk Perceptions and Decision Making. Paper presented at the AGU Fall Meeting.
Riveiro, M. (2007). Evaluation of Uncertainty Visualization Techniques for Information Fusion. Paper presented at the 10th International Conference on Information Fusion, Quebec, Canada.
Roth, R. E. (2009). The Impact of User Expertise on Geographic Risk Assessment under Uncertain Conditions. Cartography and Geographic Information Science, 36(1), 29-43.
Ruginski, I. T., Boone, A. P., Padilla, L. M., Liu, L., Heydari, N., Kramer, H. S., . . . Creem-Regehr, S. H. (2016). Non-Expert Interpretations of Hurricane Forecast Uncertainty Visualizations. Spatial Cognition and Computation (Special Issue on Visually-Supported Spatial Reasoning with Uncertainty).
Şalap-Ayça, S., & Jankowski, P. (2016). Integrating Local Multi-Criteria Evaluation with Spatially Explicit Uncertainty-Sensitivity Analysis. Spatial Cognition and Computation (Special Issue on Visually-Supported Spatial Reasoning with Uncertainty).
Sanyal, J., Zhang, S., Bhattacharya, G., Amburn, P., & Moorhead, R. J. (2009). A user study to compare four uncertainty visualization methods for 1D and 2D datasets. IEEE Transactions on Visualization and Computer Graphics, 15(6), 1209-1218.
Sanyal, J., Zhang, S., Dyer, J., Mercer, A., Amburn, P., & Moorhead, R. J. (2010). Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty. IEEE Transactions on Visualization and Computer Graphics, 16(6), 1421-1430.
Schmidt, L. A. (2010). Crowdsourcing for Human Subjects Research. Paper presented at CrowdConf 2010, San Francisco, CA.
Senaratne, H., Gerharz, L., Pebesma, E., & Schwering, A. (2012). Usability of Spatio-Temporal Uncertainty Visualisation Methods. In J. Gensel, D. Josselin, & D. Vandenbroucke (Eds.), Bridging the Geographic Information Sciences (pp. 3-23): Springer Berlin Heidelberg.

Service, N. W. (2013). Definition of the NHC Track Forecast Cone. Retrieved from http://www.nhc.noaa.gov/aboutcone.shtml
Severtson, D. J., & Myers, J. D. (2013). The Influence of Uncertain Map Features on Risk Beliefs and Perceived Ambiguity for Maps of Modeled Cancer Risk from Air Pollution. Risk Analysis, 33(5), 818-837.
Sherman-Morris, K., Antonelli, K. B., & Williams, C. C. (2015). Measuring the Effectiveness of the Graphical Communication of Hurricane Storm Surge Threat. Weather, Climate, and Society, 7(1), 69-82. doi:10.1175/wcas-d-13-00073.1
Shneiderman, B., & Plaisant, C. (1998). Treemaps for space-constrained visualization of hierarchies.
Siegrist, M., & Gutscher, H. (2006). Flooding Risks: A Comparison of Lay People's Perception and Expert's Assessments in Switzerland. Risk Analysis, 26(4), 971-979.
Skeels, M., Bongshin, L., Smith, G., & Robertson, G. G. (2010). Revealing Uncertainty for Information Visualization. Information Visualization, 9(1), 70-81.
Skupin, A., & Agarwal, P. (2008). Introduction: What is a Self-Organizing Map? Self-Organising Maps (pp. 1-20): John Wiley & Sons, Ltd.
Skupin, A., Biberstine, J. R., & Börner, K. (2013). Visualizing the topical structure of the medical sciences: a self-organizing map approach. PLoS ONE, 8(3), e58779.
Slingsby, A., Dykes, J., & Wood, J. (2011). Exploring Uncertainty in Geodemographics with Interactive Graphics. IEEE Transactions on Visualization and Computer Graphics, 17(12), 2545-2554.
Slovic, P. (1999). Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield. Risk Analysis, 19(4), 689-701.
Smith, J., Retchless, D., Kinkeldey, C., & Klippel, A. (2013). Beyond the Surface: Current Issues and Future Directions in Uncertainty Visualization Research. Paper presented at the International Cartographic Conference, Dresden, Germany.
Spiegelhalter, D., Pearson, M., & Short, I. (2011). Visualizing Uncertainty About the Future. Science, 333(6048), 1393-1400.
Stephens, E. M., Edwards, T. L., & Demeritt, D. (2012a). Communicating probabilistic information from climate model ensembles - lessons from numerical weather prediction. Wiley Interdisciplinary Reviews: Climate Change, 3(5), 409-426.
Stephens, E. M., Edwards, T. L., & Demeritt, D. (2012b). Communicating probabilistic information from climate model ensembles - lessons from numerical weather prediction. Wiley Interdisciplinary Reviews: Climate Change, 3(5), 409-426. doi:10.1002/wcc.187
Stoll, M., Krüger, R., Ertl, T., & Bruhn, A. (2013). Racecar Tracking and its Visualization Using Sparse Data. Paper presented at the 1st Workshop on Sports Data Visualization at IEEE VIS, Atlanta, Georgia.
Thomas, J. J., & Cook, K. A. (2006). A visual analytics agenda. IEEE Computer Graphics and Applications, 26(1), 10-13.
Thomson, J. R., Hetzler, E. G., MacEachren, A. M., Gahegan, M. N., & Pavel, M. (2005). A Typology for Visualizing Uncertainty. Paper presented at the Conference on Visualization and Data Analysis, San Jose, California.
Tufte, E. R. (2001). The Visual Display of Quantitative Information. Cheshire, Conn.: Graphics Press.
Tufte, E. R., & Graves-Morris, P. R. (1983). The Visual Display of Quantitative Information (Vol. 2): Graphics Press, Cheshire, CT.

Tversky, A., & Kahneman, D. (1973). Availability: A Heuristic for Judging Frequency and

Probability. Cognitive Psychology, 5(2), 207-232. doi:10.1016/0010-0285(73)90033-9

Tversky, A., & Kahneman, D. (1974). Judgment Under Uncertainty: Heuristics and Biases.

Science, 185(4157), 1124-1131.

Van Oort, P., & Bregt, A. (2005). Do Users Ignore Spatial Data Quality? A Decision‐Theoretic

Perspective. Risk Analysis, 25(6), 1599-1610.

Viard, T., Caumon, G., & Lévy, B. (2011). Adjacent versus coincident representations of

geospatial uncertainty: Which promote better decisions? Computers and Geosciences,

37(4), 511-520.

Vieweg, S., Hughes, A., Starbird, K., & Palin, L. (2010). Microblogging During Two Natural

Hazard Events: What Twitter May Contribute to Situational Awareness. Paper presented

at the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA.

Vullings, L., Blok, C., Wessels, C., & Bulens, J. (2013). Dealing with the uncertainty of having

incomplete sources of geo-information in spatial planning. Applied and

policy, 6(1), 25-45.

Ware, C. (2004). Information Visualization: Perception for Design. Morgan Kaufmann Publishers Inc.

Watkins, E. T. (2000). Improving the Analyst and Decision-Maker's Perspective through Uncertainty Visualization (Master of Science in Computer Science and Software Engineering thesis). Air Force Institute of Technology, Ohio.

Weinstein, N. D. (1989). Effects of Personal Experience on Self-Protective Behavior. Psychological Bulletin, 105, 31-50.

Wittenbrink, C. M., Pang, A. T., & Lodha, S. K. (1996). Glyphs for visualizing uncertainty in vector fields. IEEE Transactions on Visualization and Computer Graphics, 2(3).

Zaleskiewicz, T., Piskorz, Z., & Borkowska, A. (2002). Fear or Money? Decisions on Insuring Oneself Against Flood. Risk Decision and Policy, 7(3), 221-233.

Zhang, J., & Goodchild, M. F. (2002). Uncertainty in geographical information. CRC Press.

Zikmund-Fisher, B. J., Ubel, P. A., Smith, D. M., Derry, H. A., McClure, J. B., Stark, A., . . . Fagerlin, A. (2008). Communicating side effect risks in a tamoxifen prophylaxis decision aid: The debiasing influence of pictographs. Patient Education and Counseling, 73, 209-214.

Zuk, T., & Carpendale, S. (2006). Theoretical analysis of uncertainty visualizations. Paper presented at the SPIE, San Jose, CA.

Vita

Jennifer Smith Mason

Education
Ph.D., Geography, The Pennsylvania State University, University Park, PA, May 2018
M.S., Geographic Information Science, San Diego State University, San Diego, CA, May 2011
B.A., Geography, University of California Los Angeles, Los Angeles, CA, June 2009

Teaching
Assistant Teacher, UCLA Department of Geography and UCLA Extension. Introduction to GIS (Spring 2017, Summer 2017, Fall 2017), Intermediate GIS (Spring 2017, Fall 2017), Advanced GIS (Spring 2017), Cartography (Summer 2017, Winter 2018), World Regions (Summer 2017).

Graduate Teaching Assistant, Department of Geography, The Pennsylvania State University. Mapping our Changing World (Fall 2015, Spring 2016).

Research
Graduate Research Assistant, Department of Geography, The Pennsylvania State University, Summer 2012 – Summer 2014, Fall 2016

Graduate Research Assistant, Department of Geography, San Diego State University, Spring 2010 – Summer 2011

Undergraduate Research Assistant, Jet Propulsion Laboratory/UCLA Institute of the Environment, Spring 2009 – Summer 2009

Undergraduate Research Assistant, Department of Geography, UCLA, Winter 2009

Selected Awards and Distinctions
Bunton-Waller Graduate Fellowship at Pennsylvania State University, Fall 2011 – Spring 2012 and Fall 2014 – Spring 2015

Big Data Social Science IGERT Trainee (Fall 2012 – Summer 2014)

Richard Wright Cartography Award and Scholarship, San Diego State University, April 2011

Graduated Cum Laude from the University of California Los Angeles, June 2009