Copyright by Nadine Suzanne Gibson 2019

The Dissertation Committee for Nadine Suzanne Gibson certifies that this is the approved version of the following dissertation:

Understanding the Mechanics of Democracy: Evaluating and Improving Election Administration in the U.S.

Committee:

Daron R. Shaw, Supervisor

Tse-min Lin

Robert C. Luskin

Brian E. Roberts

Charles H. Stewart III

Understanding the Mechanics of Democracy: Evaluating and Improving Election Administration in the U.S.

by

Nadine Suzanne Gibson

DISSERTATION

Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY

THE UNIVERSITY OF TEXAS AT AUSTIN
May 2019

This dissertation is dedicated to the memory of Professor Craig Leonard Brians. Thank you for teaching me how to cite country music lyrics in scholarly research, why I should convince my students not to vote, and how to properly perform a traffic stop.

Acknowledgments

This dissertation would have been impossible without the mentorship of Prof. Daron Shaw. I couldn't have hoped for a better advisor (and at times therapist). I would like to thank Prof. Roberts for his kind words when I needed them most. Prof. Lin has always been my supporter. I cannot thank him enough for the time he spent helping me pass my methodology exam and the late night he spent editing my Master's Report. I would like to thank Prof. Luskin for giving me many opportunities to taste fantastic wines. I am also very grateful to have the leading election administration scholar Prof. Charles Stewart serve as a member of my committee. Additionally, I would like to thank the MIT Election Data and Science Lab for awarding me a New Initiatives Grant that made this work possible.

In addition to my committee members, I would like to thank Prof. Jessee and Prof. Leal. Prof. Jessee was kind enough to help me prepare for my job talk on short notice. I want to profusely thank Prof. Leal for hours of lively discussion (even if it was about cat memes). I valued every PPI meeting and the opportunity to focus on finishing this dissertation this past semester.

I would not have considered myself capable of being accepted into a Ph.D. program, let alone finishing one, if not for my undergraduate mentors at Virginia Tech, namely Prof. Craig Brians and Prof. Karen Hult. In line with Virginia Tech's motto, Ut Prosim, I hope that my research will serve the broader public.

This dissertation was only possible with the help of Rachel German. She helped me every step of the way. I would also like to thank Yul Min Park, Luke Perez, and Joe Tafoya for their companionship on the fifth floor. I also couldn't have asked for better friends than Barea Sinno, Robert Shaffer, Ken Miller, Ben Rondou, Rodolfo Disi Pavlic, Christina Bambrick, and Kristie Kelly. Thank you to the folks in the Public and International Affairs Department at UNCW for giving me a hard deadline.

Lastly and most importantly, thank you mom, dad, and Darla. Now we can move on with our lives.

Understanding the Mechanics of Democracy: Evaluating and Improving Election Administration in the U.S.

Publication No.

Nadine Suzanne Gibson, Ph.D.
The University of Texas at Austin, 2019

Supervisor: Daron R. Shaw

Since the controversial 2000 presidential election, there has been an increasing demand for information about improving the conduct of American elections. With only a decade and a half of sustained attention by political scientists, our understanding of election administration has grown greatly. Most notably, research has focused on politically salient issues like turnout, residual vote rates, and voter identification. Although these issues are important and timely, persistent but less visible problems plague the system and attract scant scholarly attention.

Despite these major concerns about the American voting system, we know little about the relationship between election administration and participation. This dissertation examines how variation in election administration is related to the costs associated with voting and confidence in our electoral institutions at both the aggregate and individual levels. In this dissertation, I describe a theory about the relationship between turnout in elections and the public value of local election administration. I operationalize public value through measures of investment of resources into local elections. Although imperfect, spending on voting systems is a proxy for the quality of an election system. In support of my theory, an original data set was created using variables related to the costs and quantity of voting equipment. These data on voting equipment costs are the first of their kind in scholarly election administration research. Throughout the three empirical chapters of this dissertation, evidence is presented for how high-quality election administration can have both short-term and long-term effects on voters' propensity to turn out.

Table of Contents

Acknowledgments
Abstract
List of Tables
List of Figures

Chapter 1. Introduction
  1.1 Democracy in Peril
  1.2 Plan of Dissertation

Chapter 2. Theoretical Framework
  2.1 The Calculus of Voting
    2.1.1 The "D" Term
  2.2 Fitting Election Administration into the Calculus
  2.3 Lowering the Costs of Voting
    2.3.1 Individual Elements
    2.3.2 Structural/ Legal Elements
      2.3.2.1 Registration
      2.3.2.2 Mode of Voting and Ballot Accessibility
    2.3.3 Election Administration Elements
      2.3.3.1 Local Election Officials
      2.3.3.2 Poll Workers
      2.3.3.3 Polling Places
      2.3.3.4 Voting Machines
      2.3.3.5 Financial Costs
  2.4 Increasing Confidence in American Elections
    2.4.1 High Quality Elections as a Public Value

Chapter 3. The Influence of Voting System Quality on County-Level Turnout
  3.1 Introduction
  3.2 Election Administration in the 21st Century
  3.3 Hypotheses
  3.4 Turnout in a Hyper-Federalized System
    3.4.1 Spatial Autocorrelation
    3.4.2 Lagrange Multiplier Test for Spatial Dependence
    3.4.3 Spatial Autoregressive Error Regression Model (SAER)
  3.5 Data and Variables
    3.5.1 Independent Variables
  3.6 Results
    3.6.1 Descriptive Data Analysis
    3.6.2 Spatial Regression Analysis
  3.7 Conclusion and Discussion

Chapter 4. How Election Services Vendors Influence the Voting Experience
  4.1 Introduction
    4.1.1 Election Services Vendors
    4.1.2 Services
  4.2 Theoretical Framework
    4.2.1 High Quality Elections as a Public Value
  4.3 Data and Method
    4.3.1 Dependent Variables
      4.3.1.1 Polling Place and Poll Worker Performance
      4.3.1.2 Lines
      4.3.1.3 Confidence Vote Counted as Cast
    4.3.2 2016 Survey of the Performance of American Elections
    4.3.3 County Contracts for the Acquisition of a Voting System
    4.3.4 Model Estimation
  4.4 Results
  4.5 Who Purchases Vendor Services?
  4.6 Conclusion and Discussion

Chapter 5. How Previous Election Experiences Influence Individuals' Decisions to Participate in Future Elections
  5.1 Introduction
  5.2 Measuring Confidence in Elections
    5.2.1 Winner's Effect
    5.2.2 Election Experiences
      5.2.2.1 Voting Machines
      5.2.2.2 The Voting Experience
  5.3 The Endurance of the Election Experience in the Minds of Voters
  5.4 Data and Methods
  5.5 Results
    5.5.1 Pre-Election Confidence and Recollection of the 2016 Voting Experience on 2018 Turnout
    5.5.2 Pre-Election Confidence and Recollection of the 2018 Voting Experience on Post-Election Confidence in Vote Counted As Cast
  5.6 Conclusion and Discussion

Chapter 6. Conclusion
  6.1 Spending on Elections
  6.2 Improving Measurement in Election Administration

Appendix

Bibliography

List of Tables

1.1 Reasons for Not Voting Cited by Non-Voters in the 2016 General Election
1.2 Characteristics of Non-Voters Who Did Not Vote for Election Administration Reasons and Eligible Voters, 2016 General Election
1.3 Perception of Common Illegal Voting Activities

3.1 Areas of Election Administration That Are in Significant Need of Improvement or an Upgrade, 2013
3.2 Moran's I Statistics
3.3 Lagrange Multiplier Test for Spatial Dependence
3.4 State Averages of Voting Equipment Variables
3.5 Spatially Autocorrelated Error Regression Model for County-Level Turnout in 2016

4.1 Average Spending on Vendor Licenses and Services by County in Texas, 2016
4.2 One-Way Analysis of Variance Results for Percent Vendor Licenses and Services in Accordance with Urban-Rural Classification
4.3 Frequency Distributions of Dependent Variables: Polling Place Performance, Poll Worker Performance, Line Wait Time, and Confidence in Vote Counted as Cast
4.4 Proportional Odds and Partial Proportional Odds Regression Models of Wait Time in Line
4.5 Proportional Odds Regression Model of Polling Place and Poll-Worker Performance
4.6 Proportional Odds Regression Model of Confidence in Vote Counted as Cast
4.7 Binary Logistic Regression Models of Vendor Service Purchase by Counties

5.1 Confidence in Vote Counted as Cast Survey Items
5.2 Binary Logistic Regression Model of Turnout, 2018
5.3 Proportional Odds Regression Model of Post-Election Confidence in Vote Counted As Cast, 2018
5.4 Causal Mediation Analysis with Nonparametric Bootstrap Confidence Intervals

A1 Comparison of Dependent Variable Distributions between the Subset in This Analysis and the Original Data Set
A2 Comparison of Confidence in County Vote Counted as Cast between In-Person and Non-Voters
A3 Results of a Brant Test for the Proportional Odds Assumption
A4 Cumulative Logistic Regression Models of Post-Election Confidence in Vote Counted as Cast with , 2018
A5 Partial Proportional Odds Regression Model of Post-Election Confidence in Vote Counted As Cast, 2018

List of Figures

1.1 Presidential Election Turnout as a Proportion of Voting Eligible Population, 1789-2016
1.2 Aggregate Freedom House Scores of Developed Democracies, 2006-2018

2.1 George Caleb Bingham's County Election
2.2 Theoretical Contribution
2.3 Election Administration in the Calculus of Voting
2.4 Postulated Effect of Spending on Turnout
2.5 Country-Level Ranking of Perceptions of Confidence in Electoral Authorities

3.1 Boundary Map of Austin, TX
3.2 Average Cost of Voting Systems Per Registered Voter in 2016

5.1 Hypothesized Ongoing Relationship Between Confidence in Elections, Voting Experiences, and Individual-Level Turnout
5.2 Diagram of Factor Loading of Election Experience Variables
5.3 Predicted Probability of Voting in 2018 for Voters and Non-Voters in 2016 by Pre-Election Confidence

6.1 Theoretical Contribution
6.2 Stacked Barplot of Polling Place Performance and Poll-Worker Performance, 2016-2018

A1 Election Equipment Vendor Market Share, 2016
A2 Barplots of 2018 University of Texas Module CCES Variables Used in Analysis, Polling Place and Poll-Worker Performance
A3 Barplots of 2018 University of Texas Module CCES Variables Used in Analysis, Voting and Mode of Voting
A4 Barplots of 2018 University of Texas Module CCES Variables Used in Analysis, Voting Equipment and Problems
A5 Post-Election Confidence in Vote Counted As Cast by Pre-Election Confidence of the Partial Proportional Odds Regression Model Specification

Chapter 1

Introduction

Many statesmen and scholars have opined that there is nothing more central to American democracy than civic participation. In his Gettysburg Address, President Lincoln characterized the American government as a "government of the people, by the people, for the people." Public participation in elections is "an instrumental activity through which citizens attempt to influence the government to act in ways the citizens prefer" (Verba and Nie 1972, p.102). In the same vein, turnout is generally considered the "grand indicator" for judging the health of a democracy (Burden 2014). According to this metric, civic participation in the United States in the twenty-first century is far from its high-water mark. Figure 1.1 presents a time series of turnout in presidential elections since the Founding. In the nineteenth century, turnout was upwards of 83%. Mass participation in the lower strata of society was enhanced in nineteenth-century America due to the prevalence of clientelism and tribalist politics, in which political support is exchanged for material items or given freely due to social ties (Piven and Cloward 1988). This period of high turnout in the mid-nineteenth century is described by some historians as a "golden era" of American democracy (Piven and Cloward 1988, p.28).1

It appears that the 1896 election was the last American election in which voters were highly motivated to go to the polls. The subsequent and enduring decrease in turnout following this election can be attributed to a major election administration reform: the secret ballot. The adoption of the secret ballot by states in the late 1880s through the 1890s prevented parties from monitoring voters (Heckelman 1995). Voters were no longer provided the material benefits (i.e., bribes) associated with casting a ballot for a party. As a result, voters were increasingly described as "apathetic" (McGerr 1986).

Additionally, during this period of exceptionally low turnout, voting was particularly burdensome for minorities and working-class whites (Piven and Cloward 1988). Election administration was used strategically by states, particularly the former Confederate states, to disenfranchise groups. Procedural mechanisms and suffrage requirements such as poll taxes and grandfather clauses prevented minorities, immigrants, and the poor from casting a ballot.2

About a century later, Congress passed the National Voter Registration Act (NVRA) in 1993, also known as the "Motor Voter" law, with the intent to increase turnout by lowering the barriers to voting associated with registration. In their analysis of the effects of the NVRA, Brown and Wedeking (2006) find that more individuals were able to overcome obstacles in registering. As can be observed in Figure 1.1, however, turnout did not dramatically increase following the full implementation of the NVRA in 1996. Brown and Wedeking (2006) argue that by decreasing barriers to registration, more individuals who are unlikely to vote are included in the pool of registered voters. In terms of calculating turnout, these unlikely voters offset any major gains of the NVRA. Ultimately, Brown and Wedeking (2006) conclude that the relationship between turnout and registration is more complicated than conventional wisdom suggests.

1 Interestingly, during this time of high participation, most Americans were ineligible to cast a ballot. Although Black males were enfranchised through the Fifteenth Amendment, they were de facto prohibited from voting through restrictive registration policies and intimidation. Black turnout did not systematically improve until the passage of the Voting Rights Act of 1965. Throughout the nineteenth century, women were periodically given the right to vote in some states. Prior to the Civil War, both New Jersey and Kentucky had limited voting rights for women. Prior to the twentieth century, Wyoming, Utah, Colorado, and Idaho gave voting rights to women (Schons 2012).
2 For a more detailed discussion of procedural and legal barriers to voting in the first half of the twentieth century, please see Hannah et al. (1959).

Figure 1.1: Presidential Election Turnout as a Proportion of Voting Eligible Population, 1789-2016

This highlights the lack of understanding of how election procedures influence participation in the United States. Among non-voters surveyed in the 2016 Cooperative Congressional Election Study (CCES), about 12.1% cited a reason directly related to election administration as the primary reason why they did not ultimately cast a ballot (Table 1.1). One might note that the types of citizens for whom election administration appears to have an impact on their propensity to turn out are on the margins of the electorate. Table 1.2 presents the proportion of relevant subgroups composing non-voters who did not vote because of election administration reasons in the 2016 general election. Those who did not vote for election administration reasons were on average less white, more likely to earn less than $30,000, less likely to have a high school education, and less likely to know their partisan identification.

Table 1.1: Reasons for Not Voting Cited by Non-Voters in the 2016 General Election

Primary Reason Given for Not Voting                                 %
Did not like the candidates                                      33.2%
I'm not interested                                               15.3%
Sick or disabled                                                  7.7%
Too busy                                                          6.6%
I did not feel that I knew enough about the choices               5.5%
Out of town                                                       4.8%
Transportation                                                    3.0%
I did not have the correct form of identification a b             2.5%
The line at the polls was too long a b                            2.2%
I requested but did not receive an absentee ballot a              2.1%
I am not registered a                                             1.9%
Distrust or disgust                                               1.7%
I forgot                                                          1.7%
I was not allowed to vote at the polls, even though I tried a b   1.7%
I did not know where to vote a                                    1.5%
Other                                                             1.0%
Religion                                                          0.9%
Bad weather                                                       0.7%
Absentee ballot issue a                                           0.3%
Recent relocation                                                 0.3%
Don't know/NA                                                     5.5%
Reasons related to election administration                       12.1%
Reasons related to the election experience                        6.4%
N = 5,706
a Designates an election administration related issue
b Designates an election experience related issue
Note: Respondents were weighted by the post-election survey weight variable ("commonweight_vv_post").
Source: Cooperative Congressional Election Study, 2016

Table 1.2: Characteristics of Non-Voters Who Did Not Vote for Election Administration Reasons and Eligible Voters, 2016 General Election

Characteristics                 Non-Voters (EA Reasons)    Eligible Voters
Race
  White                                  58.2%                  73.2%
  Black                                  20.5%                  12.5%
  Hispanic                               11.5%                   7.1%
Income
  Less than $10,000                      16.8%                   4.7%
  $10,000 - $19,999                      12.4%                   7.6%
  $20,000 - $29,999                      10.3%                  10.7%
Highest Level of Education
  No HS                                  17.6%                   8.3%
  HS Graduate                            38.0%                  32.1%
Partisan Identification
  Democrat                               35.4%                  37.0%
  Republican                             23.8%                  27.8%
  Independent                            26.2%                  29.7%
  Other                                   0.6%                   1.0%
  Not Sure                               13.9%                   4.5%
Note: Respondents were weighted by the post-election survey weight variable ("commonweight_vv_post").
Source: Cooperative Congressional Election Study, 2016

There exists a large literature on party mobilization which suggests that campaign activities are effective in mobilizing large numbers of voters to turn out. My line of inquiry does not refute these theories or empirical findings. Instead, I am interested in developing a behavioral model for those on the margins. What is problematic for American democracy is the fact that there is a group of eligible voters who are de facto disenfranchised. Regardless of the number of individuals for whom election administration is an important determinant in choosing to participate, there are important questions about participatory behavior that have been left unexplored by social scientists. Specifically, this dissertation examines the effects of financial investment in electoral administration on turnout, particularly turnout among those who are already registered to vote or have the opportunity to vote on Election Day without prior registration (i.e., same-day registration).

1.1 Democracy in Peril

Underlying our theories of democracy is the assumption that if a voter decides that it is rational for them to cast a ballot, their vote will be counted as cast and will have influence proportional to one over the total number of legal voters.3 Unfortunately, this is a strong assumption to make in the twenty-first century. As the results presented in Table 1.3 demonstrate, Americans think the U.S. system is still very far from ideal. A majority of Americans believe that election fraud occurs at a rate more frequent than "almost never" in the United States.

3 One caveat to this claim is the use of the Electoral College (Article 2, Section 1, Clauses 2 and 3 of the U.S. Constitution) rather than the popular vote in choosing U.S. presidents.

Table 1.3: Perception of Common Illegal Voting Activities

Activity                        2012      2014      2016
Voting more than once           60.09%    72.21%    53.11%
Tampering of ballots            61.52%    71.49%    53.10%
Pretend to be someone else      63.25%    73.08%    55.72%
Non-U.S. citizen voting         66.00%    77.17%    59.41%
Absentee ballot fraud           72.48%    80.67%    66.70%
Officials change vote count     62.17%    72.71%    53.32%
Note: Percent of respondents who did not answer "It almost never occurs".
Source: Survey of the Performance of American Elections (Stewart 2013a, 2015, 2017a)

The notion that election fraud and unethical voting practices plague U.S. elections is not a recent phenomenon. In fact, ameliorating the reputation of elections has been a concern of political scientists and public administration scholars throughout the twentieth century. Joseph P. Harris, the leading public administration scholar during the 1930s (Key 1958), acknowledges the impact of election fraud on public confidence on the very first page of his book Election Administration in the United States (1934). Harris (1934, p.1) states,

"It is hardly necessary to point out that the presence of election frauds and sharp practices will undermine public morale and interest in civic affairs more quickly than any other condition" (Harris 1934, p.1).

To Harris, the U.S. system in the early twentieth century was far from ideal, which he described as follows:

"The ideal election administration is one which uniformly and regularly produces honest and accurate results. There should never be the slightest question about the integrity of the count or doubt cast upon the honesty of the elections" (Harris 1934, p.1).

V.O. Key devotes an entire chapter in his seminal book Politics, Parties, and Pressure Groups (1958) to election administration. Key (1958, p.685) asserts,

“The American election system has gained an unenviable reputa- tion for fraud, although election fraud occurs far less frequently than is commonly supposed” (Key 1958, p.685).

Unfortunately, this reputation is not unfounded. The Emmy-nominated HBO documentary Hacking Democracy publicized the issue of voting machine malfunctions. In the book which inspired the documentary, Harris (2004) documents 100 cases of vote count irregularities in recent elections. When compounded with the vote count crisis in 2000, the public has become more suspicious of how votes are counted in the United States. These concerns have endured since 2000. As then-candidate Donald Trump expressed at a campaign rally in 2016,

"Of course I would accept a clear election result, but I would also reserve my right to contest or file a legal challenge in the case of a questionable result" (Rappeport and Burns 2016).

In response, President Obama cautioned Trump to refrain from further damaging the reputation of American elections,

"When you try to sow the seeds of doubt in people's minds about our elections, that undermines our democracy... You're doing the work of our adversaries for them" (Rappeport and Burns 2016).

In the comparative politics context, Freedom House (2019) has noted a significant decline in American democracy since 2011 as a result of "Russian interference in US elections, domestic attempts to manipulate the , executive and legislative dysfunction". In the past decade, the Freedom in the World aggregate score of the United States has dropped a total of eight points. Freedom House attributes the United States' score of 86/100 in 2019 to a decline in the "legitimacy of elections".4

To give context to the declining score of the United States, Freedom House (2019) ranks the United States in last place among the eight populous democracies considered "Free" in every Freedom in the World edition.5 This has not always been the case. As shown in Figure 1.2, the trend in the aggregate Freedom House score of the United States deviates negatively from that of its "peers".

4 President Trump's rhetoric attacking Democratic voters, unsubstantiated accusations of voter fraud, and denial of Russian meddling were cited as the major contributors to the score in 2019.
5 The other countries in this category (in descending order of Freedom in the World aggregate score) are Canada, Australia, Japan, Germany, the United Kingdom, France, and Italy.

Figure 1.2: Aggregate Freedom House Scores of Developed Democracies, 2006-2018

The deterioration in American voting systems has profound consequences for the health of American democracy. The most egregious is a decline in political equality. The United States was founded on the notion of political equality.6 Dahl (1998, p.38) outlines five necessary criteria of a democracy in which all members are considered politically equal:

1 Equal and effective modes for communicating views

2 Equal and effective opportunities to vote, and all votes must be counted as equal

3 Equal and effective opportunities to learn about policy alternatives

4 Control of the agenda

5 All, or most, adults should have the full rights of citizens

Dahl's second criterion, "equal and effective opportunities to vote, and all votes must be counted as equal," directly refers to the quality of electoral institutions. In any democracy, without healthy electoral institutions there cannot be political equality.

1.2 Plan of Dissertation

Despite these major concerns about the American voting system, we know little about the relationship between election administration and participation. This dissertation examines how variation in election administration is related to the costs associated with voting and confidence in our electoral institutions at both the aggregate and individual levels. In the next chapter, I describe a theory about the relationship between turnout in elections and the public value of local election administration. I operationalize public value through measures of investment of resources into local elections. Although imperfect, spending on voting systems is a proxy for the quality of an election system. In support of my theory, an original data set was created using variables related to the costs and quantity of voting equipment. These data on voting equipment costs are the first of their kind in scholarly election administration research. Using these data, three empirical chapters speak to the theory articulated in Chapter 2 using distinct methodological approaches.

6 In the Declaration of Independence the authors declared as follows: "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain inalienable Rights, that among these are Life, Liberty, and the pursuit of happiness."

Chapter 3 examines the impact of differences in resource allocation on turnout. I use the term "resource allocation" loosely to describe financial resources spent and labor mobilized to maintain or improve a jurisdiction's voting equipment. Now that federal funding provided by the Help America Vote Act (2002) in the form of grants-in-aid is no longer provided to localities at the intended capacity, there is substantial variability in the amount of resources available to localities to spend on voting equipment. Using spatially weighted regression analysis of county-level turnout on election administration variables, I find empirical evidence in support of turnout improvements through specific types of investments such as newer equipment and vendor services in the form of voter education programs.
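Chapter 3's spatial regression strategy begins by testing for spatial dependence in county-level turnout (see the Moran's I statistics listed in that chapter's table of contents). As a rough, self-contained illustration of that diagnostic, and not the dissertation's actual code or data, the sketch below computes Moran's I for a handful of hypothetical counties using a row-standardized contiguity matrix.

```python
# Rough sketch of the Moran's I statistic used to diagnose spatial autocorrelation
# in county-level turnout. The turnout values and contiguity structure are toy data.
import numpy as np

turnout = np.array([0.58, 0.61, 0.47, 0.52, 0.66])  # hypothetical county turnout rates

# Hypothetical binary contiguity matrix (1 = counties share a border), row-standardized.
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
W = W / W.sum(axis=1, keepdims=True)

z = turnout - turnout.mean()          # deviations from the mean
n, s0 = len(turnout), W.sum()         # number of units and sum of weights
morans_i = (n / s0) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {morans_i:.3f}")  # values above -1/(n-1) suggest positive clustering
```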

Chapter 4 investigates the impact of four broad categories of vendor service packages (Election Day support, project management, training, and voter outreach programs) on individual-level election performance indicators in 14 states. Election equipment in the United States is exclusively purchased from private-sector vendors. When a jurisdiction purchases voting equipment, it is actually purchasing the hardware and software along with a variety of services for the initial implementation and long-term maintenance and support of the system. The results of logistic regression analyses of cross-sectional survey data from in-person voters in 2016 suggest that county purchases of election support and training courses from election vendors have a negligible effect on improving the odds of a better voter experience.
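Because the election performance indicators in Chapter 4 are ordered survey ratings, the models are proportional-odds (ordered logit) regressions. The sketch below shows, under assumed variable names and toy data that merely stand in for the survey measures, how a model of this family can be fit; it is an illustration only, not the chapter's actual specification.

```python
# Minimal sketch of a proportional-odds (ordered logit) model of the kind used in Chapter 4.
# The data frame, variable names, and values are hypothetical stand-ins for the survey data.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

survey = pd.DataFrame({
    "wait_time": pd.Categorical(
        ["none", "short", "long", "short", "none", "long", "short", "none", "long", "short"],
        categories=["none", "short", "long"], ordered=True),
    "vendor_training": [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],  # county bought vendor training (hypothetical)
    "urban": [1, 1, 0, 0, 1, 0, 0, 1, 1, 0],
})

# Ordered logit of wait time on the two hypothetical county-level predictors.
model = OrderedModel(survey["wait_time"],
                     survey[["vendor_training", "urban"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```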

Chapter 5 uses individual-level survey data from the 2018 Cooperative Congressional Election Study to explore the relationship between prior election experiences and individuals' decisions to turn out. Central to this chapter is the question of whether increasing the short-term costs of voting has long-term consequences for democratic participation. This study uses a within-subject panel design to identify individuals' predispositions prior to the election and their choice to participate, as well as post-election evaluations of the election experience. I find statistical evidence suggesting that positive election experiences, by way of an individual's confidence in elections, are associated with increases in an individual's propensity to turn out.

Chapter 2

Theoretical Framework

Despite the inherent racism and disenfranchisement of the majority of Americans through de jure and de facto means in the nineteenth century, there is no dispute that voting was an enjoyable experience for those who could participate. George Caleb Bingham's painting titled "County Election" depicts the 1846 election in Saline County, Missouri (Figure 2.1). Of Bingham's portrayal of the voting experience in nineteenth-century America, Demos (1965) remarks:

"He evokes in a most powerful way the festive character of political activity in the Missouri region. Elections were obviously the occasion for much excitement and merriment; they provided a welcome break in the ordinary routine of life. While the voting is actually in progress, we observe much earnest discussion" (Demos 1965, p.220).

This is in stark contrast to the image of voting in the twenty-first century. Elections have transformed from a "welcome break" to a fairly arduous chore. I posit that if the voting experience were viewed more positively, then more individuals would want to participate. Although there have been clear and compelling calls to reform election administration in the United States, relatively little is known about the best mechanisms for doing so. This dissertation presents a broad theory of how to improve turnout in the United States by improving local-level election administration.

Figure 2.1: George Caleb Bingham's County Election

Figure 2.2: Theoretical Contribution

My theory builds upon the calculus of voting framework first formulated by Downs (1957). In contemporary politics, the calculus of voting has become the standard framework for understanding how election procedures influence participation in elections (Nagler 1991; Brady and McNulty 2011).

2.1 The Calculus of Voting

In the United States, voting-eligible citizens may choose to participate in the electoral process or to abstain. The decision to vote is classically framed as a decision between satisfying an individual's self-interest and the social good (Acevedo and Krueger 2004). In virtually every instance of voting in the United States, no one individual is essential to the achievement of a particular election outcome (Shepsle 2010). In effect, the choice to participate in democracy is a collective action dilemma. Collective action dilemmas are situations "in which the members of a group would benefit by working together to produce some outcome, but each individual is better off refusing to cooperate and reaping benefits from those who do the work" (Bianco and Canon 2015, p.7). Given this, the dominant strategy for any single individual is to abstain from voting.

In the case of electoral participation, those who abstain from voting are free riders because they do not incur the costs of voting (i.e., time and energy), yet they can still receive the same benefits of the election outcome as those who did cast a ballot (Uhlaner 1993). The issue of free riders in the electoral process is commonly referred to as the "paradox of voting." Numerous scholars have attempted to understand the rationality in choosing to vote as a cost-benefit analysis in light of this paradox.1

Downs (1957, p.7) describes the rational political man as a man who "approaches every situation with one eye on the gains to be had, the other eye on costs, a delicate ability to balance them, and a strong desire to follow wherever rationality leads him." Under the Downsian framework, voting in most instances is irrational (Tullock 1967; Riker and Ordeshook 1968).2

1 Alternative strategies for this decision exist (Downs 1957; Tullock 1967; Riker and Ordeshook 1968; Ferejohn and Fiorina 1974; Sigelman and Berry 1982; Blais and Young 1999). Ferejohn and Fiorina (1974) frame this decision in terms of minimizing regret rather than maximizing utility.
2 Tullock (1967) notes that the frequently used claim that no one would vote under the Downsian formulation is incorrect. Following the logic of the formula, voting is most strategic for those most interested in politics (especially when the costs are high) as well as those who are eligible to vote in close elections and elections with small populations.

According to Downs (1957), voting is only considered rational for an individual if

P ∗ B > C     (2.1)

where B symbolizes the benefits of the victory of the individual's preferred candidate relative to their opponent, P is the perceived probability that the individual's vote will be decisive, and C symbolizes the costs associated with casting a ballot.

Sigelman and Berry (1982) find that the cost of voting is the most decisive term in the calculus of voting model, even more so than the benefits component. The data used to empirically test this model are survey data from the Comparative State Elections Project following the 1968 presidential election. They measure the costs of voting by a dichotomous response to "For you personally, getting to the polls and waiting to vote usually takes a lot of time and effort". Those who agreed that voting requires a lot of time and effort were much less likely to vote than those who disagreed.

The probability of a single participatory act of voting being consequential to the election outcome is very small. The only way any single vote will be consequential is when it breaks or creates a tie (Fowler and Kam 2007). In most races, however, the margin of victory is much greater than one vote (or even a handful of votes), which makes the actual expected utility of voting quite low relative to the costs. In virtually all instances, it is reasonable to argue that the likelihood of a citizen's vote determining the outcome of an election "is less than the chance he will be killed on the way to the poll" (Skinner 1976; Goodin and Roberts 1975; Dowding 2005). Under this logic, it is rational for most citizens to abstain from voting because the costs of voting will, for most citizens and for all practical purposes, always outweigh the benefits (P ∗ B < C).3
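To see how lopsided this comparison is in practice, the short sketch below plugs illustrative numbers into Equation 2.1. The probability, benefit, and cost values are hypothetical and chosen only to convey orders of magnitude.

```python
# Illustrative sketch of the Downsian calculus (Equation 2.1): vote only if P * B > C.
# All numbers below are hypothetical, chosen to show relative magnitudes.

def rational_to_vote(p_decisive: float, benefit: float, cost: float) -> bool:
    """Return True if the expected instrumental benefit exceeds the cost of voting."""
    return p_decisive * benefit > cost

# Hypothetical values: a voter who values their candidate's victory at $10,000,
# faces $20 in time and travel costs, and has a one-in-ten-million chance of being decisive.
p = 1e-7          # P: probability the vote is decisive
b = 10_000.0      # B: benefit of the preferred candidate winning
c = 20.0          # C: cost of casting a ballot

expected_benefit = p * b  # 0.001 -- far below the $20 cost
print(f"P * B = {expected_benefit:.4f}, C = {c}, vote rational? {rational_to_vote(p, b, c)}")
```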

2.1.1 The “D” Term

Even Downs (1957, p.267) admits that "democracy collapses" when no one votes. With that end in mind, Downs notes that it is indeed possible to rationally vote if individuals value being a good democrat. Riker and Ordeshook (1968) describe "civic duty", which they label "D", as being partly derived from the satisfaction of going to the polls. This term is an attempt to quantify the psychological benefits associated with participating in a democracy. With the addition of a term for civic duty, D, voting is considered rational if

P ∗ B + D > C     (2.2)

where P is the perceived probability that the individual's vote will be decisive, B symbolizes the benefits of the victory of the individual's preferred candidate, and C symbolizes the costs associated with casting a ballot. In order for the choice to vote to become rational, the magnitude of D must be at least as large as the costs, C.

3 Sanders (1980) finds evidence in opposition to the notion that the P ∗ B term is a negligible predictor of turnout. Variables measuring B were stronger predictors of turnout than those measuring civic duty (D) and the costs of voting (C).
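Rearranging Equation 2.2 makes the condition on the duty term explicit: because P ∗ B is negligibly small in large electorates, D must effectively cover the entire cost of voting. A minimal rendering of that rearrangement:

```latex
% Rearranging the calculus of voting with the duty term (Equation 2.2):
%   P * B + D > C   =>   D > C - P * B
% Since P * B is approximately zero in large electorates, the condition reduces to D > C.
\begin{align}
  P \cdot B + D &> C \\
  D &> C - P \cdot B \approx C \quad \text{(because } P \cdot B \approx 0\text{)}
\end{align}
```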

Although this is still an abstraction, the formulation of the calculus of voting in Equation 2.2 is a much more convincing indicator of the individual-level decision to vote. When the calculus is purely a function of the instrumental aspects of voting, it is not predictive of why the majority of voters show up to the polls. Including a term to capture the expressive aspects of voting is particularly useful for understanding individual decisions to participate in elections with large populations and elections that are not particularly competitive.

The term associated with "civic duty" is not strictly defined in the literature. Riker and Ordeshook (1968, p.28) note that their list of "positive satisfactions" associated with "citizen duty" is not all-inclusive. As a result, political behaviorists have taken liberties in their conceptualizations of D. For example, Grofman (1993, p.95) articulates D as "psychic satisfaction for expressing solidarity with a candidate or position." This interpretation of D emphasizes the expressive function of voting. This was meant to capture the satisfaction that citizens derive from the very act of voting, the so-called "simple joy of voting" (Sanders 1980) or the "expressive" rather than "instrumental" aspect of voting (e.g., Crampton and Farrant 2004).

At the conclusion of a 2012 presidential debate, Bob Schieffer famously said, "Go vote now it will make you feel big and strong" (Schieffer 2012). These sentiments illustrate how the act of voting itself taps into feelings of personal political efficacy. Election Day, compared to the average Tuesday, is an extraordinary event for most Americans. For many citizens, this is the only opportunity they have to influence government. Jurisdictions would not spring for "I Voted" stickers unless they were considered to be mementos for commemorating and (most importantly) expressing the choice to participate in a special experience.

Brennan and Lomasky (1993) use the example of cheering for sports teams as a common form of expressive behavior.4 People incur substantial costs to attend sporting events as an expression of their team support. Yet, this act is, for the most part, inconsequential in manipulating the actual outcome of any given match. The value for fans is then “purely expressive”.

The expressive nature of voting is well-known. In his essay Civil Disobedience, Henry David Thoreau describes his own motivations for voting:

“I cast my vote, perchance, as I think right: but I am not vitally concerned that the right should prevail ... Even voting for the right is doing nothing for it. It is only expressing to men feebly your desire that it should prevail” (Thoreau 1903, p.15).

As it did for Thoreau, for many Americans the act of expressing oneself at the ballot box has some intrinsic value beyond influencing the outcome of an election. For example, the expressive nature of voting can explain why voters in western states continued to vote in presidential elections after the nationwide outcome was already known (Brennan and Lomasky 1993).5 Due to the difference in time zones, voters arriving at the polls in the West would have already known that their vote had no possibility of being decisive. Given the calculus of voting as defined by Downs (1957), there would be no benefit outweighing the costs incurred by going to the polls. If the result was already known, the instrumental benefit of voting could no longer be the driving force motivating people to cast a ballot.

4 Hamlin and Jennings (2011) note that the scenario of cheering for a sports team is not appropriate for illustrating motivations of expressive voting behavior given that 1) fans rarely cheer against their team; 2) cheering is done in public, while voting is done in private.

2.2 Fitting Election Administration into the Calculus

Despite sustained research on the calculus of voting for roughly the past sixty years, previous research has largely overlooked the role of election administration in shaping an individual's decision to participate in the voting process. As a result, there is ambiguity in the placement of election administration within the calculus of voting framework. This dissertation posits that election administration has a two-fold impact on the decision to participate in elections according to the calculus of voting framework.

1 Higher quality elections can decrease the costs associated with voting (“C”).

2 High-quality elections translate into more positive attitudes towards our elections system (“D”).

5As was the case in the presidential elections in 1980 and 1984.

Figure 2.3: Election Administration in the Calculus of Voting

First, the manner in which elections are conducted at the local level can have substantial impacts on the costs of voting for certain individuals. The most obvious and effective way of increasing turnout among those who are registered is lowering the costs associated with casting a ballot. Second, the quality of elections has an impact on individuals' feelings of confidence in the American electoral system and their willingness to exercise their civic duty.

One method of improving the quality of any consumer good is to increase the quality of its component parts. Similarly, I posit that by spending more resources (i.e., financial, human, etc.) on election administration, more people will be interested in participating in democracy.

Figure 2.4: Postulated Effect of Spending on Turnout

Bohan and Horney (1991) define the cost of quality as the total of all resources spent by any organization to assure that quality standards are met on a consistent basis. In general, the cost of quality can be decomposed into four areas: prevention, inspection and appraisal, internal failures, and external failures.6 The most effective strategy for minimizing the cost of quality is to invest heavily in prevention. Farooq et al. (2017, p.157) describe prevention costs as "all costs incurred to prevent non-conformance, such as those due to scheduled equipment maintenance, tool replacement, investments in worker training, and quality improvement programs".
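As a toy illustration of this decomposition, and not a measure used in this dissertation, the sketch below totals hypothetical spending across the four cost-of-quality categories and reports the prevention share; the category labels follow the text above, while the dollar figures are invented.

```python
# Toy illustration of the cost-of-quality decomposition described above.
# Category labels follow Bohan and Horney (1991); the dollar amounts are hypothetical.

cost_of_quality = {
    "prevention": 120_000,              # e.g., equipment maintenance, poll worker training
    "inspection_and_appraisal": 40_000, # e.g., logic-and-accuracy testing, audits
    "internal_failures": 15_000,        # e.g., machines repaired before Election Day
    "external_failures": 60_000,        # e.g., Election Day breakdowns, recounts
}

total = sum(cost_of_quality.values())
prevention_share = cost_of_quality["prevention"] / total
print(f"Total cost of quality: ${total:,}")
print(f"Prevention share: {prevention_share:.1%}")
```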

Unfortunately, the examination of the quality of elections in the United States is not a straightforward task. In the United States, election administration is largely conducted at the local level. Article 1, Section 4, Clause 1 of the U.S. Constitution, sometimes referred to as the Elections Clause, explicitly grants states the responsibility of running elections.7 This responsibility, however, has devolved to the local level. Given that elections are conducted by the states, and more precisely the localities, there is not widespread uniformity in electoral procedure.

6 The cost of quality for products, specifically, includes costs associated with continuous improvement processes; costs of system, production, and service failures; and non-value-added activities and wastage (Farooq et al. 2017).
7 Article 1, Section 4, Clause 1 states "The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing [sic] Senators".

In terms of who is responsible for election administration, the United States utilizes a hyperfederalized system due to the "blend of national, state, and local responsibility" (Ewald 2009, p.3). Ewald (2009, p.97) argues, "uniformity is actually not a central value of American elections." This is consistent with the Madisonian conception of American popular sovereignty in that power is distributed throughout the system. The notion of a decentralized election system was not a controversial topic during ratification. Hyperfederalism is thought to "prevent purposeful, self-interested manipulation" and "prevent accidental systematic corruption" (Ewald 2009, p.115-116).

Theoretically, this lack of uniformity should lead to policy improvement because jurisdictions can act as laboratories of democracy. In reality, there is little sharing of empirical data and scientific research between jurisdictions. This makes sense when we consider that localities (often with few resources) are responsible for the vast majority of election conduct. Consequently, localities are left footing the bill for major election expenditures. Local election officials, those primarily in charge of elections, are faced with many institutional and budgetary restrictions with little concrete evidence for what will actually work best.

2.3 Lowering the Costs of Voting

I conceptualize the costs of voting as a combination of factors within three broad categories:

1. Individual

2. Structural/ Legal

3. Election Administration

The first two categories are well-defined in the turnout literature. The third category of election administration, however, has not been well articulated in the costs of voting framework.

2.3.1 Individual Elements

Voting is inherently costly (Downs 1957). Citizens who wish to vote must make time to do so. Every minute spent voting is a minute that is not spent on other activities (e.g., earning a wage). Voting is generally less costly for those who have more free time (e.g., retirees). Verba, Schlozman, and Brady (1995) find that free time does not appear to discriminate across socioeconomic stratification variables. Rather, free time is a function of life circumstances (i.e., job, working spouse, children). For example, free time is scarce for individuals with young children regardless of race or income.

Moreover, time and energy must be spent on becoming politically informed (Tullock 1967). Prior to voting, voters must independently seek out information about the candidates and referenda they will be confronted with on the ballot. Depending on an individual's prior level of civic skill (Verba, Schlozman, and Brady 1995) and political sophistication (Luskin 1990), the time required to gather sufficient information may be substantial.

Even though the amount of time spent on voting and acquiring the relevant information will vary from citizen to citizen, there is always a cost involved. The costs associated with voting are not limited to the time spent on the actual act of casting a ballot nor the time spent acquiring information. Costs can be incurred from a multitude of factors, such as the money spent on gas to get to the polling place, the postage needed for mailing an absentee ballot, or the resources needed to obtain an appropriate form of identification. Unfortunately, the costs of voting tend to disproportionately affect low-income voters and first-time voters. Given that most states require all voters to be registered prior to arriving at the polling place, the costs associated with voting can include time and resources spent well in advance of Election Day.

Socioeconomic status (i.e., education, income, occupation) may help to mitigate the costs associated with voting. There is no doubt in the literature that socioeconomic status is positively correlated with the propensity to participate electorally (Verba and Nie 1972). It is well-documented that those in the lower strata of society are less likely to vote.

28 2.3.2 Structural/ Legal Elements

Despite the participatory nature of American democracy, the notion of universal suffrage is not embraced by the American public. Moreover, many lawmakers view voting as a privilege rather than a right (Wang 2012). As a result, policies of excluding groups from voting (most notably convicted felons) have become normalized in American politics.

In the American conception of democracy, the citizen is seen as "the good and kindly father" who is responsibly informed of politics and intelligent enough to navigate his way to successfully casting a ballot (Rolfe 2012). It is under this guise of preventing those who are not "competent" or "informed" enough from voting that legal mechanisms making voting more costly (informational costs or opportunity costs) are not universally abhorrent to the typical American. Rather, legal reforms like voter identification are starkly partisan issues where political parties stand to benefit or be disadvantaged electorally as a primary policy outcome.

Many times, election laws forget the plights of those most burdened by legal obstacles. In line with Schattschneider's (1960, p.34-35) sentiment that the "flaw in the pluralist heaven is that the heavenly chorus sings with a strong upper-class accent", the laws governing election administration are not always written with the aim of improving the accessibility of voting for those facing the most obstacles. Some members of state legislatures have been explicit in their intention to craft laws which limit access to the ballot. For instance, in an argument supporting a bill to limit the ability of voters to update their registration information, Florida State Senator Michael Bennett expressed,

"This is a hard-fought privilege. This is something people die for. You want to make it convenient? The guy who died to give you that right, it was not convenient. Why would we want to make it easier? I want 'em to fight for it. I want 'em to know what it's like. I want them to go down there, and have to walk across town to go over and vote" (as cited in Wang 2012).

As a result, an individual’s costs of voting are conditional on where they live. Burden (2014) characterizes the costs of voting generated by legal reforms in two ways:

1. Direct effects: costs imposed by the state, including registration requirements, polling locations and hours, and rules such as identification requirements.

2. Indirect effects: nongovernmental actors can indirectly raise or lower the costs of voting depending on how much information they provide and the social incentives they generate.

2.3.2.1 Registration

Voter registration in the United States is governed primarily by state law. Although there are some national laws governing voter registration procedure, there is no requirement for states to implement voter registration policies.8 In fact, North Dakota does not have a voter registration system. Instead, voters present a valid form of identification containing the individual's name, North Dakota residential address, and birth date (North Dakota Secretary of State 2017).

According to the North Dakota Secretary of State (2017), the motivation for the repeal of the registration requirement in 1951 was that the state did not want to "employ the initial restrictive and costly barrier of voter registration". Likewise, there is scholarly evidence in support of the notion that lowering the costs associated with registration has positive effects on turnout (Burden 2014). Brians and Grofman (1999) examine the assumption that reducing registration barriers will lead to improvements in turnout among those at the bottom of the socioeconomic ladder. Contrary to popular belief, they find that the group with the greatest turnout expansion was middle-class voters.

2.3.2.2 Mode of Voting and Ballot Accessibility

Recently, there has been a movement in states to provide more alternative means of voting besides traditional in-person voting on Election Day. Using Election Administration and Voting Survey data from the 2008 and 2010 elections, Kimball and Baybeck (2013) find that specialized methods of casting a ballot occur more frequently in heavily populated jurisdictions. Provisional voting, in particular, is most likely to occur in heavily populated jurisdictions. In addition, large jurisdictions are also more technologically minded and innovative. This is most likely because voters in large jurisdictions are more demographically diverse, younger, and more mobile.

8 Section 4(b)(1) of the National Voter Registration Act of 1993 (NVRA) stipulates that the NVRA is not applicable to "A State in which, under law that is in effect continuously on and after March 11, 1993, there is no voter registration requirement for any voter in the State with respect to an election for Federal office".

Gronke, Galanes-Rosenbaum, and Miller (2007) categorize the different types of early voting as (1) in-person early voting (EIP), (2) no-excuse absentee balloting, and (3) vote-by-mail.9 The hope is that more voters would be able to participate if the costs of voting were lowered. However, those who are most likely to vote are also most likely to take advantage of early voting systems (Gronke 2014).

Contrary to popular belief, Burden (2014) finds that although early voting decreases the direct costs of voting, the indirect costs (i.e., reduced mobilization efforts) result in an overall decrease in turnout. The need to appear at a polling place on Election Day has an important mobilizing effect for voters who are on "the turnout bubble". Gronke, Galanes-Rosenbaum, and Miller (2007) find that the only early voting reform to increase turnout consistently in both presidential and midterm elections is vote-by-mail.

9EIP began in the 1980s in Texas when voters who did not think they would make it to the polls on election day could go to the county clerk’s or election office to cast their ballot early.

2.3.3 Election Administration Elements

In contrast to the costs of voting resulting from the laws regulating elections, there are costs associated with practical aspects of conducting an election. Elections are one of the most complex tasks government is asked to perform (Hall 2018). Modern election administration can be described as a complex “network” composed of public officials, nonprofit organizations, and private actors (Hale and Slaton 2008).

Further complicating matters, the passage of the Help America Vote Act (HAVA) in 2002 drastically changed the election administration environment. In a direct reaction to the 2000 election, Congress passed HAVA to correct the failings of the American electoral system. More specifically, HAVA "created the Election Assistance Commission (EAC), established a set of election administration requirements, and provided federal funding, but did not supplant state and local control over election administration" (The Help America Vote Act and Election Administration 2015).

Following the 2000 election, the decade of 2000-2010 saw the most election reform in our nation's history (Montjoy 2010). Many administrators find it difficult to keep up with the financial costs and labor costs associated with the requirements set by HAVA (Moynihan and Silva 2008). Currently, election administrators feel increased pressure to find "quick fixes" and act as problem solvers due to the environment of increasing distrust in our political system. Although election administration issues have become more politicized, local election officials often feel that their environment is mildly contentious but somewhat insulated from partisan politics (Moynihan and Silva 2008).

2.3.3.1 Local Election Officials

Despite increased examination of election administration in recent years, little research has been done on the role of local election officials in American elections. Local election officials (LEOs) are the elected or appointed officials in charge of the conduct of elections within a locality. In general, LEOs' administrative tasks can be sorted into nine distinct groups: (1) registration, (2) poll officials, (3) voting equipment, (4) ballots, (5) precincts and voting sites, (6) voting operations, (7) alternate voting (i.e., early, absentee), (8) counting, and (9) audits (Montjoy 2008). Using survey data of election administrators at the county level in 2005 and 2007, Moynihan and Silva (2008) find that LEOs tend to be slightly conservative ideologically, female, around 53 years old, with about 11 years of experience, and earning between $40,000 and $50,000 annually.

Kimball and Baybeck (2013) conducted a survey analysis of LEOs using a stratified random sample based on jurisdiction population in 2009. In general, LEOs in small jurisdictions are more likely to be part-time and have a fairly intimate setting for election administration. LEOs in medium-sized jurisdictions have slightly more responsibility, but still can personally visit all polling places on Election Day and can personally keep in contact with voters. On the other hand, LEOs in large jurisdictions must act more like an "executive in a large organization". In terms of policy preferences, LEOs in

large jurisdictions are more supportive of policies to help manage the "crush of voters on Election Day" (i.e., vote centers, early voting, and vote-by-mail) and provisional voting, while LEOs in small jurisdictions are more likely to oppose them.

Kimball et al. (2013) examined the attitudes of local election officials directly with data from three national surveys of local election officials in 2005, 2007, and 2009. Partisan differences in attitudes towards election laws tend to exist only in populous jurisdictions. Partisan politics and negative attitudes towards state officials were sensed most by LEOs in areas with difficulty implementing HAVA requirements. Contentious election administration environments and negative attitudes towards state officials are most likely to be felt by LEOs in battleground states. They also find modest results suggesting that nonpartisan LEOs sense less partisan politics in state election administration and a less contentious election administration environment. Ultimately, Kimball et al. (2013, p.567) conclude that LEOs interpret their policy environment as administratively burdensome due to "an ongoing set of unfamiliar requirements that have made their life more difficult". In other words, avoiding burdens is more likely than partisanship to be salient to LEOs in formulating policy preferences.

2.3.3.2 Poll Workers

Despite the integral nature of poll workers in American democracy, there has been little research on how they impact the election

experience. Using survey data of poll workers from the 2006 primary election in Cuyahoga County, Ohio and the Third Congressional District in Utah, Hall, Monson, and Patterson (2007) find that, in terms of demographics, poll workers are not a homogeneous group. Consistent with popular belief, poll workers are mostly female. Contrary to popular belief, however, poll workers are not uniformly elderly.

Claassen et al. (2008) argue that voter-poll worker interactions are similar to commercial service encounters. Much like a salesperson at a retail store, poll workers play an important part in determining the quality of the voting experience. Most importantly, when voters have a positive experience with poll workers, they tend to feel more confident that their vote was counted. These findings are in line with marketing research on customer satisfaction. Marketing scholars Eisingerich and Bell (2008) find evidence suggesting that functional elements such as courteousness of staff and personalized attention are among the most accessible elements in value formation of a service delivery. Likewise, the failure to meet minimum expectations of civility leads to negative feelings about the experience (Price, Arnould, and Tierney 1995).

2.3.3.3 Polling Places

Haspel and Knotts (2005) argue that the government has the ability to alter the costs of voting through the geographic locations of polling places. They use administrative and GIS data (purposely avoiding survey data) from the 2001 mayoral race in Atlanta, Georgia. Haspel and Knotts (2005) chose

to use data from a single city to maximize internal validity, although they understand there is a trade-off in terms of external validity. Distance was measured as "logged distance" and the interaction "logged distance * vehicle availability" due to the notion that one additional block is much less of a burden than one additional mile when walking, but only slightly less of a burden when driving. Haspel and Knotts (2005) find that the availability of a vehicle increases the likelihood of voting by about 20%. They find that when precincts are split and an additional precinct is added, the benefits of closer proximity outweigh the informational costs associated with a new location. Overall, Haspel and Knotts (2005) find that voters are sensitive to changes in polling place proximity (distance). They conclude that the "C" term in the calculus of voting is indispensable in any turnout calculation.

Probably the most well-known cost of voting associated with election administration is line length. Many speculated that long lines resulting from too few available voting machines cost John Kerry the 2004 presidential election. Franklin County, Ohio, in particular, received national scrutiny in 2004 due to long lines. In a scientific examination of the relationship between available voting machines and registered voters in Franklin County, Ohio, Highton (2006) finds that by increasing the number of registered voters per available voting machine from 250 to 300 and 400, turnout decreases by about 2.6 and 7.7 percentage points respectively.10 Ultimately, Highton (2006) concludes that Kerry would have won a substantially greater number of votes if there

10This effect is non-linear.

were more voting machines available in Franklin County, but not enough to close the margin needed to win Ohio.

2.3.3.4 Voting Machines

Voting systems used in federal elections must provide for error correction by voters, manual auditing, accessibility for disabled voters, alternative languages, voter privacy, ballot confidentiality, and error-rate standards. In their survey of election administrators, Moynihan and Silva (2008) find that the most difficult task following the implementation of HAVA is the provision of adequate voting systems for disabled voters.

The choice of voting equipment in some states is a local matter. This means the voting equipment used in many localities is conditional on the preferences of the officials charged with purchasing voting equipment. Borrowing from cognitive dissonance theory and prospect theory (Kahneman and Tversky 1979), Moynihan and Lavertu (2012) find evidence that (1) election officials who have faith in technology and currently use direct-recording electronic machines (DREs) are more likely to support DREs over other less technological voting systems, and (2) LEOs who are confident in their knowledge of DREs disproportionately report the benefits. They do not find statistically significant evidence in support of the notion that election officials who use DREs are more likely to discount criticism of DREs. Overall, Moynihan and Lavertu (2012) find evidence for "status quo bias" (liking what you have) rather than "confirmation bias" (rationalizing what you decided to use) in LEOs' attitudes towards

DREs. The implication is that LEOs may be unwilling to replace problematic equipment due to cognitive biases. Likewise, Moynihan and Silva (2008) find that LEOs have a strong preference for the voting system they currently implement. In other words, LEOs have a tendency to commit to the system they have previously chosen to use.

2.3.3.5 Financial Costs

Above all, the least understood aspect of election administration is the costs of elections. Gronke et al. (2008, p.449) remark: "The costs of elections has been referred to as the 'holy grail' of election administration research because so little is known about the subject (T. Hall, personal communication)". There are many non-trivial additional costs (new ballot forms, additional hours worked, rental space, etc.) associated with upgrading voting equipment. These additional costs also vary year-to-year based on market prices. Virtually all LEOs are concerned about rising costs and shrinking budgets (Montjoy 2010).

Although there is a multitude of financial costs associated with conducting elections, there is a dearth of academic research on the topic in any discipline. This is concerning given legal principles such as "one man, one vote." As noted by Norris (2015, p.145), decentralized electoral regimes like that of the United States face the problem of non-uniform application of citizens' voting rights.

"Some poorly resourced local agencies that have suddenly to ramp up efforts to run contests at periodic intervals, may lack the professional experience, permanent personnel, and technical machinery to manage tasks well. Decentralization giving more discretion to local electoral officials also expands the number of entry points and thus the potential risks of corruption and malfeasance" (Norris 2015, p.145).

Of the little scholarly research that exists, there is some evidence that cost-saving measures indeed have an impact on participation. For example, to cut costs, administrative units may choose to open fewer polling places. This cost-saving measure is referred to as polling place consolidation. Unfortunately, McNulty, Dowling, and Ariotti (2009) find a negative effect of polling place consolidation on turnout. Moreover, decreases in turnout are most likely the result of increased information costs and transportation costs (Brady and McNulty 2011).

Perhaps most importantly, local election officials may not be in control of the age or state of their equipment due to limited budgets and access to resources. In many cases, a major catastrophe has to occur to provoke budget-setting entities to allocate funds. One example is Fairfax County, Virginia, where in a 2013 recount officials discovered 2,000 uncounted ballots in an optical-scan voting machine (Vozzella 2013). Although 2,000 forgotten ballots may seem insignificant in the most populous county in Virginia, this discovery flipped the results of the statewide Attorney General race. Mark Herring ultimately won the race for Attorney General by 165 votes, making the 2013 Attorney General race the closest statewide race in Virginia

history. This embarrassment led Fairfax County to overhaul its voting equipment in 2014.

In light of these types of incidents, there has been a movement among states to mandate paper trails. This means that some states need upgraded or new equipment to satisfy this requirement regardless of the performance of their current machines. For these reasons, there will continue to be demand for voting equipment. Unfortunately, the federal budget for FY2011 cut federal funding for "election reform grants" due to states choosing to accrue interest on about $1 billion in EAC payments (The Help America Vote Act and Election Administration 2015). This has left many localities with limited options and resources for making necessary improvements to their voting equipment.

Further complicating matters, voting equipment is purchased from private-sector companies by election administrators (The Help America Vote Act and Election Administration 2015). The vendor market has undergone consolidation since 2002, leaving fewer and fewer vendors in control of most of the market. Given the various state and federal guidelines and mandates to audit and provide assistance to disabled voters, there is no viable method to shift to a voting system without components furnished by private vendors. Despite this, there is some evidence that early voting practices are cost effective relative to Election Day voting (Gronke, Galanes-Rosenbaum, and Miller 2007) and that shifting to a vote-by-mail only system is more cost-efficient than in-person polling (Montjoy 2010).

2.4 Increasing Confidence in American Elections

The second method for increasing turnout in the calculus of voting framework is to increase the "D" term. As previously noted, "D" is classically understood as citizen duty. Alvarez, Hall, and Llewellyn (2008) find a positive relationship between turnout and confidence in the vote count. Those who have more faith in the electoral process to accurately tabulate ballots nationwide are also more likely to report being more likely to vote in the next election. Classically, active citizenship (or the potential of active citizenship) is inextricably linked with political efficacy as fundamental norms underlying a civic culture of participatory democracy (Almond and Verba 2016). Although citizens do not need to be continuously active in politics and habitually vote, they must believe that they can influence government when they feel the need to do so.
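For reference, the "C" and "D" terms invoked throughout this chapter come from the standard calculus-of-voting expression, conventionally written (following Riker and Ordeshook) as

R = pB - C + D

where R is the net reward from voting, p the probability of casting a decisive ballot, B the benefit from one's preferred candidate winning, C the costs of voting, and D the duty or expressive benefit. This is the conventional rendering of the framework rather than a restatement of any earlier numbered equation in this dissertation.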

Cross-nationally, Norris (2014) also finds a positive relationship between perceptions of electoral integrity and individual-level voting participation.11 The measure of confidence in elections used by Norris (2014) is a general measure of confidence in national election authorities. Given strong evidence in support of the notion that confidence in elections is associated with high turnout, it is not surprising that the United States, known for low turnout rates, has relatively low levels of voter confidence. Relative to other

11Norris (2014) finds evidence in support of perceptions of election integrity strengthening voter turnout. Similarly, higher levels of perceived election malpractices are associated with lower levels of turnout.

full-fledged democracies, the United States ranks 35th out of 51 countries surveyed in the Perceptions of Electoral Integrity survey in terms of confidence in electoral authorities (Norris, Wynter, and Gromping 2017). Figure 2.5 depicts the United States' low ranking.

2.4.1 High Quality Elections as a Public Value

As demonstrated above, the internal aspects of "D" and how they influence individual-level decision-making have already been discussed at length throughout the literature. How elections themselves shape civic culture, on the other hand, is not well understood. Election administration agencies should be insulated from outside pressures, have functional effectiveness, and maintain an impartial administrative structure (Norris 2015). Within the international context, even instances of minor technical irregularities can have detrimental long-term effects. Norris (2015, p.158) notes that in countries with histories of conflict and instability such as Thailand, Afghanistan, and Nigeria, administrative errors are "far more damaging for perceptions of electoral integrity by sowing seeds of suspicion in deeply divided societies".

For instance, in Nigeria’s most recent election the polls were postponed one week just hours before voting was scheduled to start.12 It is believed that the postponement significantly lowered turnout due to a lack of an absentee voting system in Nigeria. Election authorities explained that many polling

12The election scheduled for February 16, 2019, was delayed until February 23, 2019.

places were not properly supplied with voting materials (Bearak 2019). Many suspect that this reasoning was fabricated to cover up improper dealings between the election commission and one of the political parties. The sudden postponement ignited controversy from both political parties regarding the integrity of the election process in Nigeria.

Figure 2.5: Country-Level Ranking of Perceptions of Confidence in Electoral Authorities

Source: Norris, Wynter, and Gromping (2017). Note: Respondents were asked, "Lastly we are interested in your views about organizations. For each one, could you please rate how much confidence you have in those organizations in this country, from a 1 (no confidence at all) to 10 (a great deal of confidence) scale? – confidence in electoral authorities".

The 2019 Nigerian election highlights how a seemingly benign change in election administration had profound impacts on Nigerian civic culture. Not only was there a substantial and negative impact on participation, but election integrity was also severely undermined. Only time will tell if Nigerians will make arrangements to vote (i.e. take time off work to return to their hometown) in future elections given the likelihood that the election could be delayed. Although the United States does not suffer from the same level of conflict as less developed democracies, recent electoral maladministration in high-profile elections has made the notion of “rigged” elections more than a conspiracy theory.

Drawing from Public Interest Theory, I conceptualize "D" more explicitly as the combination of individual predispositions towards participation in democracy. I posit that state and local election officials can influence individuals' level of "D" by increasing the public value of elections. The value created by election administration agencies is in line with the Public Value Theory scholarship in public administration. Benington (2011, p.2) notes that the conceptualization of the value generated from public services as distinct from private value offers a "clearer conceptual framework" for judging outcomes in

the public sector. Moore (2014, p.468) defines public value as "individually held values that focus on the welfare and just treatment of others" and "particular values that are articulated and embraced by a polity working through the (more or less satisfactory) processes of democratic deliberation to guide the use of the collectively owned assets of the democratic state". Bozeman (2007, p.13) provides an alternative definition of public values as:

"A society's 'public values' are those providing normative consensus about (a) the rights, benefits, and prerogatives to which citizens should (and should not) be entitled; (b) the obligations of citizens to society, the state, and one another; and (c) the principles on which governments and policies should be based" (Bozeman 2007, p.13).

In a meta-analysis of competing definitions of public value, Rutgers (2015) finds the term public value to be a pantheon concept for which there is “no identifiable singular set of characteristics for all public values”. Despite the lack of a consensus in the Public Value Theory literature in defining what constitutes a public value, the delivery of high-quality elections is arguably a public value in a well-functioning democracy.

Embedded in the civic culture of the American people are the "great ideas of democracy", articulated by Almond and Verba (2016) as "the freedoms and dignities of the individual" and "the principle of government by consent of the governed". In their five-country analysis of civic culture in the mid-twentieth century, Almond and Verba (2016) explain that the most distinctive

characteristic of the United States' civic culture is the "highly developed and widespread" role of the participant in democracy. Consistent with the rational-activist view of democracy, Americans share a widespread norm that one ought to participate in the political process.13

In contrast to political attitudes, public values are much more stable over time (Bozeman 2007). In terms of elections, Americans universally embrace the public value of fair elections. Despite this widespread agreement, there is volatility in public opinion on the best set of policies and practices for achieving this societal goal. The policy of voter identification is a prime example of this concept. In the past decade, the practice of voter identification went from being a noncontroversial policy attracting little attention to being a staunchly partisan issue that has made its way to the Supreme Court.

The delivery of high-quality voting experiences and the fostering of trust in the electoral system are ways in which election administrators create public value. In other words, a pleasant and enjoyable voting experience translates to value in the mind of the voter. This dissertation operationalizes changes in the public value of election administration through the measurement of election experiences (polling place and poll worker evaluations) and confidence in the vote count. I hypothesize that increases in the measures of

13According to the rational-activist view of democracy, the participation of citizens is the best metric for judging democratic success (see Burden 2014). Almond and Verba (2016) concede that critics are justified in arguing that this view holds citizens to an unrealistic ideal.

public value created by election officials will increase individuals' propensity to participate in future elections. Ultimately, election administration can directly influence the calculus of voting for an individual through the costs associated with election procedures as well as through policy outcomes aimed at achieving a public interest in high-quality elections.

Chapter 3

The Influence of Voting System Quality on County-Level Turnout

3.1 Introduction

Despite the obvious relevance of the election experience for subsequent democratic participation, the voting and elections literature has overwhelmingly focused on voting behavior rather than the actual act of voting itself. Since the controversial 2000 presidential election, there has been an increasing demand for information about improving the conduct of American elections. With only a decade-and-a-half of sustained attention by political scientists, our understanding of election administration has grown greatly. Most notably, research has focused on politically salient issues like turnout, residual vote rates, voter identification, and voter suppression. Although these issues are important and contemporaneous, persistent less visible problems plague the system and attract scant scholarly attention. It takes major election mishaps to garner attention to issues that have been of the utmost concern to local election officials all along, such as creating foolproof ballots. It can be argued that the crisis in 2000 could have been avoided if punch card voting machines had allowed for longer and bigger ballots.

This study hopes to use data sets not fully leveraged to shed light on widespread persistent problems facing election administrators. What we know about these sorts of issues is, for the most part, anecdotal and descriptive. Some research has helped us better understand elements of election administration, but it is (by and large) from the perspective of the voter. The data used for this analysis are derived from county, municipal, and state contracts for the acquisition of voting equipment from election equipment vendors. This study will establish a relationship between turnout and the resources allocated to elections, in particular financial resources spent on voting equipment by county governments.

3.2 Election Administration in the 21st Century

The decade of 2000-2010 saw unprecedented election reform (Montjoy 2010). The most substantial reform was the Help America Vote Act of 2002 (HAVA; P.L. 107-252). In a nutshell, "HAVA created the Election Assistance Commission (EAC), established a set of election administration requirements, and provided federal funding, but did not supplant state and local control over election administration" (The Help America Vote Act and Election Administration 2015). As a result, the election administrative system has become increasingly complex, leaving the burden on local election administrators to navigate and implement changes (Montjoy 2008).

In a survey of local election officials, Kimball et al. (2013, p.567) report that election officials interpret their policy environment as administratively

burdensome due to "an ongoing set of unfamiliar requirements that have made their life more difficult". Currently, election administrators feel increased pressure to find "quick-fixes" and act as problem-solvers. Many administrators find it difficult to keep up with the financial and labor costs associated with the requirements set by HAVA. Recent research shows that there are many non-trivial additional costs (new ballot forms, additional hours worked, rental space, etc.) associated with upgrading voting equipment. Furthermore, these additional costs vary year to year based on market prices. These rising costs and decreasing budgets are of the utmost concern in an era when administrators have to oversee elections where partisan suspicions are high. As a result, administrators have become handicapped in their ability to provide high quality elections. Like the straw that breaks the camel's back, these less visible but persistent problems can become catastrophic.

Our current knowledge of how much localities spend on voting equipment is speculative and anecdotal. According to the 2013 Survey of US Local Election Officials, a national survey of local election officials themselves, voting technology is the most daunting issue in upcoming elections. The results presented in Table 3.1 show that about one fourth of local election officials believe that voting technology is the area in most need of significant improvement or an upgrade (Stewart, Shaw, and Ansolabehere 2013). Without a sense of the costs of elections, it will become increasingly difficult for localities to take preventative measures before an Election Day catastrophe.

Underlying the concerns of local election officials about the state of

Table 3.1: Areas of Election Administration That Are in Significant Need of Improvement or an Upgrade, 2013

Question Text: Looking forward, over the next 5 to 10 years what areas of election administration are in significant need of improvement or an upgrade? [Please choose up to 3]

Voting Technology and voting machine capacity: 24%
Availability of Poll Workers: 22%
Voter education: 18%
Training and Management of Poll workers: 11%
Postal service issues: 10%
Preparedness for natural disasters or other emergencies: 9%
Ballot Simplicity and Ballot design: 9%
Management and processing of Absentee Voting: 8%
Availability of Polling Places: 7%
Accessibility for Uniformed and Overseas Voters: 6%
Management and processing of Provisional ballots: 5%
Management and processing of Early Voting: 5%
Quality of Voter Registration Lists and Management of Poll Books: 4%
Management, Operation, and Design of Polling Places: 3%
Professional training: 3%
Staffing of the Election Office on Election Night: 2%
Keeping Lines to a Minimum: 2%
Accessibility for people with disabilities or other special needs: 2%
Vendor issues: 1%
Ballot design, signage, and communications for people who do not speak English or with limited English proficiency: 1%
Other (specify): 5%
Don't know: 7%

Source: 2013 Survey of Local Election Officials (Stewart, Shaw, and Ansolabehere 2013)

their voting equipment is the notion that voting technology has a substantial impact on the quality of American elections. Voting equipment can quite literally be considered the machinery of democracy. Therefore, it is important to understand the relationship between the quantity and quality of voting equipment and voting behavior. This chapter will examine the impact of differences in resource allocation on turnout. I will use the term "resource allocation" loosely to describe financial resources spent and labor mobilized to maintain or improve a jurisdiction's voting equipment. Now that HAVA funding is no longer provided to localities at the intended capacity, there is substantial variability in the amount of resources available to localities to spend on voting equipment. In other words, some localities are resorting to austerity while others are free to make necessary purchases on a regular basis.

3.3 Hypotheses

What is of most interest in this study is the degree to which county expenditures on voting equipment impact county-level turnout. Turnout is arguably the most important empirical metric for judging the health of democracy (Burden 2014). As a starting point, it is reasonable to investigate the state of voting equipment in the United States as it relates to turnout. The implementation of HAVA was short-sighted in that it forced states and localities to purchase higher quality equipment than they were predisposed to purchasing, while also creating barriers to entry for new vendors through a rigorous certification process.

Another source of variation is in terms of election services purchased from election services vendors. To date, there are no published studies in any political science journal regarding the impact of vendors on any aspect of the electoral process. Election equipment in the United States is almost exclusively purchased from private-sector vendors. When a jurisdiction purchases voting equipment, it is actually purchasing the hardware and software along with a variety of services for the initial implementation and long-term service and support of the system. In other words, not only is voting equipment purchased, but so are services provided by the vendors to maintain the equipment. Unlike other industries, customers cannot substitute away from voting equipment when vendors increase their prices. Because voting equipment uses proprietary software, jurisdictions also cannot mix and match products from different companies. Therefore, firms with large product catalogs are desirable. The resulting voting technology industry is "dysfunctional" and "malformed" (Interview with Greg Miller, Open Source Election Technology Institute). Three companies (Election Systems & Software, Dominion Voting Systems, and Hart Intercivic) serve the vast majority of registered voters across the United States (see Appendix Figure A1).

On March 8, 2010, the Department of Justice and Attorneys General from Arizona, Colorado, Florida, Maine, Maryland, Massachusetts, New Mexico, Tennessee, and Washington filed an antitrust lawsuit against Election Systems and Software, Inc. (ES&S) for its acquisition of Premier Election Solutions, Inc. and PES Holdings, Inc. (collectively, "Premier"). ES&S and

Premier were the two largest and most competitive firms in the industry, as well as the two largest voting system manufacturers. ES&S, the largest manufacturer, collected revenue of $149.4 million in 2008. Premier, the second largest, collected revenue of $88.3 million in 2008. If the acquisition were to go through, ES&S would be the provider of over 70% of the voting equipment systems used in the United States. The Department of Justice and Attorneys General were particularly fearful that "prices for voting equipment systems likely will increase, while the quality and innovation likely will decline, as a consequence of reduced competition in violation of Section 7 of the Clayton Act, 15 U.S.C. Section 18".1 Ultimately, the Department of Justice required ES&S to divest assets purchased from Premier Election Solutions, Inc.

H1a: Higher turnout rates will be associated with higher levels of resource expenditure on voting equipment in the form of more dollars spent per registered voter.

H1b: Higher turnout rates will be associated with the purchase of supplemental vendor services.

Despite the intervention of the Department of Justice, choices in voting technology have dramatically decreased following HAVA. Given this, many election officials have not been able to purchase new equipment or make the necessary upgrades. Unlike many other types of goods, computers are primarily subject

1U.S. et al v. Election Systems and Software, No. 1:10-cv-00380, March 8, 2010.

to technological depreciation (Cho 2011). Physical depreciation, on the other hand, is a less prevalent reason for replacement of computers. The financial cost of maintaining legacy computers increases with age. The average working lifespan of computing equipment observed by Cho (2011) from 1989 to 1999 decreased from six to five years due to an acceleration in technological depreciation. If we equate the lifespan of a voting machine to that of a typical computer, the voting machines purchased by states through federal funds in the early 2000s are long overdue for replacement.

Additionally, when machines are taken out of service for repairs, overall productivity is handicapped. This was the case in the 2016 general election, when broken ballot scanners were reported across New York City, New York. In some precincts all scanning machines were broken. Many voters were left entrusting their ballots to poll workers under the promise that they would faithfully scan ballots placed in emergency ballot boxes. Some voters, however, were not confident that their ballots would be counted as intended and took their marked ballots out of the polling place (Millman 2016). Even though 61 teams were sent out to resolve scanner issues, some voters voiced their mistrust in New York City's election administration. At one precinct in the Bronx a voter shouted, "It's rigged!" and walked away without casting a ballot ("Long lines, issues reported at polling stations in Parkchester" 2016). New York City encountered similar problems in the following major election. In 2018, New York City experienced a "citywide paper jam" resulting from aging optical scanners unable to handle the volume of ballots cast due to record high turnout. As a

result, some voters waited in line for up to three hours (Neuman and Pager 2018).

It is not just old and deteriorating equipment that can be catastrophic; steep learning curves with new equipment or severely outdated equipment can also result in serious Election Day issues. In the 2016 general election, all voting machines in Wilson County, Tennessee went "down" at the opening of the polls. It was estimated that voting was held up across the county for about 45 minutes during the peak of the morning rush. As a result, voters queued up in lines with waiting times over an hour. Wilson County Elections Administrator Phillip Warren explained, "It was a glitch and (the workers) weren't trained and we had to retrain them real quick" (Alund, Humbles, and Sawyer 2016). The issue that caused the shutdown was the improper printing of bar codes on ballots used for tabulation.2

H2: Lower turnout rates will be associated with older voting equipment.

In all of these anecdotes about voting equipment issues during major elections, the costs of voting were raised significantly for individuals in those jurisdictions. Moreover, all of my hypotheses examine the impact of the quality of election systems on aggregate-level turnout. By increasing the quality of elections, I posit that the costs of voting (described in Chapter 2) will in turn be lower. Alternatively, I posit that lower quality voting systems result in decreased turnout due to higher costs of voting. Given the

2It should be noted that this was the first time the county used the ExpressVote voting system for a major election (“Wilson Co. Using Device To Assure Voter Confidence” 2016).

lack of previous research, findings both in support and in opposition will be substantively informative.

3.4 Turnout in a Hyper-Federalized System

Unlike other aspects of American politics, elections are inherently geographic. Article 1, Section 4, Clause 1 of the U.S. Constitution explicitly grants states the responsibility of running elections. This responsibility, however, has devolved to the local level, consequently leaving many localities footing the bill for major election expenditures. This study is motivated by the relationship between election expenditures and the quality of elections in the eyes of the voters. In other words, can localities buy high quality elections?

As a result of elections being conducted by the states, and more precisely the localities, there is not widespread uniformity in American electoral procedure. Theoretically, this lack of uniformity should lead to policy improvement because jurisdictions can act as laboratories of democracy. In reality, there is little sharing of empirical data and scientific research between jurisdictions. This makes sense when we consider that localities (often with few resources) are responsible for the vast majority of election conduct. Those primarily in charge of elections, local election officials, are faced with many institutional and budgetary restrictions with little concrete evidence for what will actually work best.

Given the nature of the data, it is appropriate to use spatial methodology to generate inferential statistics. Due to the nature of federalism, where

regional subunits of government jointly share authority with the national government, location in the United States determines a great deal about the manner in which citizens are represented. This is consistent with the Madisonian conception of American popular sovereignty in that power is distributed throughout the system. Moreover, Ewald (2009, p.97) argues, "uniformity is actually not a central value of American elections". Even during the Founding, the notion of a decentralized election system was not a controversial topic. Given this, it would be naive to consider measures gauging electoral participation, such as turnout, as taking on uniform values across the country.

The traditional way of calculating turnout is by dividing the total number of voters by the voting age population (Burden and Neiheisel 2013). For the purposes of this study, turnout is calculated by dividing the total number of voters by the number of voters registered on Election Day. Since this study is primarily focused on measuring the impact of election administration variables on facilitating voters in casting their ballots, it is not necessary to include those who are not eligible to vote or do not wish to participate.3
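Written as a simple ratio, the measure just described amounts to

\text{Turnout}_i = \frac{\text{total voters}_i}{\text{registered voters}_i}

for each county i, in contrast to the voting-age-population denominator used in the traditional calculation. (This is an unnumbered restatement of the definition above, not an equation from the original analysis.)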

We should expect to find “neighborhood effects” in county-level turnout for two reasons in particular:

1. Not all electoral districts are nested within county boundaries.

2. County boundaries were not constructed (nor redefined) to accommodate

3Since voters must be registered well in advance of Election Day, the exact number of registered voters is known on Election Day.

sociopolitical communities.

As an example of this phenomenon, Figure 3.1 is a map of the City of Austin, TX. Highly competitive races for Austin city government will drive turnout not only in Travis County (which contains the majority of the city), but also in Bastrop, Hays, and Williamson counties (which also contain some of the city). As a result, this scenario would lead to a clustering pattern in turnout across the spatial units, y_{Travis} and y_{Williamson}.

3.4.1 Spatial Autocorrelation

In political science research, especially research involving elections, there is important geographic variation. With more attention being paid to election administration since the 2000 election, there is an increasing need for political scientists to be aware of how to account for geographic dependencies in the regression analysis. In similar disciplines, such as quantitative geographic, medical, and demographic research, controlling for spatial autocorrelation is also common practice.4

The term spatial refers to “how areal units are arranged on a planar map” (Griffith 1987, p. 10). Autocorrelation occurs when the ordering of observations produces a relationship between pairs of individual observations. Formally, autocorrelation means

4Concerns over spatial autocorrelation are particularly evident in research on communicable diseases in medical statistics.

Figure 3.1: Boundary Map of Austin, TX

h_i = f(h_j), \quad i \neq j \qquad (3.1)

where an individual observation h_i is a function of other observations.

There are two types of spatial autocorrelation that researchers should investigate when using spatial data. The first type of spatial autocorrelation is in the dependent variable. The second type is in the regression error term. Substantively, the difference between these two types of spatial autocorrelation relates to the functional form of the spatial processes (Griffith 1987).

When spatial autocorrelation is observed in the dependent variable, it is because the data is organized in such a way that observations’ placement and proximity to one another are non-random. Equation 3.2 presents the functional form of autocorrelation in the dependent variable,

y_i = f(y_1, y_2, \ldots, y_n), \quad i \notin N, \qquad N = \{1, 2, \ldots, n\} \qquad (3.2)

where y_i is a function of the values of other observations of the random variable Y at other locations, y_1, y_2, \ldots, y_n. When this is the case, there is clustering of similar (positive spatial autocorrelation) or dissimilar (negative spatial autocorrelation) observations.

Odland (1988, p. 53) defines spatial autocorrelation in the error term as instances where "the error at each location depends on the errors at other locations". This generally occurs when the spatial process generating autocorrelation is caused by some unobserved variable. Consider the linear regression

model in Equation 3.3, where ε_i is correlated with ε_j and i ≠ j.

y_i = \alpha + \beta x_i + \epsilon_i \qquad (3.3)

When the errors at one point in space, i, are dependent on those at another location, j, the errors of the regression model are no longer independent. The model in Equation 3.3 would then be in violation of the Gauss-Markov condition of non-autocorrelation, indicating that the ordinary least squares estimator is a sub-optimal estimator relative to the models outlined in the next section of this chapter. A statistically significant result would imply that we can reject the null of randomness and independence of observations. It would then be appropriate to consider the dependent variable as being systematically organized across space. In other words, the pattern observed in the dependent variable is unlikely to have occurred if it was truly randomly distributed across space. Global spatial statistics estimate the degree to which the data set is spatially organized in clusters of like values. The most common statistic for testing spatial autocorrelation in continuous data is Moran's I.5

5It should be noted that Moran's I is one of many statistics measuring spatial dependency. Moran's I is a correlation coefficient between observations which are nearest neighbors (Moran 1950). Moran's I is asymptotically normal with an expected value of −1/(n−1) under the null hypothesis of independently distributed observations (Griffith 1988; Cliff and Ord 1981; Moran 1950; Moran 1948). The theoretical sampling distribution can then be used to generate a confidence interval and a Z-statistic to test whether it is appropriate to reject the null hypothesis of independence of observations. Statistical significance of Moran's I can also be determined through non-parametric means. Exact p-values of Moran's I can be obtained from a random permutation test with a permutation distribution composed of n! Moran's I statistics. Due to the large number of permutations necessary for most geographic analyses, it is prudent to use a Monte Carlo approach to approximating the permutation distribution (Anselin 2009).

Table 3.2 presents the results of a two-sided Moran's I test.6 These results indicate that there is indeed spatial autocorrelation present in the data. The results are nearly identical across approaches. Positive statistics across Moran's I specifications and deviation from the expected value indicate that there are instances of clustering of high-high and low-low values among neighboring units. The Lagrange multiplier (LM) test presented in the following section will determine the appropriate spatial autoregressive regression model to use on the data.

Table 3.2: Moran's I statistics

                              Moran's I Statistic    E(I)      P-value
Parametric approach:
  Dependent variable          0.743                  -0.002    0.000***
  Error term                  0.070                  -0.026    0.001*
Monte Carlo approach:
  Dependent variable          0.767                  -         0.001***
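As a minimal sketch of how such statistics can be computed, the spdep package in R (the same package the text later notes can implement the robust LM tests) provides both the parametric and Monte Carlo versions of Moran's I. The shapefile path, variable names, and OLS formula below are hypothetical placeholders, not the actual files or specification used in this dissertation.

library(sf)     # read the county polygons
library(spdep)  # spatial weights and Moran's I

# Hypothetical input: county polygons with a county-level turnout variable
# and the covariates used in the non-spatial model.
county_sf <- st_read("counties_2016.shp")

# Queen contiguity: neighbors share at least one border or one vertex.
nb <- poly2nb(county_sf, queen = TRUE)
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)

# Parametric, two-sided Moran's I for the dependent variable.
moran.test(county_sf$turnout, lw, alternative = "two.sided", zero.policy = TRUE)

# Monte Carlo approximation of the permutation distribution.
moran.mc(county_sf$turnout, lw, nsim = 999, zero.policy = TRUE)

# Moran's I for the residuals of the non-spatial OLS specification.
ols_fit <- lm(turnout ~ yearly_spending + equip_age, data = county_sf)
lm.morantest(ols_fit, lw, alternative = "two.sided", zero.policy = TRUE)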

3.4.2 Lagrange Multiplier Test for Spatial Dependence

When determining the appropriate model to use for analysis, Anselin et al. (1996) suggest using the LM test for spatial dependence. The LM test is used for testing hypotheses about parameters in the likelihood framework.

6The weights matrix used to generate the Moran’s I statistics uses queen contiguity which is when the weighting scheme includes all neighbors sharing at least one border and all neighbors sharing at least one vertex. This is in contrast to Rook contiguity where only neighbors sharing borders are included in the weighting scheme.

More precisely, the LM test tests the hypothesis of a simpler model by maximizing the log likelihood subject to restrictions. When the LM statistic is large, the null hypothesis of a simpler model should be rejected. Compared to the Wald test and the Likelihood-ratio test, the LM test is the least stringent and most appropriate for testing model specifications (Engle 1980).

Anselin et al. (1996) note that there is no theoretical basis for the assumption of W_1 ≠ W_2 in applied research. Since there is no substantial reason for this assumption in terms of this analysis, the LM statistic used for

hypothesis testing will be simplified such that W_1 = W_2 = W.7

Model selection in this paper will rely on two LM tests presented in Anselin et al. (1996). Equation 3.4 tests the hypothesis of a spatial lag

(H_0: ρ = 0) in the presence of spatial disturbances. Equation 3.5 tests the hypothesis of spatial disturbances (H_0: λ = 0) in the presence of a spatial lag.

LM_\rho = \frac{\left[\,\tilde{\mu}'Wy/\tilde{\sigma}^2 \;-\; \tilde{\mu}'W\tilde{\mu}/\tilde{\sigma}^2\,\right]^2}{N\tilde{J}_{\rho\cdot\beta} - T} \qquad (3.4)

LM_\lambda = \frac{\left[\,\tilde{\mu}'W\tilde{\mu}/\tilde{\sigma}^2 \;-\; T\,(N\tilde{J}_{\rho\cdot\beta})^{-1}\,\tilde{\mu}'Wy/\tilde{\sigma}^2\,\right]^2}{T\left[\,1 - T\,(N\tilde{J}_{\rho\cdot\beta})^{-1}\,\right]} \qquad (3.5)

Both tests allow for the parameter not of interest to be unrestricted.8

7The only caveat to making this simplification is that the null of simultaneously testing for ρ and λ (the general spatial process regression model) cannot be tested using the LM test due to identification issues (Anselin et al. 1996). 8In practice, there is a two-step procedure for implementing LM tests for spatial dependence. Depending on the results of the typical LM test for spatial autocorrelation in the dependent variable or the error term, it may be necessary to use a more robust LM test. The results of the robust LM test should identify whether the spatial process generating autocorrelation is in the dependent variable or the error term. The robust LM test for spatial dependence can be implemented in R using the package "spdep".

65 Table 3.3: Lagrange Multiplier Test for Spatial Dependence

Model                   Lagrange Multiplier Test
Autoregressive Lag      0.599
Autoregressive Error    5.660*

The results presented in Table 3.3 suggest that the spatial autoregressive error model is more appropriate than a spatial autoregressive lag model.
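As a sketch of how this two-step procedure can be run in practice, spdep reports the standard and robust LM statistics directly from the non-spatial OLS fit; the model object and weights below are the hypothetical ones from the Moran's I sketch above, and the exact test names may vary slightly across spdep versions.

library(spdep)

# Standard and robust Lagrange multiplier tests for spatial lag and
# spatial error dependence, computed from the OLS residuals.
lm.LMtests(ols_fit, lw,
           test = c("LMlag", "LMerr", "RLMlag", "RLMerr"),
           zero.policy = TRUE)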

3.4.3 Spatial Autoregressive Error Regression Model (SAER)

When the regression error in one location is dependent on the error in another location, it is necessary to use the spatial autoregressive error regression (SAER) model. In the SAER model, the autocorrelated error term, µ, is a function of the autocorrelation parameter, λ, a matrix of spatial weights for paired observations, W, the autocorrelated error term of another observation, and an identically and independently distributed error term, ε (Odland 1988).

Y = X\beta + \mu, \qquad \mu = \lambda W \mu + \epsilon \qquad (3.6)

with

\epsilon \sim N(0, \sigma^2 I), \qquad -1 < \lambda < 1 \qquad (3.7)

To find a consistent estimate for β it is necessary to use spatially weighted least squares (sometimes referred to as spatial Cochrane-Orcutt) to estimate

λ (Anselin and Rey 2014).
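A sketch of how the SAER specification can be fit in R: errorsarlm() lives in the spatialreg package (it was historically exposed through spdep) and estimates λ and β jointly by maximum likelihood, which is a common alternative to the feasible GLS (spatial Cochrane-Orcutt) route described above. The formula and object names are again hypothetical placeholders.

library(spatialreg)  # errorsarlm() for spatial error models

# Spatial autoregressive error model: y = X*beta + mu, mu = lambda*W*mu + epsilon,
# estimated with the queen-contiguity weights constructed earlier.
saer_fit <- errorsarlm(turnout ~ yearly_spending + equip_age,
                       data = county_sf, listw = lw, zero.policy = TRUE)
summary(saer_fit)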

3.5 Data and Variables

The focus of this analysis is the examination of the impact of the cost of voting equipment as well as the impact of purchasing vendor services on turnout and the Election Day experience. When localities purchase vendor services, it is for the most part optional. These services are purchased with the intent to improve efficiency and accuracy in conducting elections. There is, however, no academic research on whether these services actually produce higher quality elections. As such, it was necessary to construct a unique data set to measure the cost of voting equipment using data from state and county-level contracts for the acquisition of voting equipment.

Open records requests for county contracts for the acquisition of voting systems, sent to state and local election officials, served as the basis of this new data set. An open records request, also known colloquially as a "Freedom of Information Act request" or "FOIA" request, is the process by which a citizen may ask to obtain a copy of, or inspect, documents that are considered to be public information but are not made publicly available.9 Municipal contracts are considered public information (the bidding process, however, is not). County and municipal level contracts for the acquisition of voting

9Although both terms are identical in terms of the type of request, the Freedom of Information Act is a federal law. Governmental transparency laws are referred to by different names depending on the state. For example, in Texas the Texas Public Information Act governs open records requests made to the state and local governments.

equipment will provide data on:

1. Cost per voting equipment unit

2. Geographic variability in cost

3. Number of units currently in use

4. Services subcontracted to vendors

Requests were accomplished by mailing individual letters to the custodian of election records in each locality within the states sampled. Each request also contained language specifying that "I am requesting an opportunity to inspect or obtain copies of public records of all county contracts (or invoices) for the acquisition of a voting system since November 2000. I am most interested in the itemized lists of costs and services".10 Data collected from the contracts were merged with turnout data from the chief election officer of each state, the Verified Voting Foundation, Inc., the U.S. Election Assistance Commission, and demographic data from the United States Census Bureau. This is possible through the inclusion of geographic place codes like the Federal Information Processing Standards (FIPS) code.

The contact information was obtained from localities within California, Connecticut, Delaware, Illinois, Kentucky, Nebraska, New Mexico, Nevada,

10Requests also contained language specifying that this research is "purely academic" and "in the public interest". By making these claims, it is possible that the jurisdiction will grant a waiver of fees.

Rhode Island, Texas, Utah, and Vermont. These states were sampled at random. Requests were then made to all localities responsible for purchasing election equipment within each state.

3.5.1 Independent Variables

Yearly Spending on voting equipment per registered voter. Voting equipment is considered any piece of hardware used to facilitate the counting and casting of ballots.11 The total investment in voting equipment is the dollar amount spent on the hardware of the most current system of voting equipment in use in 2016, purchased by counties from voting equipment vendors.

To compare two methods of measuring spending on voting systems, I have included a chart in Figure 3.2 of both the total spending per registered voter on voting equipment and yearly spending per registered voter on voting equipment. The variable total spending per registered voter is the amount spent on the system that was in place in 2016. Yearly spending is that number divided by the number of years that system has been in place.

The difference between the two variables is evident in the case of Delaware. In terms of total spending per registered voter, Delaware spends quite a bit. On the other hand, Delaware's yearly spending is closer to the average. This

11Software was not included in the calculation of investment in voting equipment due to the disparities in licensing agreements across vendors. For instance, some vendors offer perpetual licenses, while others may only provide annual licenses. Additionally, the FOIA requests made to counties for voting equipment contracts did not make an explicit request to obtain copies of contracts for software licenses. For many counties, this additional request would have complicated the data gathering task substantially.

69 Figure 3.2: Average Cost of Voting Systems Per Registered Voter in 2016

is because the system was 12 years old in 2016. Over 12 years, Delaware has had the opportunity to invest an average amount over a relatively long period of time in maintaining its system. In contrast, Rhode Island implemented a new system in 2016. Due to the recent purchase, Rhode Island's total spending and yearly spending per registered voter are comparable to Delaware's. As the timing of spending is particularly relevant to total spending per registered voter on voting equipment, yearly spending per registered voter on voting equipment is a more appropriate measure of county-level investment.
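To make the construction of the two spending measures concrete, the following is a small sketch with entirely hypothetical numbers and column names (the actual contract data and age coding are described in the surrounding text):

# Hypothetical county-level records: total spent on the system in use in 2016,
# the number of years that system has been in place (see the age coding rule
# under "Age of voting system" below), and 2016 registered voters.
contracts <- data.frame(
  county          = c("County A", "County B"),
  total_spending  = c(600000, 150000),
  years_in_place  = c(12, 1),
  reg_voters_2016 = c(50000, 20000)
)

# Total and yearly spending per registered voter on voting equipment.
contracts$total_per_rv  <- contracts$total_spending / contracts$reg_voters_2016
contracts$yearly_per_rv <- contracts$total_per_rv / contracts$years_in_place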

Vendor Services. Depending on the vendor, counties may elect to purchase additional election services to facilitate the planning and conduct of elections. Across most vendors, services for training, Election Day support, ballot production, and project management are available for purchase.

70 Training. As part of equipment acquisition agreements, vendors offer courses for local election officials and poll workers on how to use the equipment. In general, these courses are in-person with hourly rates.

Voter Outreach. Localities may purchase vendor-created materials aimed at increasing the public's understanding of how to use voting equipment. Examples of such materials are posters with instructions, social media campaigns, and pocket guides, among others.

Project Management. When localities are unable to independently run elections in their jurisdiction, vendors offer services to help plan and conduct elections on behalf of jurisdictions.

Election Support. In order to make sure that equipment-related issues are kept to a minimum and solved in a timely manner, jurisdictions may choose to purchase easily accessible mechanical and technological support directly from the vendor. Usually the time-frame of this type of support is Election Day.

Mode of counting and casting of ballots. There are five possible manners in which ballots may be cast and counted in the United States:

1. Direct-recording electronic voting machine (DRE)

2. Paper-based system using Optical Scanners

3. Both DRE and paper-based systems made available to voters (Mixed)

4. Hand-counted paper ballots

71 5. Vote-by-Mail

The model includes a sole dummy variable labeled "Paper-based" to indicate counties using a paper-based system with optical scanners and ballot marking devices. The choice to include only one indicator in the model is due to identification issues with state-level variables in voting equipment.12

Age of voting system. The age of a voting system is determined from the date on the first county contract for the acquisition of voting equipment of the model currently in use in 2016. For coding purposes, the start of a voting system’s lifespan would commence following the first June (the start of the annual election cycle) the county was in possession of the equipment. For instance, if a county purchased equipment in September 2006, then the equipment was coded as nine years old in 2016. It should be noted that many counties made subsequent minor purchases to supplement and replace devices to meet state and federal guidelines.
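A small sketch of the coding rule just described, assuming (as a simplification) that a purchase made in June or earlier counts as being in possession by that year's June; the function name is hypothetical.

# Age of a voting system in a given election year: the lifespan starts with the
# first June (the start of the annual election cycle) after the county takes
# possession of the equipment.
system_age <- function(purchase_year, purchase_month, election_year = 2016) {
  first_june <- ifelse(purchase_month <= 6, purchase_year, purchase_year + 1)
  election_year - first_june
}

system_age(2006, 9)  # purchased September 2006 -> coded as 9 years old in 2016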

Average number of machines per registered voter. The total number of voting machines is the sum of either all DRE machines or all ballot marking devices, depending on the county's chosen mode of ADA compliance.13 The total number of voting machines was provided by the U.S. Election Assistance Commission's 2016 Election Administration and Voting Survey (EAVS). The

12The vendor Hart does not offer a paper-based system with ADA compliant ballot marking devices. 13HAVA requires all counties to have ADA compliant voting equipment available to voters with disabilities.

number of machines is divided by the number of registered voters as a manner of standardization across counties.

Average number of machines per precinct. The total number of machines is divided by the total number of precincts in each county. The precinct is the building block of all electoral districts: in every precinct, all voters receive the same ballot, or, put differently, all voters in a precinct vote for all the same offices. This number is an approximation of the number of machines the average voter will see in the polling place.

Demographic variables. In addition to variables measuring elections, demographic controls are included in the model. County-level demographic values were obtained from the 2016 American Community Survey 5-year estimates data compiled by the United States Census Bureau.

State Fixed-Effects. Regardless of whether the state or county is responsible for the acquisition of voting equipment, there are unobserved state-level variables that determine expenditures (e.g., regional vendor market, tax revenue structure, state laws and guidelines).

3.6 Results

3.6.1 Descriptive Data Analysis

Table 3.4 presents the state averages of county-level variables related to the acquisition and investment in voting equipment. The statistics which are unique to this study are the average equipment investment per registered voter

and the average price per voting equipment unit. Both statistics illustrate the high cost of acquiring and maintaining voting equipment in the United States.

What is most notable about these results is the high price per unit of voting equipment. This statistic is calculated by dividing the total investment in voting equipment by the number of units used in the 2016 election, as reported in the 2016 EAVS.

Table 3.4: State Averages of Voting Equipment Variables

State     Age of Equip.   Machines per   Machines per   Equip. Investment   Price of
                          Reg. Voter     Precinct       per Reg. Voter      1 Unit
CA        9.063           0.001          0.732          $15.33              $17,383.24
DE        10.000          0.002          3.138          $ 4.68              $ 2,254.72
IL        11.176          0.001          1.031          $ 4.35              $ 5,364.19
NE        11.000          0.002          0.797          $17.65              $ 9,498.45
NM        2.000           0.002          0.953          $15.30              $ 4,995.99
NV        12.000          0.007          3.184          $17.16              $ 2,523.95
RI        10.000          0.001          1.000          $ 4.86              $ 3,984.08
TX        10.037          0.005          2.912          $19.79              $ 5,378.43
UT        10.000          0.006          2.748          $26.26              $ 4,526.49
VT        14.286          0.004          2.143          $ 3.01              $ 6,355.73
Overall   9.956           0.004          2.143          $17.84              $ 6,781.65

3.6.2 Spatial Regression Analysis

Table 3.5 presents the results of a spatially autocorrelated error model of county turnout in 2016. These results support the central hypothesis that higher levels of investment in voting equipment are associated with higher levels of turnout. The statistical significance of Yearly Spending Per Registered Voter indicates that a true relationship between spending and turnout is probable, all else being equal. Compared to counties with similar characteristics, those that spent relatively more on election systems had, on average, higher turnout. Specifically, the results of this regression suggest that a $1 increase in yearly spending per registered voter is associated with a 0.347 percentage point increase in turnout.

Second, in addition to spending on equipment, an important part of election spending is vendor services. These are usually purchased at the discretion of the locality. Among the possible election services offered by vendors, the coefficient for Training is positive and statistically significant. This signals that counties which have purchased training programs from election services vendors have, on average, higher turnout. The choice to purchase training programs is correlated with a roughly 2 percentage point increase in turnout. Counties that purchased Project Management, Election Day Support, and Voter Education do not exhibit statistically higher turnout rates than counties that did not purchase those services.

Compared to counties using electronic systems and mixed systems, counties with paper voting systems saw a 1.383 percentage point higher rate of turnout. It could be that more voters can be accommodated at one time with paper systems, which would lower the costs of voting in terms of time spent voting. Voting booths are fairly inexpensive, so there can be many in one location. I consider this finding noteworthy, but not a primary finding, because this analysis was not designed with the primary objective of answering questions related to voting system type.

Table 3.5: Spatially Autocorrelated Error Regression Model for County-Level Turnout in 2016

Variables                      Coef.         SE
(Intercept)                     52.334***    (3.889)
Election Administration:
  Reg. Voters/Precinct          -0.0003*     (0.0001)
  Paper-based System             1.383*      (0.556)
  Yr Spend. Equip./RV            0.347**     (0.113)
  System Age                     0.553***    (0.148)
Vendor Services:
  Training                       2.082*      (1.012)
  Project Management            -1.021       (0.822)
  Election Day Support          -0.869       (0.544)
  Voter Education               -0.295       (0.700)
Metro Classification:
  Large Fringe Metro            -3.222*      (1.595)
  Medium Metro                  -3.550*      (1.737)
  Small Metro                   -2.247       (1.769)
  Micropolitan                  -3.469*      (1.687)
  Noncore                       -2.329       (1.727)
Demographics:
  % College                      0.551**     (0.184)
  Median Age                     0.068       (0.085)
  % White                        0.122**     (0.039)
  % Hispanic                    -0.034       (0.034)
  Per Capita Income              0.0002      (0.0001)
State Fixed Effects:
  Connecticut                   -5.897*      (2.801)
  Delaware                     -10.911***    (1.858)
  Illinois                     -13.092***    (1.602)
  Kentucky                     -19.389***    (1.583)
  Nebraska                     -51.112***    (1.761)
  Nevada                       -38.808***    (3.636)
  New Mexico                    -3.096       (2.067)
  Rhode Island                 -11.755***    (2.187)
  Texas                        -15.898***    (1.008)
  Utah                           2.648       (2.215)
  Vermont                      -18.236***    (2.724)
Lambda: 0.177, LR test statistic: 6.703**
*p<0.05; **p<0.01; ***p<0.001

Surprisingly, older voting systems appear to be correlated with higher turnout. This could be because both poll workers and voters are more familiar with the system and better able to troubleshoot when difficulties arise. As long as localities properly maintain their systems, it is not unreasonable to assume that familiarity could work toward lowering the costs of voting. I would also like to note that older systems (i.e., original HAVA purchases) are not "worse" than newer systems simply by virtue of being old. Because there is little innovation in machine design and functionality throughout the voting equipment industry, due to lengthy certification processes at the state and national levels, the major concern with older systems is security, not necessarily functionality. If counties consistently put resources into maintaining their equipment, problems like machine breakdowns can be properly taken care of.

3.7 Conclusion and Discussion

Using county-level turnout data, the results presented in Table 3.5 suggest a positive relationship between spending on voting equipment and turnout. Substantively, these results are non-trivial. Counties that invest more resources into elections appear to have higher levels of turnout than those that invest less, all else being equal. It should be noted that these results are only suggestive of a causal relationship; they cannot confirm one. Analogous to the age-old question of the chicken or the egg, this analysis cannot detect whether increasing the resources invested in elections is the definitive causal mechanism explaining increases in turnout.

Although the age of voting equipment also seems to influence turnout in a positive direction, these results should not be used to cast aside the worries of those who believe that voting equipment in the United States is woefully outdated. At the end of the day, voting machines are like any other piece of computerized technology. Unlike punch-card and lever voting systems, which are mechanical in nature, electronic voting technology (DRE and optical scanning systems) cannot last for decades upon decades. Despite the flaws of mechanical systems, they were long-lasting and did not require proprietary knowledge for their maintenance. If the future of voting technologies lies in computerized systems, it is prudent for localities to prepare to update these systems at regular and short intervals.

The empirical evidence in support of county purchases of training should be reassuring for local election officials. These results suggest that when poll workers and elections officials are able to effectively solve problems and set up the equipment properly, polling places will run more smoothly and efficiently. In areas with heavy traffic, the ability to keep the maximum number of machines in operation can have an effect on line length. This finding suggests that counties might be able to increase turnout by a non-trivial amount by purchasing programs aimed at educating poll workers and election officials in the mechanics of properly casting a ballot with their specific system.

Methodologically, the results of this study suggest that modeling spatial data appropriately has substantive implications for regression analysis. By including specifications for spatial autocorrelation in the error term, the fit of the model was improved. Considering the recent deluge of publicly accessible big data produced by governmental entities, it is imperative for researchers to understand how to recognize and model spatial data. Political scientists studying local government, in particular, should be cognizant of the difficulties in dealing with spatial data. As there are roughly 3,000 counties in the United States, there exists the possibility of a complex scheme of spatial dependencies that must be taken into account in any county-level analysis.
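As an illustration of the kind of specification discussed here, the sketch below fits a spatial error model to county-level data with PySAL's spreg module. The file name, column names, and queen-contiguity weights are assumptions for the example only, not the study's actual data pipeline.

```python
import geopandas as gpd
from libpysal import weights
from spreg import ML_Error

# Hypothetical county shapefile with turnout and spending columns
counties = gpd.read_file("counties_2016.shp")

# Queen-contiguity spatial weights, row-standardized
w = weights.Queen.from_dataframe(counties)
w.transform = "r"

y = counties[["turnout"]].values
X = counties[["spend_per_rv", "system_age", "pct_college", "pct_white"]].values

# Maximum-likelihood spatial error model: the disturbance follows
# u = lambda * W u + e, so spatial dependence is absorbed by the error term
model = ML_Error(y, X, w=w,
                 name_y="turnout",
                 name_x=["spend_per_rv", "system_age", "pct_college", "pct_white"])
print(model.summary)
```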

Chapter 4

How Election Services Vendors Influence the Voting Experience

4.1 Introduction

Voting equipment in the United States is a $300 million industry dominated by three private equity-backed companies (Melin and Pickert 2018). Roughly 87% of registered voters will vote on systems provided by Election Systems & Software (ES&S), Dominion Voting Systems, or Hart InterCivic (see Figure A1). ES&S alone services more than half of all registered voters. To many, including the United States Justice Department, it is troubling that fewer than a handful of companies compete for the opportunity to work with localities in conducting elections.1

Given some well-publicized recent election catastrophes and investigative journalism, some election vendors have become household names. The most infamous is Diebold Election Systems, Inc. ("Diebold"), due to its then-chairman of the board and chief executive's personal commitment to "helping

1Following the acquisition of Premier Voting Solutions (“Diebold”) by ES&S in 2009, the United States Justice Department and nine state attorneys general filed an anti-trust complaint in the District Court for the District of Columbia against ES&S. Premier Voting Solutions was ultimately acquired by Dominion Voting Systems in 2010.

Ohio deliver its electoral votes to the President" (Fitrakis and Wasserman 2004). This comment, along with a history of fundraising for the Republican Party, fed public distrust of DRE machines. In an attempt to improve the company's image in the eyes of voters, Diebold Election Systems even changed its name and removed its logos from its machines following the 2004 election and California's decertification of its touchscreen voting system.

Further damaging the credibility of Diebold was the report of computer scientists Avi Rubin and Dan Wallach on the leaked proprietary code of the Diebold AccuVote-TS (touch screen voting machine). Ultimately, Rubin and Wallach deemed the code "unacceptable" for an American voting system (Wofford 2016). This scandal highlighted the need for people to assume that voting machines are susceptible to the same security concerns as more familiar technology (Deutsch 2005).

4.1.1 Election Services Vendors

One of the least understood aspects of American elections is the role of election services vendors. Election services vendors are responsible for providing voting equipment as well as vital election services to states and localities. Despite the importance of election services vendors in the conduct of American elections, there is little scholarly research about their impact on the quality of the American electoral process. Unique to this study is the investigation of the impact of four broad categories of vendor service packages (election support, project management, training, and

voter outreach programs) on individual-level election performance indicators of the voting experience and confidence in elections.

Central to this study is the question of whether election services vendors (and the services they provide) help states and localities conduct high quality elections. Regression analyses of categorical indicators of election performance and voter confidence, using cross-sectional survey data from the 2016 Survey of the Performance of American Elections merged with county-level contract data for the acquisition of voting systems, present little evidence supporting the notion that services from election services vendors generate more positive election experiences for in-person voters. However, these results do not present evidence in favor of the alternative conclusion that vendor services are detrimental. Depending on the availability of resources, the choice to purchase services from the private sector may be a convenient alternative to relying completely on the public sector.

Election services vendors fall into two broad categories: election equip- ment manufacturers and value added resellers. The majority of localities in the United States purchase equipment and services from the former, while some value added resellers have regional popularity.

When a jurisdiction purchases voting equipment, it is actually purchasing the hardware and software along with a variety of services for the initial implementation and long-term service and support of the system. In other words, not only is voting equipment purchased, but so are services provided by the vendor to maintain the equipment. Unlike in other industries, local election officials are not usually able to mix and match products from different manufacturers, as the unit of certification is the voting system as a whole.2

Moreover, voting equipment in the United States uses proprietary software. Proprietary software systems are systems that do not allow users to share or change the software (Lerner and Tirole 2002). One side effect of adopting a proprietary system is that localities become locked into annual licensing fees in addition to the start-up costs (Caulkins et al. 2013; see also Lerner and Tirole 2002).

Empirical evidence in support of the importance of election services vendors in the conduct of elections at the county level can be found in Texas. The Texas Annual Voting Systems Report (AVSR), provided by the Secretary of State of Texas, reports localities' expenditures on election services vendors. The AVSR is a state-mandated administrative survey conducted by the Texas Secretary of State to collect information on the cost of elections and the quantity of voting equipment across counties.3 What makes the AVSR survey particularly useful is its data on sub-categories of election expenditures (license and vendor service fees, non-vendor expenditures, and overall expenditures).

In the fourth column of Table 4.1, software and licensing fees on average make up over a third of the total expenditures on election administration.

2 In the rare instances of localities using products from different vendors, they are for vote-by-phone systems for ADA-accessible voters and systems using independent tabulation systems for absentee and in-person voting.
3 The Annual Voting Systems Report is mandated under Chapter 123, Subchapter C of the Texas Election Code.

Table 4.1: Average Spending on Vendor Licenses and Services by County in Texas, 2016

Urban-Rural           Average Spending on Vendor   Average Total     Average % of Total Spent on     N
Classification        Licenses and Services        Spent in 2016     Vendor Licenses and Services
Large Central Metro   $521,406.07                  $2,118,832.35     29.39%                          6
Large Fringe Metro    $57,978.89                   $438,391.26       37.57%                          28
Medium Metro          $54,907.27                   $191,653.06       36.31%                          24
Small Metro           $30,579.16                   $79,876.90        40.56%                          21
Micropolitan          $14,498.94                   $187,512.51       31.77%                          44
Non-core              $8,851.34                    $35,856.01        36.26%                          125
Overall               $34,856.66                   $181,984.19       35.81%                          248
Source: Texas Annual Voting Systems Report, 2017

Table 4.2: One-Way Analysis of Variance Results for Percent Vendor Licenses and Services in Accordance with Urban-Rural Classification

Source of Variation   SS         df    MS         F          P-value
Between Groups        0.153845   5     0.030769   1.075782   0.374504
Within Groups         6.578348   230   0.028602
Total                 6.732193   235
Source: Texas Annual Voting Systems Report, 2017

In terms of the percent of total expenditures spent on vendor licenses and services, there do not appear to be statistically significant differences between county urban-rural categories (Table 4.2). Regardless of urbanization level, roughly the same share of county expenditures for election administration in the 2016 election was spent on vendor election services and software licenses. This illustrates a shared dependence on private-sector election services vendors across localities.
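For readers who want to reproduce this kind of check, the sketch below runs a one-way ANOVA of the vendor-share variable across urban-rural groups with SciPy. The file and column names are placeholders, not the AVSR extract itself.

```python
import pandas as pd
from scipy import stats

# Hypothetical AVSR extract: one row per county with its urban-rural class
# and the share of 2016 election spending that went to vendor licenses/services
avsr = pd.read_csv("avsr_2016.csv")  # columns: county, urban_rural, vendor_share

# Split the vendor-share column into one array per urban-rural category
groups = [g["vendor_share"].values for _, g in avsr.groupby("urban_rural")]

# One-way ANOVA; an F of roughly 1.08 with p of roughly 0.37 would indicate
# no significant differences across categories, as reported in Table 4.2
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```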

What contributes to the price of licensing fees is the fact that voting systems used in federal elections must provide for error correction by voters, manual auditing, accessibility for disabled voters, alternative languages, voter privacy, ballot confidentiality, and error-rate standards. Election services ven- dors create a product that is not easily duplicated. Complicating matters further, some states require software used for voting equipment to be tested in accredited laboratories at the expense of the vendors.

4.1.2 Services

County contracts for the acquisition of voting equipment specify the terms of the agreement with election services vendors. Depending on the ven- dor, counties may elect to purchase additional election services to facilitate the planning and conduct of elections. Across most vendors, services for training, Election Day support, ballot production, and project management, are avail- able for purchase. It should be noted that the services provided by vendors are not exclusive to the private sector. Each of these can be provided in-house or by non-profit organizations such as the Election Center (National Association of Election Officials).

Election Day Support. To ensure that equipment-related issues are kept to a minimum and resolved in a timely manner, jurisdictions may choose to purchase easily accessible mechanical and technological support directly from the vendor. Election support can take the form of an on-site technician or dedicated communication channels (e.g., telephone).

Voter Outreach. Localities may purchase vendor created materials aimed at increasing the public’s understanding of how to use voting equipment. Examples of materials are posters with instructions, social media campaigns, online demonstrations, and pocket guides.

Project Management. When localities are unable to independently run elections in their jurisdiction, vendors offer services to help plan and conduct elections on their behalf. Specifically, project managers are appointed to help with ballot programming, logic and accuracy testing, training, and equipment administration.

Training. As part of equipment acquisition agreements, vendors offer courses for local election officials and poll workers on how to set-up the equip- ment, use the equipment, and troubleshoot potential problems. The goal of training programs is to educate election personnel to the extent that they do not need continuous support from the vendor. In general, these courses are in-person with hourly rates.

There is some research in political science that has examined election services vendor training programs. Using survey data of poll workers from the 2006 primary election in Cuyahoga County, Ohio, and the Third Congressional District in Utah, Hall, Monson, and Patterson (2007) find that Utah's hands-on training with the help of the vendor, plus voluntary additional sessions, better prepared poll workers than the classroom-style training offered in Ohio. Overall, positive training experiences translated into more confidence in job performance. Poll workers who felt confident in their training were also less likely to encounter problems on Election Day.

4.2 Theoretical Framework

The relationship motivating this analysis is that of election services vendors and state and local election administration agencies. This relationship has many parallels to a supplier-customer relationship. By framing the election services vendor-election administration agency relationship as a supplier-customer relationship, we can better understand how election services vendors create value for state and local election administration agencies. As previously mentioned, election services vendors provide a variety of services (Voter Outreach, Training, Election Support, and Project Management) aimed at improving state and local election officials' ability to provide high quality elections.

An important wrinkle in understanding the dynamics of this relationship is conceptualizing the election services vendor (the supplier) as having adopted a service business logic. According to Grönroos and Ravald (2011, p. 241), "a service logic means that a supplier does not provide resources for the customers' use only, but instead it provides support to its customers' business processes through value-supporting ways of assisting the customers' practices relevant to their business". In terms of value creation, firms that adopt a service business logic facilitate the value-generating process of their customers (Grönroos 2008).

A classic example of a firm that adopts a service business logic in the manufacturing industry is a call center. Call centers facilitate customers' efficient provision of information to users. Given the service business logic framework described by Grönroos and Ravald (2011) and Grönroos (2008), the value created by call centers is a function of their ability to increase the operational efficiency of their customers' business practices. In this case, the call center is also a co-creator of value because it serves the customer's business by extending the service offerings to include informational resources for users.

What makes the service business logic lens useful for understanding the relationship between election services vendors and election administration agencies is its holistic approach to understanding value creation. Grönroos (2008, p. 303) articulates the value created by service businesses as whether their customers "are or feel better off than before" following the use of the supplied services. This definition is intentionally vague because the concept of "value" is difficult to quantify. Even in financial terms, value can mean a multitude of metrics (e.g., profits, cost savings, market share). Value can also be measured using attitudinal metrics such as satisfaction, trust, or ease of use. Given the loosely defined nature of "value", this study extends its meaning to include public value.

4.2.1 High Quality Elections as a Public Value

Since election administration is a function of the public-sector, the value created by election administration agencies is in line with the Public Value Theory scholarship in public administration. Benington (2011, p.2) notes that the conceptualization of the value generated from public services as distinct from private value offers a “clearer conceptual framework” for judging out- comes in the public-sector. Moore (2014, p.468) defines public value as “in- dividually held values that focus on the welfare and just treatment of others” and “particular values that are articulated and embraced by a polity working through the (more or less satisfactory) processes of democratic deliberation to guide the use of the collectively owned assets of the democratic state”.

Bozeman (2007, p. 13) provides an alternative definition of public values as:

"A society's 'public values' are those providing normative consensus about (a) the rights, benefits, and prerogatives to which citizens should (and should not) be entitled; (b) the obligations of citizens to society, the state, and one another; and (c) the principles on which governments and policies should be based" (Bozeman 2007, p. 13).

In a meta-analysis of competing definitions of public value, Rutgers (2015) finds the term public value to be a pantheon concept for which there is “no identifiable singular set of characteristics for all public values”. Despite the lack of a consensus in the Public Value Theory literature in defining what constitutes a public value, the delivery of high quality elections is arguably a public value in a well-functioning democracy.

This study posits that election services vendors facilitate state and local election administration agencies in the creation of public value in the form of high quality elections. The delivery of high quality voting experiences and fostering feelings of trust in the electoral system are ways in which election administrators create public value. In other words, a pleasant and enjoyable voting experience translates to value in the mind of the voter.

4.3 Data and Method

4.3.1 Dependent Variables

When it comes to measuring public value, attention is generally focused on performance, service, quality, real outcomes, and trust (Rhodes 2007). Given that the study of election administration is still quite underdeveloped, the scholarship has not yet yielded a single metric or set of metrics for evaluating success in election administration. Instead, this study examines four commonly used metrics (wait time in line, polling place performance, poll-worker performance, and voter confidence) for assessing the performance of election administration in the United States.

The distributions and coding schemes of these variables are presented in Table 4.3. Due to quasi-complete separation of the data in sparsely populated categories, the levels of the dependent variables expressing the most negative ratings were collapsed.
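A minimal sketch of this recoding step, assuming the SPAE items have already been loaded into a pandas data frame; the column name and level labels are illustrative only.

```python
import pandas as pd

# Hypothetical SPAE extract with the raw four-level polling place item
spae = pd.DataFrame({"polling_place": ["Very well", "Okay", "Not well", "Terrible", "Very well"]})

# Collapse the sparsely populated negative levels into a single category so the
# ordered logit does not suffer from quasi-complete separation
collapse_map = {"Very well": 1, "Okay": 2, "Not well": 3, "Terrible": 3}
spae["polling_place_3cat"] = spae["polling_place"].map(collapse_map)
print(spae["polling_place_3cat"].value_counts())
```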

4.3.1.1 Polling Place and Poll Worker Performance

Integral to the Election Day experience are poll workers. Interactions between poll workers and voters at the polling place have an effect on confidence in elections that is independent of overall trust in government. Hall, Monson, and Patterson (2009) argue that poll workers should be thought of as street-level bureaucrats due to the amount of discretion they have in the polling place. In their examination of voter-poll worker interactions, they find that higher evaluations of poll workers are a statistically significant predictor of higher overall confidence in the electoral process.

Table 4.3: Frequency Distributions of Dependent Variables: Polling Place Performance, Poll Worker Performance, Line Wait Time, and Confidence in Vote Counted as Cast.

Variable: Polling Place Performance
Item text: How well were things run at the polling place where you voted?
  1. Very well - I did not see any problems at the polling place. (84.33%, N = 1345)
  2. Okay - I saw some minor problems, but nothing that interfered with people voting. (13.92%, N = 222)
  3. Not well - I saw some minor problems that affected the ability of a few people to vote. (1.32%, N = 21)
  4. Terrible - I saw some major problems that affected the ability of many people to vote. (0.44%, N = 7)

Variable: Poll-Worker Performance
Item text: Please rate the job performance of the poll workers at the polling place where you voted.
  1. Excellent (73.56%, N = 1177)
  2. Good (24.06%, N = 385)
  3. Fair (2.19%, N = 35)
  4. Poor (0.20%, N = 3)

Variable: Wait Time in Line
Item text: Approximately, how long did you have to wait in line to vote?
  1. Not at all (44.60%, N = 741)
  2. Less than 10 minutes (36.04%, N = 577)
  3. 10-30 minutes (13.37%, N = 214)
  4. 31 minutes-1 hour (4.62%, N = 74)
  5. More than an hour (1.37%, N = 22)

Variable: Confidence in Vote Counted as Cast
Item text: How confident are you that your vote in the General Election was counted as you intended?
  1. Very confident (74.22%, N = 1163)
  2. Somewhat confident (21.76%, N = 341)
  3. Not too confident (2.74%, N = 43)
  4. Not at all confident (1.28%, N = 20)

Note: Italicized levels were collapsed into the adjacent level.
Source: Survey of the Performance of American Elections, 2016

In a survey of quantitative election performance metrics, Alvarez, Atke- son, and Hall (2013) notes the importance of gauging voter perceptions of poll-workers because these items tend to capture an overall summary of voter experiences. This chapter will examine a direct assessment of poll-worker per- formance as well as another metric for judging the voting experience, Polling Place Performance. Polling Place Performance is more accurately described as a more global indicator of “polling place operations” (Stein and Vonnahme 2014).4 Distinct from poll-worker performance ratings, polling place perfor- mance can be a function of other aspects of the voting experience such as lines, polling place location, and time of day.

4.3.1.2 Lines

As lines are a given for a substantial number of voters year after year (Stewart 2013b), wait time in line is included as part of evaluations of the voting experience. It is generally thought that "long lines at polling stations are a visible indication that something is wrong" (Spencer and Markovits 2010, p. 3). Levitt (2013, p. 470) calls long lines at polling places a "national embarrassment...We should expect that a baseline attribute of responsible

4Stein and Vonnahme (2014) note that Polling Place Performance is not just a function of line length.

government is the capacity to accommodate its own public's desire to participate in its foundational constituent moment". Unfortunately, our understanding of what causes long lines is mostly anecdotal (Stewart 2013b).

To rectify this gap in our understanding, election surveys such as the Survey of the Performance of American Elections (SPAE) and the Cooperative Congressional Election Study (CCES) have included survey items gauging polling place line length. As voters cannot be expected to know their exact wait time or remember the exact number of minutes they waited in line when they are polled, the most common measurement strategy is to ask voters to report the interval within which they estimate their wait time. The levels of the variable used in this study, Wait Time in Line, are described in Table 4.3.

4.3.1.3 Confidence in Vote Counted as Cast

The final dependent variable examined in this study is voter confidence, more specifically “confidence in your vote counted as cast”. Atkeson and Saunders (2007) advocate for the use of a measure along the lines of “whether a voter believes her vote will actually be counted as intended” for measuring confidence in the electoral process, generally referred to as “voter confidence” (Alvarez, Atkeson, and Hall 2013).

A number of scholars have stressed the importance of making the conceptual distinction between confidence in the electoral process and overall trust in government (Atkeson and Saunders 2007). Confidence in the electoral

process focuses on the procedures of democracy (Atkeson and Saunders 2007). Holbert, LaMarre, and Landreville (2009, p. 156) define electoral procedural fairness as "the degree to which you feel your own vote was counted accurately in an election". Although there is a broad consensus among scholars that trust in government does not have a direct impact on electoral participation (Citrin 1974; Rosenstone and Hansen 1993; Hetherington 1998; Hetherington 1999; see also Alvarez, Hall, and Llewellyn 2008), there does appear to be a connection between confidence in elections and electoral participation. Alvarez, Hall, and Llewellyn (2008) find a positive relationship between turnout and confidence in votes being counted. Those who have more faith in the electoral process to accurately tabulate ballots nationwide are also more likely to report that they will vote in the next election.

4.3.2 2016 Survey of the Performance of American Elections

Individual-level survey data on perceptions of the voting experience comes from the 2016 Survey of the Performance of American Elections (SPAE) (Stewart 2017b). This survey collects data from a national sample composed of 200 respondents in each state. The survey was subset to include only respon- dents who reported to have cast a ballot in-person. In-person voting includes both voters who cast ballots on Election Day and during early voting periods.

Variables from the SPAE included in the regression models are several individual-level demographic and political variables which have been shown to correlate with perceptions of the voting experience such as income (Herrnson et

al. 2008; Atkeson and Saunders 2007), partisan identification (Sinclair, Smith, and Tucker 2018; Atkeson and Saunders 2007), age (Herrnson et al. 2008), and gender (Herrnson et al. 2008).

In addition to individual-level covariates, variables at the county level were included. With some exceptions, the unit of analysis for election admin- istration in the United States is the county.5 As a result, much of the variation which interests political scientists takes place at the county-level. More specif- ically, variation in election practices, voting equipment, and vendor services takes place at the county-level. It is therefore prudent to include a battery of election administrative variables at the county-level alongside individual-level covariates.

4.3.3 County Contracts for the Acquisition of a Voting System

Both county contracts and data from the AVSR were obtained through open records requests made to state and local public information officers. An open records request, also known colloquially as a "Freedom of Information Act" request or a "FOIA" request, is the process by which a citizen may ask to obtain a copy of or inspect documents that are considered to be public information but are not made publicly available.6 Municipal contracts are considered

5 States that do not use the county as the unit of election administration, such as Vermont, Connecticut, Massachusetts, and Wisconsin, conduct elections at the sub-county level.
6 Although both terms refer to the same type of request, the Freedom of Information Act is a federal law. Governmental transparency laws are referred to by different names depending on the state. For example, in Texas the Texas Public Information Act governs open records requests made to the state and local governments.

public information (the bidding process, however, is not). County- and municipal-level contracts for the acquisition of voting equipment provide data on services subcontracted to vendors.

County-level election expense data collected from the contracts were then merged with survey data from the 2016 SPAE. This is possible through the inclusion of geographic place identifiers such as the Federal Information Processing Standards (FIPS) county code.
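A minimal sketch of this merge step in Python, assuming the SPAE respondents and the hand-coded contract data have been exported to CSV files with the column names shown; the file and column names are placeholders rather than the study's actual files.

```python
import pandas as pd

# Hypothetical extracts: SPAE respondents and hand-coded county contract data
spae = pd.read_csv("spae_2016.csv", dtype={"county_fips": str})
contracts = pd.read_csv("county_contracts.csv", dtype={"county_fips": str})

# FIPS county codes are five digits; zero-pad so "1001" and "01001" match
spae["county_fips"] = spae["county_fips"].str.zfill(5)
contracts["county_fips"] = contracts["county_fips"].str.zfill(5)

# Attach county-level administration variables to each respondent
merged = spae.merge(contracts, on="county_fips", how="left", validate="many_to_one")
print(merged.shape)
```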

Vendor services are coded as "1" for the purchase of a particular vendor service at any point under the current contract. It is important to note that the coding for vendor services relies on the information provided to the researcher via open records requests for "the opportunity to obtain copies of public records of all county contracts for the acquisition of a voting system since November 2000" as well as a specific request for information regarding "any vendor services (if applicable)". Despite the instructions to county and state public information officers requesting contract information on vendor services, the data were not always provided due to the destruction of old documents, non-itemized receipts, flat fees, and the administrative burden of including every possible document relating to the request. All county public information officers provided as much information as could reasonably be expected to give the researcher a global picture of their voting system. Given this limitation, vendor services were coded as "purchased" if they were purchased at least once during the most current contract period.

4.3.4 Model Estimation

It is generally assumed that the true performance rating observed by voters, $y_i^*$, is unobserved. What is observed, however, is a crude categorical rating of performance on a three-point scale, $y_i$. My observations are related to the latent variable as follows:

\[
y_i = \begin{cases}
1 & \text{if } 0 \le y_i^* < \alpha_1 \\
2 & \text{if } \alpha_1 \le y_i^* < \alpha_2 \\
3 & \text{if } \alpha_2 \le y_i^* < \infty
\end{cases} \tag{4.1}
\]

The most natural starting point for modeling a discrete ordered variable is the proportional odds regression model (Agresti 2012, 2013, 2018; Long 1997; O'Connell and Liu 2011; Powers and Xie 2000). The proportional odds model is used to estimate the cumulative probability of being at or below a particular level of a response variable, $P[Y \le y_m]/P[Y > y_m]$. The proportional odds model is nonlinear in probability, but linear in log odds (Fullerton 2009). The proportional odds model expresses the log odds of a given outcome as:

\[
\operatorname{logit}[P(y_i \le m)] = \log\!\left[\frac{P(y_i \le y_m \mid x)}{1 - P(y_i \le y_m \mid x)}\right] = \alpha_m - X_i'\beta, \qquad m = 1, \ldots, M-1 \tag{4.2}
\]

where M is the number of levels of the dependent variable, X is a matrix of independent variables, α is a cut point, β is a vector of logit coefficients, and i indexes observations. In this model, β, the vector of logit coefficients, does not vary across equations. The cut point, α, is the only parameter that varies across logit equations. The log odds can be transformed into a probability:

\[
P(y_i \le m) = \frac{\exp(\alpha_m - X_i'\beta)}{1 + \exp(\alpha_m - X_i'\beta)} \tag{4.3}
\]

This model, however, is not appropriate when the "proportional odds" assumption (sometimes referred to as the "parallel slopes" or "parallel lines" assumption) is violated. Under this assumption, the effect of an independent variable is the same regardless of where the cutpoint is made. Although the proportional odds model is an elegant and straightforward way to model associations between ordinal variables, the proportional odds assumption is rarely met. This study evaluates violations of the proportional odds assumption using likelihood ratio tests comparing the log-likelihood of the proportional odds model to the log-likelihood of a model in which the covariate in question has non-proportional odds (Peterson and Harrell 1990).
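As a rough illustration of this estimation and testing strategy (not the exact routine used in the dissertation), the sketch below fits a proportional odds model with statsmodels and approximates the likelihood ratio test by comparing it against an unconstrained multinomial logit. The data file and variable names are placeholders for the merged SPAE/contract data.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical merged SPAE/contract extract with a 3-category wait-time item
df = pd.read_csv("spae_merged.csv")
covariates = ["training", "project_mgmt", "paper_based", "age", "female"]

# Proportional odds (cumulative logit) model of Wait Time in Line
po_fit = OrderedModel(df["wait_time"], df[covariates], distr="logit").fit(method="bfgs", disp=False)

# Unconstrained benchmark: a multinomial logit lets every slope vary by outcome level
mnl_fit = sm.MNLogit(df["wait_time"], sm.add_constant(df[covariates])).fit(disp=False)

# Likelihood ratio test of the proportionality constraint
lr = 2 * (mnl_fit.llf - po_fit.llf)
df_diff = mnl_fit.params.size - po_fit.params.size
p_value = stats.chi2.sf(lr, df_diff)
print(f"LR = {lr:.2f}, df = {df_diff}, p = {p_value:.3f}")
```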

An option for dealing with violations of the proportional odds assumption is the use of a more flexible model, the partial proportional odds model. This model is more flexible in that it allows for non-proportional odds for a subset of variables (Peterson and Harrell 1990).7 The partial proportional

7 The non-proportional odds model described in this study is the unconstrained version described by Peterson and Harrell (1990, p. 209).

odds model is defined as:

\[
\operatorname{logit}[P(y \le m)] = \alpha_m - X_i'\beta - T_j'\gamma_m, \qquad m = 1, \ldots, M-1; \; j = 1, \ldots, N \tag{4.4}
\]

In contrast to the proportional odds model defined in Equation 4.2, the partial proportional odds model in Equation 4.4 has an additional term, $T_j'\gamma_m$. This term represents the vector of coefficients, $\gamma_m$, for the variables not required to have proportional odds at cut point m, and the corresponding observations, $T_j$.

4.4 Results

Table 4.4 reports the results of a proportional odds and partial-proportional odds model of Wait Time in Line. Higher values of the dependent variable are representative of longer wait times. As such, positive regression coefficients indicate an increase in odds of a longer wait time.8 The results of likelihood ratio tests indicate a violation of the proportional odds assumption for the explanatory variables Paper-Based System and Early Voting. As a result, it is appropriate to use the partial proportional odds model which relaxes the proportional odds assumption for the variables Paper-Based System and Early Voting.

The partial proportional odds model presented in the second column of Table 4.4 presents evidence that voters in localities that made contracts

8See Equations 4.2 and 4.4.

Table 4.4: Proportional Odds and Partial Proportional Odds Regression Models of Wait Time in Line

Variables                         Proportional Odds       Partial Proportional Odds
Value Added Reseller              -0.803 (0.286)**        -0.833 (0.287)**
Vendor Services:
  Training                        -0.336 (0.263)          -0.338 (0.262)
  Project Management               0.380 (0.171)*          0.370 (0.169)*
  Election Day Support             0.257 (0.171)           0.273 (0.172)
  Voter Outreach                   0.015 (0.137)           0.010 (0.137)
Election Administration:
  Paper-Based System              -0.450 (0.163)**
  Peak Voting Period               0.178 (0.105)           0.178 (0.105)
  100 Reg. Voters Per Precinct     0.038 (0.071)           0.035 (0.071)
  Early Voting                     0.136 (0.120)
Demographics:
  Democrat                         0.172 (0.109)           0.174 (0.109)
  Race: White                      0.174 (0.137)           0.178 (0.136)
  Age                             -0.011 (0.003)***        -0.011 (0.003)***
  Female                          -0.363 (0.106)***        -0.363 (0.106)***
  Family Income                   -0.011 (0.014)           -0.011 (0.014)
1|2. (Intercept)                  -0.951 (0.306)**         -0.920 (0.306)**
2|3. (Intercept)                   0.813 (0.306)**          0.772 (0.311)*
1|2. Paper-Based System                                     0.338 (0.169)*
2|3. Paper-Based System                                     0.635 (0.186)***
1|2. Early Voting                                          -0.046 (0.129)
2|3. Early Voting                                          -0.284 (0.154)
AIC                               2831.420                 2826.912
BIC                               2915.190                 2921.153
Log Likelihood                    -1399.710                -1395.456
Num. obs.                         1388                     1388
***p < 0.001, **p < 0.01, *p < 0.05

with value added resellers of election equipment had, on average, lower odds of waiting in line.9 The odds of waiting in a longer line for voters in a locality purchasing equipment and services from value added resellers decrease by 57%. In contrast, the odds of waiting in a longer line increase by 45% when localities purchase project management services from the private sector. Paper-based voting systems also appear to decrease the odds of waiting in line. Respondents in counties with paper-based systems reported a statistically significantly lower likelihood of waiting in a longer line.
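To make the transformation mentioned in footnote 9 concrete, the short sketch below converts the log-odds coefficients reported in Table 4.4 into the percentage changes in odds cited above.

```python
import math

# Coefficients from the partial proportional odds model in Table 4.4 (log odds)
coefficients = {"Value Added Reseller": -0.833, "Project Management": 0.370}

# Exponentiating a log-odds coefficient gives an odds ratio;
# (odds ratio - 1) * 100 is the percentage change in the odds
for name, beta in coefficients.items():
    odds_ratio = math.exp(beta)
    print(f"{name}: odds ratio = {odds_ratio:.3f}, change = {(odds_ratio - 1) * 100:+.0f}%")
# Value Added Reseller: odds ratio = 0.435, change = -57%
# Project Management:   odds ratio = 1.448, change = +45%
```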

Table 4.5 presents the results of proportional odds regression models for in-person voter evaluations of polling place and poll-worker performance. Nei- ther model exhibited likelihood ratio statistics indicating violation of the pro- portional odds assumption for any of the explanatory variables. Both polling place and poll-worker evaluations are used as proxies for evaluations of the voting experience. A positive coefficient estimate in both models signifies an increase in the odds of a negative evaluation.

The results of both models in Table 4.5 do not present statistically sig- nificant evidence supporting the notion that localities that purchase election services from the private sector perform relatively better on average than local- ities that did not. As conventional wisdom suggests, the odds of more negative polling place and poll-worker performance ratings were increased when voters

9It is important to note that when interpreting models that use the logistic link func- tion, the coefficient estimates are in the form of log odds. A simple transformation of the coefficient estimates (in log odds) via the exponential function produces predicted odds.

Table 4.5: Proportional Odds Regression Model of Polling Place and Poll-Worker Performance

Variables                         Polling Place Performance   Poll-Worker Performance
Wait Time in Line                  0.895 (0.105)***            0.389 (0.083)***
Value Added Reseller              -0.184 (0.450)              -0.154 (0.344)
Vendor Services:
  Training                         0.051 (0.435)               0.448 (0.341)
  Project Management               0.210 (0.248)               0.220 (0.203)
  Election Day Support             0.210 (0.267)              -0.089 (0.208)
  Voter Outreach                   0.171 (0.200)               0.015 (0.163)
Election Administration:
  Paper-Based System              -0.062 (0.243)              -0.251 (0.194)
  Peak Voting Period               0.104 (0.157)               0.168 (0.127)
  100 Reg. Voters Per Precinct    -0.063 (0.111)               0.050 (0.084)
  Early Voting                    -0.675 (0.198)***           -0.118 (0.147)
Demographics:
  Democrat                        -0.026 (0.166)              -0.137 (0.134)
  Race: White                     -0.143 (0.197)              -0.392 (0.158)*
  Age                             -0.022 (0.005)***           -0.019 (0.004)***
  Female                          -0.029 (0.160)              -0.103 (0.128)
  Family Income                   -0.020 (0.022)              -0.016 (0.017)
1|2. (Intercept)                   2.137 (0.528)***            0.696 (0.419)
2|3. (Intercept)                   4.589 (0.561)***            3.407 (0.448)***
AIC                                1275.660                    1792.201
BIC                                1364.592                    1881.158
Log Likelihood                     -620.830                    -879.101
Num. obs.                          1382                        1384
***p < 0.001, **p < 0.01, *p < 0.05

waited for longer periods of time in line. Voters who cast their ballot during early voting periods were 51% more likely to recognize high quality polling place performances. Older voters were also more likely to give positive ratings to their polling places and poll workers.

Table 4.6 presents the results of proportional odds and partial proportional odds regression models of Vote Counted as Cast. Higher levels of Vote Counted as Cast indicate lower levels of confidence. The result of a likelihood ratio test for the explanatory variable Female suggests a violation of the proportional odds assumption. As expected, more negative evaluations of polling places and poll-workers increase the odds of lower confidence in vote counted as cast, by 73% and 209%, respectively. In contrast, individual-level variables appear to increase the odds of a higher sense of confidence. Being a Democrat, White, one year older, and one level of income richer increases the odds of being more confident by about 57%.

The direction of the coefficient estimate for Wait Time in Line is unexpected. According to the partial proportional odds model, a longer waiting period actually decreases the odds of reporting less confidence that one's vote was counted as cast. Given the literature on determinants of turnout, when voters are faced with obstacles to voting, such as a line, the likelihood of abstaining from casting a ballot will disproportionately impact those with low levels of political efficacy.10 In the case of this analysis, it is likely that the voters with lower

10Riker and Ordeshook (1968, p.28) include an additional term to the Calculus of Voting to encompass the “satisfaction of affirming one’s efficacy in the political system”.

Table 4.6: Proportional Odds Regression Model of Confidence in Vote Counted as Cast

Variables                         Proportional Odds       Partial Proportional Odds
Polling Place Performance          0.548 (0.159)***        0.546 (0.159)***
Poll-Worker Performance            1.126 (0.134)***        1.129 (0.134)***
Wait Time in Line                 -0.191 (0.093)*         -0.192 (0.093)*
Value Added Reseller               0.087 (0.358)           0.086 (0.358)
Vendor Services:
  Training                        -0.375 (0.345)          -0.378 (0.346)
  Project Management              -0.171 (0.220)          -0.168 (0.220)
  Election Day Support             0.309 (0.235)           0.308 (0.235)
  Voter Outreach                  -0.086 (0.175)          -0.078 (0.175)
Election Administration:
  Paper-Based System               0.049 (0.211)           0.053 (0.211)
  Peak Voting Period               0.026 (0.137)           0.029 (0.137)
  100 Reg. Voters Per Precinct    -0.005 (0.088)          -0.010 (0.088)
  Early Voting                     0.107 (0.156)           0.107 (0.156)
Demographics:
  Democrat                        -0.410 (0.146)**        -0.418 (0.146)**
  Race: White                     -0.317 (0.168)          -0.330 (0.168)*
  Age                             -0.017 (0.004)***       -0.017 (0.004)***
  Female                          -0.047 (0.137)
  Family Income                   -0.068 (0.020)***       -0.068 (0.020)***
1|2. (Intercept)                   1.024 (0.470)*          0.989 (0.471)*
2|3. (Intercept)                   3.426 (0.493)***        3.814 (0.530)***
1|2. Female                                                0.104 (0.139)
2|3. Female                                               -0.605 (0.312)
AIC                                1657.601                1653.771
BIC                                1756.592                1757.973
Log Likelihood                     -809.800                -806.886
Num. obs.                          1353                    1353
***p < 0.001, **p < 0.01, *p < 0.05

levels of political efficacy are disproportionately self-selecting out of the sample of in-person voters when faced with lines (see Table A2).11

4.5 Who Purchases Vendor Services?

To paint a more detailed picture of the use of election vendor services in the United States, it is important to include a secondary analysis of county- level services purchases. As previously noted, the decision to purchase vendor services is generally at the discretion of election administrators. In most lo- calities, election administrators have the freedom to pick and choose which services best suit the needs of their district.

Table 4.7 reports the results of four logistic regression models for the use of training, Election Day support, project management, and voter educa- tion services across a selection of counties.12 To complement individual level characteristics in the previous analyses, county-level demographic and socio- economic characteristics (per capita income, percent White, percent Black, median age, percent college educated, and total population) from the Ameri- can Community Survey 5-Year estimates were included.

11The cross-tabulation presented in Table A2 demonstrates that there are differences in confidence in the county vote count between non-voters who confessed that they did not vote (in part) because lines were too long. Although confidence in county vote count is not as proximate to voters as confidence in their own vote being counted as cast (Sances and Stewart 2015; Gronke 2014), non-voters were not asked a variation of the latter in the SPAE. 12Data used for this analysis comes from county contracts for the acquisition of a voting system in California, Connecticut, Delaware, Illinois, Kentucky, Nebraska, Nevada, New Mexico, Ohio, Rhode Island, Texas, Utah, Vermont, and Virginia. A description of the logistic regression model is provided in the Appendix.

Table 4.7: Binary Logistic Regression Models of Vendor Service Purchase by Counties

Variables                       Training             Election Day Support   Project Management    Voter Education
(Intercept)                     -6.365 (1.748)***    -0.735 (0.955)         -2.056 (1.027)*        2.984 (1.146)**
100 Reg. Voters Per Precinct     0.275 (0.246)       -0.178 (0.137)          0.243 (0.166)        -0.001 (0.102)
Urban-Rural                      0.022 (0.112)       -0.180 (0.080)*         0.076 (0.083)        -0.184 (0.091)*
Per Capita Income                0.000 (0.000)        0.000 (0.000)         -0.000 (0.000)        -0.000 (0.000)
% Hispanic                       0.050 (0.009)***     0.028 (0.005)***       0.024 (0.006)***     -0.009 (0.006)
% Black                          0.015 (0.022)        0.063 (0.019)***       0.011 (0.019)        -0.001 (0.021)
% College                        0.123 (0.071)        0.005 (0.041)          0.082 (0.045)         0.092 (0.052)
Median Age                       0.093 (0.035)**     -0.017 (0.019)          0.028 (0.021)        -0.093 (0.025)***
Statewide Contract               4.902 (1.047)***     0.973 (0.224)***       1.266 (0.243)***      0.077 (0.269)
AIC                              375.121              714.428                662.648               519.771
BIC                              414.040              753.347                701.567               558.691
Log Likelihood                   -178.561             -348.214               -322.324              -250.886
Deviance                         357.121              696.428                644.648               501.771
Num. obs.                        558                  558                    558                   558
***p < 0.001, **p < 0.01, *p < 0.05

To address concerns that the choice to purchase services is purely a function of financial resources, Per Capita Income was included in the models. In none of the models does the coefficient estimate for Per Capita Income reach statistical significance. Rather, population density is a better predictor of which services are purchased. More rural counties have higher odds of purchasing Election Day support and Voter Education programs.

In terms of demographic characteristics, counties with higher proportions of Hispanic residents have higher odds of purchasing training, election support, and project management services. Counties with older populations are more likely to purchase training programs, but are less likely to purchase voter education programs. Although the reasons why localities purchase these services are beyond the scope of this analysis, it is likely that counties with a relatively higher proportion of younger and less experienced voters will have different needs than those with relatively more elderly voters.13

Finally, the variable Statewide Contract is an indicator for localities using vendors contracted statewide rather than through bidding processes at the county level. It is not surprising that the purchase of vendor services is much more likely in states with centralized voting systems. It is reasonable to assume that states have more interest in looking for "one size fits all" solutions. Moreover, it is likely that states are less constrained financially

13Herrnson et al. (2008) find that people with little computer experience, little voting experience, lower income, older, and female were the most likely to report that they felt the need for help in using voting equipment in the polling place.

than most localities when it comes to financing elections.

4.6 Conclusion and Discussion

Taking all the evidence presented in this analysis into account, the choice of localities to purchase services from election services vendors is not a panacea for election performance issues in the United States. In fact, the statistical evidence presented suggests that voters in counties relying on vendors for project management have higher odds of waiting in a longer line. At first glance, this finding suggests that project managers from election services vendors are less adept at keeping lines at bay than in-house project managers. Although this study does not examine the plights of local election offices directly, it is likely that counties that are attracted to project management services are also those less able to employ staff with the expertise necessary for running elections.

Following the implementation of the Help America Vote Act of 2002 (HAVA), the election administration system has become increasingly complex, leaving the burden on local election administrators to navigate and implement changes (Montjoy 2008). In a survey of local election officials, Kimball et al. (2013, p. 567) report that election officials interpret their policy environment as administratively burdensome due to "an ongoing set of unfamiliar requirements that have made their life more difficult". Currently, election administrators feel increased pressure to find "quick fixes" and act as problem-solvers. Many administrators find it difficult to keep up with the financial and

labor costs associated with the requirements set by HAVA. Recent research shows that there are many non-trivial additional costs (new ballot forms, additional hours worked, rental space, etc.) associated with upgrading voting equipment.

By contracting out services to election services vendors, some localities are able to benefit from specialized expertise without hiring staff devoted solely to elections. Having election services vendors take on some election responsibilities can free up local election officials for other duties as well as avoid complications related to hiring new staff (e.g., salary, insurance, union membership, benefits). Considering the lack of statistical evidence suggesting a noticeable degradation in performance in the eyes of voters, purchasing these services from election services vendors may be a satisfactory alternative to employing a full-service election staff.

Although there do not appear to be any significant gains from contracting out services to private sector election services vendors, there is public value being created. When localities contract out services, it is unlikely that they are also providing those services in-house. As such, vendors are facilitating the conduct of elections by providing localities with an efficient means of benefiting from specialized expertise at a level that is indistinguishable from localities relying on public sector services.

Chapter 5

How Previous Election Experiences Influence Individuals’ Decisions to Participate in Future Elections

5.1 Introduction

As previously mentioned in Chapter 1, some might characterize the United States as facing a crisis in terms of election integrity in light of a series of election catastrophes starting in 2000. Indeed, the lack of confidence in our electoral institutions continues to be a pervasive problem for American democracy. But why do voters feel this way? Is it just the rhetoric of political elites? Or are voters experiencing problems that have heretofore been underappreciated? Given the research to date, there is still not enough information to understand the dynamics of this issue. Instead, scholars have focused on questions relating to voting behavior and the development of political attitudes. This leaves the effect of election procedures on Election Day experiences and confidence in elections not well understood. In this chapter, I propose a theory for understanding the dynamic nature of confidence in elections and its impact on individual-level turnout, and I support that theory with statistical evidence.

5.2 Measuring Confidence in Elections

In the broadest sense, political trust is “associated with questions of identification with, or estrangement from, political institutions, symbols and values” (Miller 1974, p.989-990). Hill (1981, p.257) argues that “researchers’ dependence on global measures of political trust has led to the overestimation of the extent to which public confidence has eroded”. Hill (1981) stresses the importance of using attitude indicators with a clear object orientation. Voter confidence, in particular, focuses on the procedures of democracy. Sances and Stewart (2015) and Atkeson and Saunders (2007) narrow the measure of interest to election administration scholars of confidence in elections to the counting of votes, specifically. Both prefer the use of a measure along the lines of “whether a voter believes her vote will actually be counted as intended”.

Another important consideration is proximity. Sances and Stewart (2015) find evidence that the degree of confidence in election administration can be conditional on the proximity to the voter. When phrasing confidence in vote count in terms of nationwide vote count (“around the country”) compared to personal vote counted as cast (“your own vote”), the proportion of “very confident” respondents dropped substantially. They attribute this difference to the fact that voters have more information about the election practices at the local level which makes those individuals better able to accept an unfavorable election outcome.

5.2.1 Winner’s Effect

The most dominant theory for understanding change in confidence in electoral institutions is the “Winner’s Effect”. Studies on political trust have found evidence that individuals’ attitudes of political trust are influenced by the partisanship of those who are in power (Holberg 1999, Norris 1999). Aptly referred to as the “Home Team hypothesis”, generalized political trust is higher among those whose preferred party controls political institutions. In the context of political trust in electoral institutions, this tendency is known in the literature as the “Winner’s Effect”. In a cross-national analysis of 25 democracies, Norris (1999) finds that whether an individual perceived himself as a winner or loser in the political system was a strong indicator of institutional confidence.

Interested in determining the types of individuals most prone to the “Winner’s Effect”, Sinclair, Smith, and Tucker (2018) leverage then-candidate Donald Trump’s rhetoric and claims about election rigging prior to the 2016 election to investigate the magnitude of elite messaging effects on individuals’ confidence in the vote count in the 2016 election using a five-wave panel study.1 They find strong evidence in support of the existence of a “Winner’s Effect” for Trump voters as well as a commensurate “Loser’s Effect” for Clinton voters. Prior to the election, elite cues were particularly effective among partisans. An individual’s level of political sophistication and tendency for conspiracism

1An example of this type of rhetoric was cited in Chapter 1.

mitigated the “Winner’s Effect” with positive and negative overall effects on confidence in the vote count, respectively.

Perhaps most relevant to this analysis, Sinclair, Smith, and Tucker (2018) find that the “Winner’s Effect” (and a symmetric “Loser’s Effect”) persisted in the months following the election. In fact, Sinclair, Smith, and Tucker (2018) found that the gap in confidence increased, though not by a statistically significant amount, in the three months following the 2016 election. Although this finding is not dispositive, it does lend evidence in support of the notion that an election can have an enduring effect on individual attitudes of confidence and political trust.

5.2.2 Election Experiences

Using survey data of registered voters from the 2006 midterm election in two congressional districts in Colorado and New Mexico, Atkeson and Saunders (2007) find that voter confidence following an election is impacted by the quality of the voter’s previous election experiences. In particular, evaluations of poll-workers and the enjoyment of the experience were predictive of voter confidence. Alternative voting methods also had an effect on confidence: early voters and absentee voters were less confident. Atkeson and Saunders (2007) suggest that this is because there is a greater disconnection between the voter and the final count on Election Day. Atkeson and Saunders argue that LEOs should have greater visibility because 37% of respondents could not identify their LEO. Given their findings, if voters were more familiar with their local election officials (in a positive, competent, and nonpartisan way), they would feel more confident in their election system.

5.2.2.1 Voting Machines

Field experiments by Jong, Hoof, and Gosselt (2008) and Herrnson et al. (2008) have tested for the impact of voter interactions with voting equipment on confidence in the electoral process. Herrnson et al. (2008) conducted a large-scale usability field study of a representative sample of voting equipment consisting of six different voting devices from six different vendors. They find that even across devices of the same type, there were differences in usability and confidence ratings. Touchscreen DRE machines received high voter confidence ratings, while optical scan devices were not as highly rated. Herrnson et al. (2008) argue that the explanation for this pattern is that error correction with paper ballots is not simple. Voters must either obtain a new ballot or manually erase. Obtaining a new ballot may be stressful if the voter is prone to errors, and erasing may leave marks. Additionally, paper ballots do not have a review function that allows voters to make absolutely sure they completed the ballot as they intended. Similarly, in a field experiment in the Netherlands, Jong, Hoof, and Gosselt (2008) found that voters felt more privacy and anonymity with paper ballots, but reported more confidence that their vote was counted and greater ease of use with voting machines.

One widely used method of improving confidence in casting ballots with electronic voting systems is the voter verified paper audit trail (VVPAT).

There is mixed evidence in the literature on the effectiveness of machines fitted to produce voter verified paper audit trails in increasing confidence in vote counted as cast. Atkeson and Saunders (2007) find that the quality of a voter’s election experience impacts their level of confidence in elections. Those who voted with machines fitted to produce voter verified paper audit trails had relatively higher confidence in their vote being counted as cast. Despite the findings of Atkeson and Saunders (2007), Herrnson et al. (2008) contend that implementing voter verified paper audit trails is not a panacea for increasing voter confidence. Other usability features (i.e., correcting mistakes) may have more of an effect on voter confidence and on overall satisfaction. Likewise, Jong, Hoof, and Gosselt (2008) find no significant differences in subjects’ perceptions of their voting experiences between voting machines with and without voter verified paper audit trails.

5.2.2.2 The Voting Experience

In their examination of voter-poll worker interaction, Hall, Monson, and Patterson (2009) find that higher evaluations of poll workers are a statistically significant predictor of higher overall confidence in the electoral process. Additionally, using data from exit polls at approximately 50 polling locations in two counties in Ohio in 2006, Claassen et al. (2008) find that more voter-poll worker interaction corresponded to more positive evaluations. Older voters and voters who know a poll worker are more positive in their poll worker evaluations. It is not surprising that they found negative experiences like having an ID rejected to be predictors of a higher likelihood of a negative evaluation. Similarly, wait times, confusion with instructions, and confusion with how to use the equipment were also negative predictors of poll worker evaluation. Claassen et al. (2008) argue that voter-poll worker interactions are much like commercial service encounters. Much like a salesperson at a retail store, poll workers play an important part in determining the quality of the voting experience. Most importantly, when voters have a positive experience with poll workers, they tend to feel more confident that their vote was counted.

5.3 The Endurance of the Election Experience in the Minds of Voters

One important yet often excluded element of the calculus of voting is the incorporation of the duration of effects. Election administration and the election experience are not limited to Election Day. Election administration includes both the pre- and the post-election process (Erameh 2018). As previously noted, there is scant research examining the ongoing relationship between confidence in elections and voting experiences. This study borrows from the calculus of voting framework (described in Chapter 2) to develop and test a theory of how these variables influence one another over time. The hypothesized global relationship of the variables central to these hypotheses is depicted in Figure 5.1.

Figure 5.1: Hypothesized Ongoing Relationship Between Confidence in Elections, Voting Experiences, and Individual-level Turnout

My theory for how confidence in elections, election experiences, and turnout are related is very much exploratory. Although there have been a handful of studies examining this topic, the state of our understanding is still far from comprehensive. The following empirical analysis will test four main hypotheses:

H1: Previous positive election experiences increase the likelihood of an individual’s choice to participate in future elections.

H2: Higher levels of confidence in elections increase the likelihood of an individual’s choice to participate in future elections.

H3: An individual’s level of confidence prior to an election is positively related to an individual’s level of confidence following an election.

H4: Positive election experiences increase the likelihood of higher levels of confidence in elections.

5.4 Data and Methods

Data used in this analysis came from the University of Texas module (N=1000) of the 2018 Cooperative Congressional Election Survey (CCES). The 2018 CCES is a nationally representative two-wave online survey conducted on September 28, 2018 and on November 21, 2018. Between waves, the 2018 Midterm Election occurred on November 6, 2018. To measure the effect of the election experience on feelings of confidence in elections, survey items on voter confidence were asked on both the pre-election and post-election waves of the CCES 2018 survey. The raw distributions of the variables used in this analysis as well as the item text are available in the Appendix.

Given that the vast majority of American elections are conducted at the county level, it is important to include county-level election administration variables in this analysis. A crosswalk provided by the U.S. Census Bureau was used to match respondent zip codes to a FIPS (Federal Information Processing Standards) county identifier. After matching, the 1,000 respondents in the University of Texas module of the 2018 CCES were located in 491 unique counties.

FIPS codes were then used to match CCES data to a data set on equipment type created by Verified Voting. Verified Voting maintains a complete database of county-level voting system information in the United States. Using these data, it is possible to generate indicators for the type of voting equipment in a respondent’s county voting system with which in-person voters have interacted (paper-based or DRE, as well as another indicator for counties with a mix of paper and DRE).2
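For concreteness, this matching step can be sketched in R as follows. The file names and column labels (zip, fips, equipment_type) are illustrative assumptions rather than the actual files and fields used in this project.

# Illustrative sketch: attach county FIPS codes and Verified Voting
# equipment types to CCES respondents (file and column names are assumed).
cces      <- read.csv("cces_ut_module_2018.csv")   # respondent-level data with a zip code field
crosswalk <- read.csv("census_zip_to_fips.csv")    # assumed columns: zip, fips
equipment <- read.csv("verified_voting_2018.csv")  # assumed columns: fips, equipment_type

cces <- merge(cces, crosswalk, by = "zip",  all.x = TRUE)
cces <- merge(cces, equipment, by = "fips", all.x = TRUE)

# Indicators for the equipment in the respondent's county voting system
cces$paper <- as.numeric(cces$equipment_type == "Paper")
cces$dre   <- as.numeric(cces$equipment_type == "DRE")
cces$mixed <- as.numeric(cces$equipment_type == "Mixed")

length(unique(cces$fips))  # number of unique counties represented (491 in the matched sample)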

Confidence in Elections

Similar to the methodology used in the analysis of individual-level patterns of change in confidence in the vote count by Sinclair, Smith, and Tucker (2018) and Sances and Stewart (2015), I examine the effect of election experiences on change in confidence in elections by comparing pre-election September 2018 and post-election November 2018 survey data. Parallel survey items were asked on the pre- and post-election waves of the University of Texas module of the 2018 CCES (Table 5.1).

2This is in contrast to tabulation systems used to count absentee ballots which are exclusively paper-based.

Table 5.1: Confidence in Vote Counted as Cast Survey Items

Item: Pre-Election Confidence
Question Wording: How confident are you that your vote will be counted as you intended if you vote in the November 2018 General Election?
Response Options: 1. Not at all confident; 2. Somewhat confident; 3. Very confident

Item: Post-Election Confidence (Voters)
Question Wording: How confident are you that your vote in the 2018 General Election was counted as you intended?
Response Options: 1. Not at all confident; 2. Somewhat confident; 3. Very confident

Item: Post-Election Confidence (Non voters)
Question Wording: If you were to have voted in the 2018 General Election, how confident are you that your vote would have been counted as you intended?
Response Options: 1. Not at all confident; 2. Somewhat confident; 3. Very confident

Prior Voting Experiences

The assessment of prior election experiences is measured using a battery of retrospective assessments of the prior voting experience included in the pre-election wave of the 2018 CCES. The results of a principal components analysis suggest that there are two latent variables underlying the three variables directly measuring the in-person voting experience. The variables Polling Place Performance and Poll-Worker Performance load on one principal component, while Problems with Equipment loads on the other. These results are presented in Figure 5.2.
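A minimal sketch of this principal components analysis in base R is given below; the variable names are hypothetical placeholders for the three in-person experience items rather than the actual column names in the data.

# Illustrative sketch of the principal components analysis (variable names assumed)
experience <- na.omit(cces[, c("polling_place_performance",
                               "poll_worker_performance",
                               "problems_with_equipment")])
pca <- prcomp(experience, center = TRUE, scale. = TRUE)
summary(pca)         # variance explained by each component
pca$rotation[, 1:2]  # loadings: the two performance items load together, the equipment item apart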

Figure 5.2: Diagram of Factor Loading of Election Experience Variables

Due to the high correlation between traditional measures of polling place performance and poll worker performance, the indicator variable Less than OK Voting Experience was created. This variable is coded as “1” for respondents who indicated that their assessment of the polling place was less than “okay” or that poll-workers were performing at a level less than “good”. A value of “1” for the variable Less than OK Voting Experience signifies that the voter witnessed or encountered less than satisfactory election administration at their polling place.

In addition to Less than OK Voting Experience, two other variables assessing the voting experience were included in the 2018 CCES. The post-election wave included an item asking respondents to recall the length of time they waited in line to vote in the 2018 Midterm elections as well as an indicator variable for whether they recall encountering problems with voting equipment (Problems with Equipment). The variable measuring length of wait time in line was recoded into a binary indicator for 30 minutes or more in line.3
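The recoding described above can be sketched as follows; the numeric codings of the items (1 = most positive category) and the variable names are assumptions made for illustration.

# Illustrative sketch of the indicator variables (codings and names assumed)
# Polling place rated below "Okay" (codes 3-4) or poll workers rated below "Good" (codes 3-4)
cces$less_than_ok_2018 <- as.numeric(cces$polling_place_performance > 2 |
                                     cces$poll_worker_performance   > 2)

# Recalled wait of 30 minutes or more (the top two categories of the wait-time item)
cces$line_over_30 <- as.numeric(cces$wait_time %in%
                                c("31 minutes - 1 hour", "More than 1 hour"))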

Winner’s Effect

To control for changes in confidence as a result of a “winner’s effect”, a variable was created measuring the proportion of an individual’s preferred candidates for U.S. House, U.S. Senate, and Governor who won election in 2018. The variable Winner’s Effect is a proportion rather than an indicator because of the lack of uniformity in top races across the country. The denominator of this variable is conditional on the state of the respondent. The formula for

3Stewart (2013b) presents survey evidence that a substantial number of voters are unwilling to wait 30 minutes or more in line to vote.

Winner’s Effect is presented in Equation 5.1.

\[
\text{Winner's Effect} = \frac{\text{Count of Winning Candidates in Races for U.S. Senate, U.S. House, and Governor}}{\text{Number of Races Conditional on State}} \tag{5.1}
\]

Although all eligible voters had the opportunity to vote for U.S. Representative, not all eligible voters live in states that had races for U.S. Senator or Governor in 2018. For the average American, races for U.S. Representative are also less salient than races for U.S. Senate and Governor. As such, an indicator for the victory of an individual’s preferred U.S. House candidate alone would not be an appropriate measure of an overall “winner’s effect” when other races are available for use in crafting this measure.
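A rough sketch of Equation 5.1 in R is shown below; the indicator variables for whether each of a respondent's preferred candidates won (coded NA when the state had no such race in 2018) are assumed for illustration.

# Illustrative sketch of the Winner's Effect proportion (variable names assumed)
wins <- cbind(cces$won_house, cces$won_senate, cces$won_governor)

# Numerator: preferred candidates who won; denominator: races on the respondent's ballot
cces$winner_effect <- rowSums(wins, na.rm = TRUE) / rowSums(!is.na(wins))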

Other Relevant Predictors of Turnout

In addition to the primary variables of interest to my hypotheses, other relevant predictors of turnout were included in this analysis. As previously mentioned in Chapter 2, an important predictor of turnout according to the calculus of voting is Civic Duty, or the “D” term. Civic duty is operationalized in this study as an indicator variable of an individual’s belief that “voting is a duty” rather than a “choice”. The socioeconomic status variables race and education are also included.

As a means to include non-voters from 2016 in the sample, the variable Trump Approval was used rather than an indicator of a vote for President Trump in 2016.4 Compared to 2016 vote choice, Trump Approval at the time of the

4It is necessary to have complete cases of observations for the estimation of generalized linear models in R using the package “ordinal”.

pre-election survey wave (September 2018) is also more proximate to individuals’ decision-making immediately prior to the 2018 election. In addition to Trump Approval, a seven-point scale for partisan identification was included.

5.5 Results

To test my hypotheses on future decisions to participate in elections and shifts in confidence in elections, I use logistic regression analysis.

5.5.1 Pre-Election Confidence and Recollection of the 2016 Voting Experience on 2018 Turnout

To test hypotheses H1 and H2, a binary logistic regression model of election-related variables was estimated. The results of previous election experiences (or lack thereof) on Voting in 2018 are presented in Table 5.2. When there is a nested structure within the variables, omitting the main effect of a nested variable included in an interaction is equivalent to estimating the model without the interaction. To improve the interpretability of the model, the main effects of In-Person Voting, Polling Place Performance, and Poll-Worker Performance were omitted.
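A simplified sketch of how such a model could be estimated in R is given below. The variable names and the exact right-hand side are illustrative stand-ins rather than the precise specification reported in Table 5.2; the interaction term without a main effect mirrors the nesting described above.

# Illustrative sketch of the binary logistic regression of turnout in 2018
turnout_model <- glm(voted_2018 ~ pre_confidence + voted_2016 +
                       nv_reason_id + nv_reason_inconvenience + nv_reason_lines +
                       trump_approval + pid7 + civic_duty +
                       black + hispanic + other_race + education +
                       in_person_2016 + in_person_2016:less_than_ok_2016,
                     data = cces, family = binomial(link = "logit"))
summary(turnout_model)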

Consistent with the literature on the calculus of voting cited in Chapter 2, the variable Civic Duty was also a statistically significant and positive predictor of turnout in 2018. The variable with the strongest impact on individual-level turnout is respondent participation in the 2016 general election. Regardless of other factors relating to the 2018 election, those who had

Table 5.2: Binary Logistic Regression Model of Turnout, 2018

Dependent variable: Voted in 2018

Pre-Election Confidence=Not at all        −0.979∗ (0.462)
Pre-Election Confidence=Somewhat           0.038 (0.350)
Voted 2016                                 3.932∗∗∗ (0.765)
NV Reason ID                               0.152 (0.659)
NV Reason Inconvenience                    0.749 (0.744)
NV Reason Lines                           −0.707 (0.949)
Trump Approval                             0.035 (0.179)
PID (7pt)                                 −0.178 (0.101)
Civic Duty                                 0.937∗∗ (0.303)
Black                                      0.550 (0.663)
Hispanic                                  −1.034∗ (0.457)
Other Race                                −0.045 (0.580)
Education                                  0.267∗ (0.110)
Voted in 2016/In-Person 2016              −0.880 (0.631)
Effects Nested Under Voted in 2016/In-Person 2016:
  Less Than OK Experience                 −0.277 (0.362)
Intercept                                 −0.896 (0.735)

Observations: 791
Log Likelihood: −175.476
Akaike Inf. Crit.: 382.953
∗p<0.05; ∗∗p<0.01; ∗∗∗p<0.001

previously voted had log-odds of voting in 2018 that were almost four points higher, all else equal.

The results of Table 5.2 support the hypothesis that the level of an individual’s confidence prior to an election is predictive of turnout (H2). The coefficient estimates of the factor levels for Pre-Election Confidence are used to test this hypothesis. It is important to note that the reference category for Pre-Election Confidence is individuals who indicated that they were “very confident” that their vote would be counted as cast in the upcoming 2018 Midterm election. The statistical significance of the coefficient estimate for Pre-Election Confidence=Not at all suggests that there is a non-zero drop in the probability of turning out for those who have the lowest level of confidence that their vote will be counted as cast relative to those with higher levels of confidence, all else being equal. In contrast, the lack of statistical significance for the coefficient Pre-Election Confidence=Somewhat indicates a failure to reject the null hypothesis of no difference in the likelihood of turnout between individuals who are “somewhat confident” and those who are “very confident”.

The effect of Pre-Election Confidence is particularly obvious when comparing the predicted probability of turnout in 2018 between 2016 voters and 2016 non-voters. Figure 5.3 depicts these differences in predicted probability. Given the magnitude of the coefficient estimate for having voted in 2016, the effect of Pre-Election Confidence is quite minimal for 2016 voters even though the coefficient reaches statistical significance. In contrast, the effect of Pre-Election Confidence can be clearly observed for non-voters in 2016. The

Figure 5.3: Predicted Probability of Voting in 2018 for Voters and Non Voters in 2016 by Pre-Election Confidence

probability of voting in 2018 is over 50% lower for those non-voters who have the lowest level of Pre-Election Confidence compared to those with higher levels.
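Continuing the sketch of the turnout model above, predicted probabilities of the kind plotted in Figure 5.3 could be generated as follows; the covariate values held fixed are purely illustrative assumptions.

# Illustrative sketch: predicted probability of voting in 2018 by Pre-Election
# Confidence for 2016 voters and non-voters (variable names and values assumed)
newdata <- expand.grid(
  pre_confidence = factor(c("Not at all", "Somewhat", "Very"),
                          levels = c("Very", "Somewhat", "Not at all")),
  voted_2016 = c(0, 1),
  nv_reason_id = 0, nv_reason_inconvenience = 0, nv_reason_lines = 0,
  trump_approval = 2, pid7 = 4, civic_duty = 1,
  black = 0, hispanic = 0, other_race = 0, education = 3,
  in_person_2016 = 0, less_than_ok_2016 = 0)

cbind(newdata[, c("pre_confidence", "voted_2016")],
      p_vote = predict(turnout_model, newdata = newdata, type = "response"))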

In contrast to the statistical evidence in support of H2, there is limited statistical evidence in support of H1. Although the coefficient estimate for the indicator variable Less Than OK Experience fails to reach statistical significance, the effect is in the hypothesized direction.

5.5.2 Pre-Election Confidence and Recollection of the 2018 Voting Experience on Post-Election Confidence in Vote Counted As Cast

To test the hypotheses of whether individuals’ level of confidence prior to an election and positive election experiences are positively related to individuals’ level of confidence following an election (H3 and H4), a proportional odds regression model was estimated.5
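A minimal sketch of such a proportional odds model, fit with the “ordinal” package referenced earlier in this chapter, is shown below; the variable names and the simplified nesting are assumptions for illustration rather than the exact specification behind Table 5.3.

# Illustrative sketch of the proportional odds (cumulative logit) model
library(ordinal)

# Post-election confidence as an ordered factor
cces$post_confidence <- factor(cces$post_confidence,
                               levels = c("Not at all", "Somewhat", "Very"),
                               ordered = TRUE)

post_conf_model <- clm(post_confidence ~ pre_confidence + voted_2018 +
                         nv_reason_id + nv_reason_inconvenience + nv_reason_lines +
                         pid7 + winner_effect +
                         black + hispanic + other_race + education +
                         in_person_2018 +
                         in_person_2018:(less_than_ok_2018 + line_over_30 +
                                           equipment_problem + dre + mixed),
                       data = cces, link = "logit")
summary(post_conf_model)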

The results of a series of likelihood ratio tests of the proportional odds assumption for each variable provided sufficient indication that the choice to fit the proportional odds regression model presented in Table 5.3 is appropriate.6 Overall, there is strong evidence in support of H3: confidence in vote counted as cast prior to the election has a large and statistically significant impact on confidence in vote counted as cast after the election. Relative to those who

5This model is sometimes referred to as the “ordered logistic regression model”. 6For a more detailed discussion of the proportional odds assumption, please refer to Chapter 4. The results and analysis of a Brant test as well as an alternative specification of the model using the partial proportional odds model are presented in the Appendix.

Table 5.3: Proportional Odds Regression Model of Post-Election Confidence in Vote Counted As Cast, 2018

Pre-Election Confidence=Not at all        −3.49 (0.30)∗∗∗
Pre-Election Confidence=Somewhat          −2.17 (0.22)∗∗∗
Voted in 2018                              0.07 (0.38)
NV Reason ID                              −0.04 (0.52)
NV Reason Inconvenience                   −0.77 (0.46)
NV Reason Lines                           −1.71 (0.89)
PID (7 pt)                                −0.05 (0.04)
Winner Score                               1.42 (0.24)∗∗∗
Black                                     −0.57 (0.33)
Hispanic                                  −0.05 (0.31)
Other Race                                −0.31 (0.33)
Education                                  0.02 (0.06)
Voted in 2018/In-Person 2018               1.24 (0.26)∗∗∗
Effects Nested Under Voted in 2018/In-Person 2018:
  Less Than OK Experience                 −3.97 (1.29)∗∗
  Lines > 30 mins                         −0.64 (0.43)
  Equipment Problem                       −1.45 (0.64)∗
  DRE                                     −0.51 (0.24)∗
  Mixed                                    0.21 (0.44)
Thresholds:
  Not at all|Somewhat                     −3.61 (0.49)∗∗∗
  Somewhat|Very                           −0.91 (0.47)

AIC: 970.24
BIC: 1062.56
Log Likelihood: −465.12
Num. obs.: 747
∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05

were “very confident” their votes would be counted as cast, those with lower levels of Pre-Election Confidence were substantially less likely to have a higher level of Post-Election Confidence.

The coefficient estimate for Less Than OK Experience supports the hypothesis of positive election experiences having a distinct and positive impact on Post-Election Confidence (H4). Those who were coded as “1” for the indicator variable Less Than OK Experience in 2018 (i.e., unsatisfactory polling place and poll-worker performance evaluations) had lower odds of a higher level of Post-Election Confidence than those who were coded as “0” for the indicator variable Less Than OK Experience in 2018.7 In addition, the coefficient estimate for Problems with Equipment was also negative and statistically significant. The odds of a higher level of post-election confidence in vote counted as cast among those who experienced problems with voting equipment were substantially lower than the odds for in-person voters who did not experience problems with voting equipment. Taken together, those who had unsatisfactory in-person voting experiences had lower odds of higher levels of post-election confidence than in-person voters with high-quality in-person voting experiences.

As further evidence in support of the notion that election experiences influence feelings of confidence, the coefficient estimate for DRE was negative

7The variable Less Than OK Experience presented in Table 5.3 is the measurement of 2018 election experiences. This is distinct from the variable of the same name presented in Table 5.2 measuring the recollection of 2016 election experiences.

and statistically significant. This suggests that those who used DRE voting systems had lower odds of a higher level of post-election confidence relative to those who used paper-based systems or a mixture of paper and electronic voting systems.

The findings in Table 5.3 are also consistent with the previous research on the “Winner’s effect”. The proportion of an individual’s most preferred candidates winning is positively related to attitudes of post-election confidence in vote counted as cast.

5.6 Conclusion and Discussion

In this chapter, an empirical examination of survey data from before and after the 2018 Midterm election finds support for a dynamic relationship between election experiences, confidence in vote counted as cast, and turnout. Overall, the results presented in the previous section are supportive of hypotheses H2, H3, and H4. These data suggest positive relationships between confidence in vote counted as cast prior to an election and turnout (H2), pre- and post-election confidence in vote counted as cast (H3), as well as election administration and post-election confidence in vote counted as cast (H4).

These findings consistently suggest that election administration plays a direct role in citizen confidence in elections. The lack of statistically significant associations between election administration and turnout, however, suggests that confidence in elections might work as a mediator. Although not dispositive, the results of a cursory mediation analysis presented in Table 5.4 lend

credence to this hypothesis.8

Table 5.4: Causal Mediation Analysis with Nonparametric Bootstrap Confidence Intervals

Mediator: Pre-Election Confidence=“Not at all”
Treatment: Less Than OK Experience 2016
Dependent Variable: Voted in 2018

                        Estimate    95% CI Lower   95% CI Upper   p-value
ACME (More Than OK)     -0.0110     -0.0173        -0.0005        0.024 *
ACME (Less Than OK)     -0.0126     -0.0197         0.0004        0.076 .
ACME (Average)          -0.0118     -0.0185        -0.0004        0.028 *
ADE (More Than OK)      -0.0110     -0.0401         0.0441        0.534
ADE (Less Than OK)      -0.0126     -0.0434         0.0492        0.534
Total Effect            -0.0236     -0.0515         0.0438        0.322

Note: The variable “Pre-Election Confidence” was recoded into a binary variable in which values for the level “Not at all” were coded as “1” and “Somewhat” and “Very” were coded as “0”. Estimates were generated using the R package “mediation”.

The mediation analysis in Table 5.4 presents statistical evidence in support of the notion that confidence in vote counted as cast prior to an election acts as a mediator between election experiences (“Less Than OK Experience”) and individual-level turnout.9 Using a natural experiment framework, the most appropriate variable to consider as the “treatment” is Less Than OK

8In terms of mediation analysis methodology, mediation analysis with categorical variables is considered to be “the final frontier” (Iacobucci 2012). What makes mediation with categorical variables problematic is that the residuals in logistic regression are fixed at π²/3. Given this, it is impossible to compare regression coefficients across equations in the traditional manner (MacKinnon and Cox 2012). Fortunately, there has been progress in the field. The “mediation” package in R uses an algorithm based on the non-parametric bootstrap that is capable of estimating confidence intervals for binary variables (Tingley et al. 2014). 9Estimates were generated using the R package “mediation”.

Experience 2016.10 The statistical significance of the estimate for the average causal mediation effect (ACME) is evidence in support of the existence of a mediating effect of Pre-Election Confidence on the relationship between election experiences and turnout.
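A minimal sketch of this analysis using the “mediation” package (Tingley et al. 2014) is given below. The variable names, the control variables, and the binary recoding of the mediator are assumptions for illustration rather than the exact models behind Table 5.4.

# Illustrative sketch of the causal mediation analysis with bootstrap intervals
library(mediation)

# Mediator model: lowest pre-election confidence as a function of the 2016 experience
med_fit <- glm(low_pre_confidence ~ less_than_ok_2016 + pid7 + education,
               data = cces, family = binomial)

# Outcome model: 2018 turnout as a function of the mediator, the treatment, and their interaction
out_fit <- glm(voted_2018 ~ low_pre_confidence * less_than_ok_2016 + pid7 + education,
               data = cces, family = binomial)

med_out <- mediate(med_fit, out_fit,
                   treat = "less_than_ok_2016", mediator = "low_pre_confidence",
                   boot = TRUE, sims = 1000)
summary(med_out)  # reports ACME, ADE, and the total effect with bootstrap confidence intervals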

It is also worth noting that these data (Table 5.2) show strong evidence in support of the inclusion of a citizen duty measure in the prediction of individual-level turnout. The results show that an individual’s belief that voting is a “duty” rather than a “choice” is one of the most consequential predictors of turnout in the 2018 election. This finding further supports the inclusion of the “D-term” in the calculus of voting.

Likewise, these results are complementary to those of Sinclair, Smith, and Tucker (2018) and Sances and Stewart (2015) in that I find further evidence for the importance of the “winner’s effect” in predicting an individual’s level of confidence in vote counted as cast. My findings expand on these two studies by incorporating election administration into the understanding of the relationship. In both regression analyses, the variable Less Than OK Experience 2016 proved to be a statistically significant indicator of individual-level confidence in elections.

The other statistically significant election administration predictor of confidence in elections was an indicator of in-person voting equipment, DRE. Despite the statistical evidence suggesting that DRE machines increase the

10As prior election experience occurred years prior to the 2018 Midterm election, it is possible to assume ignorability.

odds of lower confidence, a more in-depth analysis is required. As voting technology develops, future work in the same vein as Jong, Hoof, and Gosselt (2008) and Herrnson et al. (2008) would be welcomed.

Chapter 6

Conclusion

Throughout this dissertation, I have presented empirical evidence through rigorous statistical analysis supporting a broad theory of how election administration has an indirect impact on individual-level and aggregate-level turnout by way of raising or lowering the costs of voting and confidence in elections. The graphical representation of my theory (originally presented in Chapter 2) is reprinted below in Figure 6.1.

Figure 6.1: Theoretical Contribution

Throughout the three empirical chapters of this dissertation, evidence was presented for how high-quality election administration can have both short-term and long-term effects on voters’ propensity to turn out. Positive election experiences from the previous election are consequential to turnout in the following election. As suggested by the findings in Chapter 5, the quality of election experiences indirectly influences individual-level turnout through confidence in elections. Likewise, the quality of election experiences is positively correlated with an individual’s level of confidence in vote counted as cast both prior to and following an election.

In the broadest sense, I posited that by spending more on election administration, election administrators can improve turnout by way of improving the quality of the election experience. The main instance investigated by this dissertation in which there was statistically significant evidence for this relationship was in the case of spending on voting equipment. In a spatially weighted regression analysis of county-level turnout (Chapter 3), I find statistical evidence of a correlation between spending on voting equipment and county-level turnout.

6.1 Spending on Elections

Chapter 3 of this dissertation is one of the first in social science scholarship to investigate the relationship between spending and the quality of election administration. Despite these findings, which investigate aggregate-level patterns, there are compelling reasons why some localities may rationally deviate from the theory of improving election quality presented in the previous chapters.

Why might localities rationally want to spend less on election administration? One answer might be that because revenue is scarce for most localities, election

administration is not a continual focus for county officials. For county government, things like education, safety, and transportation have a higher priority in the budget-making process. Moreover, many county officials involved in crafting the budget do not believe that they were elected specifically for reasons related to updating voting equipment. Rather, these officials believe they were elected for their work with other more salient programs.

Outside of the budget, there are other practical considerations that might keep a county from updating their equipment. The most obvious is that the local official responsible for running elections might have other responsibilities. The role of “chief election official” might be one of multiple “hats” these government officials wear. In one of my conversations with local election officials during my data collection for this project, I spoke with a county clerk from rural Kentucky. He remarked that he is aware that his equipment is in need of replacement, but when money comes into his budget he is going to opt for purchasing a backhoe over a new set of voting machines. Backhoes can fill potholes, dig ditches, and be useful in many other ways. For his constituents, potholes matter more than voting machines. Until there is a major disaster with the equipment he currently has, he is going to spend any discretionary funds on a new fleet of backhoes.

6.2 Improving Measurement in Election Administration

One of the major limitations of this dissertation involves the measurement of aspects of the voting experience. The variables used throughout the

analyses in Chapters 3 through 5 are commonplace in the election administration scholarship. Unfortunately, some of the variables most vital to understanding the election experience need reexamination. The variables most in need of reevaluation are Polling Place Performance and Poll-Worker Performance. Typically, these variables are asked in the following manner:

How well were things run at the polling place where you voted?

1. Very well - I did not see any problems at the polling place

2. Okay - I saw some minor problems, but nothing that interfered with people voting

3. Not well - I saw some minor problems that affected the ability of a few people to vote

4. Terrible - I saw some major problems that affected the ability of many people to vote

5. I don’t know

Please rate the job performance of the poll workers at the polling place where you voted.

1. Excellent

2. Good

3. Fair

4. Poor

5. I don’t know

Although the response options appear to capture the gamut of the latent continuous variable, the distributions of the discrete response options are highly skewed. Figure 6.2 depicts the distributions of Polling Place Performance and Poll-Worker Performance using data from the University of Texas Module of the 2018 Cooperative Congressional Election Study from Chapter 5.

Perhaps more concerning, both of these variables are highly correlated with one another. As both variables are likely to be used in the same analysis, this is problematic for regression analysis. As a result, I propose two potential solutions. First, these variables could include more response options, with a particular intent to create more variation on the positive end of the spectrum. Second, the measures could shift from an ordinal factor variable to a battery of binary variables indicating whether the respondent witnessed or experienced positive or negative behavior. This solution has the potential to lend itself well to latent class analysis (Clogg, Sobel, and Arminger 1995; Lazarsfeld and Henry 1968), as sketched below.
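As a hypothetical illustration of this proposal, a small battery of binary experience indicators could be analyzed with the poLCA package; the items named below do not exist in the current data and are assumed purely for the sake of the sketch (poLCA expects indicators coded as positive integers).

# Hypothetical sketch of the proposed binary battery analyzed with latent class analysis
library(poLCA)

battery <- data.frame(
  long_line         = cces$line_over_30      + 1,  # recode 0/1 to 1/2 for poLCA
  equipment_problem = cces$equipment_problem + 1,
  rude_poll_worker  = cces$rude_poll_worker  + 1,  # assumed new item
  confusing_ballot  = cces$confusing_ballot  + 1)  # assumed new item

lca_fit <- poLCA(cbind(long_line, equipment_problem,
                       rude_poll_worker, confusing_ballot) ~ 1,
                 data = battery, nclass = 2, na.rm = TRUE)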

In addition to the dilemmas of measurement, I propose the development of variables geared towards measuring the difficulty of casting a ballot. Throughout the political science scholarship on election administration, there is little systematic analysis of procedural difficulty outside of the in-depth discussion of voter identification reform. One exception is the measurement of Wait Time in Line. Unfortunately, this measure (as it is commonly asked)

Figure 6.2: Stacked Barplot of Polling Place Performance and Poll-Worker Performance, 2016-2018

is limited to measuring the line that voters actually waited in, not the wait time that potential voters were willing or able to endure in order to cast a ballot.1 As long lines have been equated to a “national embarrassment” for American democracy, it is prudent to continue to improve our measurement and understanding of how obstacles like lines impact American elections (Levitt 2013, p.470).

In contrast to the limitations characterizing the measurement of the election experience, this dissertation introduces new and useful variables to the election administration scholarship. Through the collection of county-level contract data on spending on the acquisition of voting systems through open records requests, the variables Spending on Voting Equipment and Spending on Vendor Services, as well as indicators for the purchase of specific vendor services themselves, have made their debut in the empirical election administration scholarship.

The statistical evidence in support of the central research hypothesis of Chapter 3 points to an important, yet understudied aspect of how elections in the United States are conducted. I find that spending on voting equipment has a positive and statistically significant effect on turnout. This line of research has the potential to illuminate an aspect of election administration that was previously overlooked due to the unavailability of data, in particular how specific election practices and procedures impact participation among marginalized populations.

1The only example I was able to identify of scholars using a measure of wait time in line is Stewart (2013b).

As a byproduct of collecting county contract data on spending on voting equipment, data on vendor services were collected. These data have their own merit in election administration research. As previously mentioned, the literature on election services vendors in American election administration is woefully underdeveloped in political science. The bulk of the academic research on election services vendors comes from the discipline of computer science. Computer scientists, however, are interested in fundamentally different questions than social scientists. I believe that this dissertation is the first step of many towards filling a hole in social science scholarship.

Appendix

Figure A1: Election Equipment Vendor Market Share, 2016

Figure A2: Barplots of 2018 University of Texas Module CCES Variables Used in Analysis, Polling Place and Poll-Worker Performance

Figure A3: Barplots of 2018 University of Texas Module CCES Variables Used in Analysis, Voting and Mode of Voting

Figure A4: Barplots of 2018 University of Texas Module CCES Variables Used in Analysis, Voting Equipment and Problems

Table A1: Comparison of Dependent Variable Distributions between the Subset in This Analysis and the Original Data Set

Variable                        Levels                      SPAE      Subset
Polling Place Performance       1. Very Well                84.73%    84.33%
                                2. Okay                     13.28%    13.92%
                                3. Not Well                  1.49%     1.32%
                                4. Terrible                  0.50%     0.44%
Poll-Worker Performance         1. Excellent                72.77%    73.56%
                                2. Good                     23.53%    24.06%
                                3. Fair                      3.23%     2.19%
                                4. Poor                      0.46%     0.19%
Wait Time in Line               1. Not at all               41.17%    44.50%
                                2. Less than 10 minutes     33.75%    36.04%
                                3. 10-30 minutes            16.67%    13.37%
                                4. 31 minutes - 1 hour       6.25%     4.62%
                                5. More than 1 hour          2.17%     1.37%
Confidence in Vote Counted      1. Very confident           71.74%    74.22%
as Cast                         2. Somewhat confident       23.06%    21.76%
                                3. Not too confident         3.29%     2.74%
                                4. Not at all confident      1.90%     1.28%

Table A2: Comparison of Confidence in County Vote Counted as Cast between In-Person and Non-voters.

Confidence              In-Person    Non-Voters    Non-Voters: Line Too Long
Very confident          65.71%       36.24%        42.42%
Somewhat confident      28.34%       37.58%        36.36%
Not too confident        4.35%       21.48%        18.18%
Not at all confident     1.60%        3.03%         4.70%
Total                  100%         100%          100%

Analysis of Proportional Odds Assumption in Table 5.3

A well-known method of testing the proportional odds assumption of the proportional odds regression model is to run a Brant test. The Brant test compares k-1 binary logistic fits for dichotomized dependent variables generated from an ordered variable with k levels (for a more detailed discussion see Brant 1990). The results of a Brant test assessing the proportional odds assumption of Table 5.3 in Chapter 5 are presented in Table A3.
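A minimal sketch of this test in R is given below. The brant package cited in the note to Table A3 expects a model fit with MASS::polr, so the sketch assumes a simplified refit of the Table 5.3 specification; the variable names are assumed.

# Illustrative sketch of the Brant test on a simplified proportional odds fit
library(MASS)
library(brant)

# post_confidence is an ordered factor: Not at all < Somewhat < Very
polr_fit <- polr(post_confidence ~ pre_confidence + voted_2018 + pid7 +
                   winner_effect + education,
                 data = cces, Hess = TRUE)

brant(polr_fit)  # per-coefficient chi-squared tests of the proportional odds assumption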

The violations indicated in the results presented in Table A3 suggest a violation of the proportional odds assumption. To assess the severity of the violation of the proportional odds assumption, it is necessary to inspect the model via a series of binary cumulative logistic regression models (Table A4). For the coefficient for which there was a violation (Pre-Election Confidence=Somewhat), the variation in coefficient estimates between the cumulative logistic regression models presented in Table A4 and illustrated in Figure A5 does not appear to substantively change the interpretation presented in Chapter 5.

Table A3: Results of a Brant Test for the Proportional Odds Assumption

Coefficient Tested                           χ²        df   Probability
Pre-Election Confidence=Not at all           0.023     1    0.880
Pre-Election Confidence=Somewhat             22.904    1    0.000∗∗∗
Voted in 2018                                0.086     1    0.770
NV Reason ID                                 0.063     1    0.802
NV Reason Inconvenience                      0.005     1    0.946
NV Reason Lines                              0.005     1    0.945
PID (7 pt)                                   3.498     1    0.061
Winner Score                                 1.186     1    0.276
Black                                        0.359     1    0.549
Hispanic                                     0.914     1    0.339
Other Race                                   3.293     1    0.070
Education                                    1.039     1    0.308
Voted in 2018/In-Person 2018                 0.718     1    0.397
Effects Nested Under Voted in 2018/In-Person 2018:
  Less Than OK Experience                    0.0005    1    0.983
  Lines > 30 mins                            0.095     1    0.758
  Equipment Problem                          0.067     1    0.796
  DRE                                        0.845     1    0.358
  Mixed                                      0.0002    1    0.989
Omnibus                                      525.632   18   0.000∗∗∗

∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05 Note: Estimates were generated using the R package “Brant” (see Schlegel and Steenbergen 2018).

Table A4: Cumulative Logistic Regression Models of Post-Election Confidence in Vote Counted as Cast, 2018

Variables                              1 vs. 2&3              1&2 vs. 3
Pre-Election Confidence=Not at all     −3.159∗∗∗ (0.530)      −3.238∗∗∗ (0.340)
Pre-Election Confidence=Somewhat       −1.390∗∗ (0.500)       −2.190∗∗∗ (0.222)
Voted in 2018                           0.044 (0.537)         −0.128 (0.446)
NV Reason ID                           −0.234 (0.686)         −0.040 (0.650)
NV Reason Inconvenience                −0.827 (0.603)         −0.874 (0.593)
NV Reason Lines                        −2.128∗ (1.045)        −2.228 (1.364)
PID (7 pt)                             −0.174∗ (0.080)        −0.020 (0.045)
Winner Score                            1.819∗∗∗ (0.481)       1.281∗∗∗ (0.261)
Black                                  −0.878 (0.610)         −0.497 (0.359)
Hispanic                                0.353 (0.560)         −0.226 (0.347)
Other Race                             −1.022 (0.550)         −0.041 (0.356)
Education                              −0.060 (0.121)          0.065 (0.067)
Voted in 2018/In-Person 2018            1.674∗∗ (0.540)        1.216∗∗∗ (0.273)
Effects Nested Under Voted in 2018/In-Person 2018:
  Less Than OK Experience              −4.571∗∗∗ (1.370)      −15.119 (488.116)
  Lines > 30 mins                      −0.895 (0.782)         −0.642 (0.474)
  Equipment Problem                    −1.706 (1.126)         −1.404∗ (0.688)
  DRE                                  −0.071 (0.605)         −0.626∗ (0.247)
  Mixed                                 15.142 (1,103.991)     0.156 (0.457)
Intercept                               3.632∗∗∗ (0.871)       0.921 (0.527)

AIC                                     304.468                724.644
Log Likelihood                         −133.234               −343.322
Num. obs.                               747                    747
∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05

Table A5: Partial Proportional Odds Regression Model of Post-Election Confidence in Vote Counted As Cast, 2018

                                           Estimates
Voted in 2018                               0.04 (0.38)
NV Reason ID                               −0.05 (0.53)
NV Reason Inconvenience                    −0.82 (0.47)
NV Reason Lines                            −1.75 (0.89)∗
PID (7pt)                                  −0.07 (0.05)
Winner Score                                1.60 (0.27)∗∗∗
Black                                      −0.34 (0.35)
Hispanic                                   −0.00 (0.34)
Other Race                                 −0.04 (0.37)
Education                                  −0.03 (0.07)
Voted in 2018/In-Person 2018                1.16 (0.94)
Effects Nested Under Voted in 2018/In-Person 2018:
  Less Than OK Experience                  −2.14 (0.65)∗∗∗
  Lines > 30 mins                          −0.74 (0.44)
  Equipment Problem                         0.20 (0.75)
  DRE                                      −0.57 (0.25)∗
  Mixed                                     0.23 (0.93)
1|2 (Intercept)                            −2.78 (0.60)∗∗∗
2|3 (Intercept)                            −0.63 (0.44)
1|2 Pre-Election Confidence=Not at all      3.10 (0.53)∗∗∗
2|3 Pre-Election Confidence=Not at all      3.58 (0.38)∗∗∗
1|2 Pre-Election Confidence=Somewhat        1.38 (0.52)∗∗
2|3 Pre-Election Confidence=Somewhat        2.14 (0.24)∗∗∗

AIC: 835.32
BIC: 933.30
Log Likelihood: −395.66
Num. obs.: 635
∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05

Figure A5: Post-Election Confidence in Vote Counted As Cast by Pre-Election Confidence of the Partial Proportional Odds Regression Model Specification

Items Used in Chapter 5 from the University of Texas Module of the 2018 CCES

UTA 332 SINGLE CHOICE Poll Location Performance 2016
How well were things run at the polling place where you voted?

1. Very well - I did not see any problems at the polling place
2. Okay - I saw some minor problems, but nothing that interfered with people voting
3. Not well - I saw some minor problems that affected the ability of a few people to vote
4. Terrible - I saw some major problems that affected the ability of many people to vote
5. I don't remember
6. I voted by mail
7. I did not vote

UTA 333 SINGLE CHOICE Poll Workers Job Performance 2016
Only ask if answer to UTA 332 was response items 1,2,3,4,5
Please rate the job performance of the poll workers at the polling place where you voted.

1. Excellent

2. Good
3. Fair
4. Poor
5. I don't remember

UTA 334 MULTIPLE CHOICE Factor in Non-vote 2016
Only ask if answer to UTA 332 was 7 “I did not vote”
Which of the following reasons was a factor in your decision not to vote in the 2016 November general election? (Select all that apply)

1. I did not have the right kind of identification
2. The polling place hours, or location, were inconvenient
3. The line at the polls was too long

UTA 336 SINGLE CHOICE Equipment Problems 2016
Did you encounter any problems with the voting equipment or the ballot that may have interfered with your ability to cast your vote as intended?

1. Yes
2. No
3. I don't remember
4. I did not vote

UTA 337 SINGLE CHOICE Personal Confidence 2016
Only ask if answer to UTA 332 was response items 1,2,3,4,5,6
How confident are you that your vote in the 2016 General Election was counted as you intended?

1. Very confident
2. Somewhat confident
3. Not at all confident

UTA 338 Confidence Non-Voters 2016
Only ask if answer to UTA 332 was 7 “I did not vote”
If you were to have voted in the 2016 General Election, how confident are you that your vote would have been counted as you intended?

1. Very confident
2. Somewhat confident
3. Not at all confident

UTA 339 SINGLE CHOICE National Confidence 2016
Think about vote counting throughout the country. How confident are you

that votes nationwide were counted as voters intended in the November 2016 General Election?

1. Very confident
2. Somewhat confident
3. Not at all confident

UTA 340 SINGLE CHOICE 2018 Confidence
How confident are you that your vote will be counted as you intended if you vote in the November 2018 General Election?

1. Very confident
2. Somewhat confident
3. Not at all confident

UTA 341 SINGLE CHOICE Citizen Duty
Please randomize the order of the outcome categories 1 “Choice” and 2 “Duty”.

Different people feel differently about voting. For some, voting is a CHOICE. They feel free to vote or not to vote in an election depending on how they feel about the candidates and parties.

For others, voting is a DUTY. They feel that they vote in every election however they feel about the candidates and parties.

For you personally, voting is FIRST AND FOREMOST a:

1. Choice
2. Duty
3. (FIXED) Not sure

UTA 405 SINGLE CHOICE Polling Place Performance 2018
How well were things run at the polling place where you voted?

1. Very well - I did not see any problems at the polling place
2. Okay - I saw some minor problems, but nothing that interfered with people voting
3. Not well - I saw some minor problems that affected the ability of a few people to vote
4. Terrible - I saw some major problems that affected the ability of many people to vote
5. I don't remember
6. I voted by mail
7. I did not vote

UTA 406 SINGLE CHOICE Poll Worker Performance 2018
Only ask if answer to UTA 405 was response items 1,2,3,4,5
Please rate the job performance of the poll workers at the polling place where you voted.

1. Excellent
2. Good
3. Fair
4. Poor
5. I don't remember

UTA 407 MULTIPLE CHOICE Factor in Non-vote 2018
Only ask if answer to UTA 405 was 7 “I did not vote”
Which of the following reasons was a factor in your decision not to vote in the 2016 November general election? (Select all that apply)

1. I did not have the right kind of identification
2. The polling place hours, or location, were inconvenient
3. The line at the polls was too long

UTA 409 SINGLE CHOICE Equipment Problems 2018
Did you encounter any problems with the voting equipment or the ballot that may have interfered with your ability to cast your vote as intended?

1. Yes
2. No
3. I don't remember

4. I did not vote

UTA 410 SINGLE CHOICE Personal Confidence 2018
Only ask if answer to UTA 405 was response items 1,2,3,4,5,6
How confident are you that your vote in the 2018 General Election was counted as you intended?

1. Very confident
2. Somewhat confident
3. Not at all confident

UTA 411 SINGLE CHOICE
Only ask if answer to UTA 405 was 7 “I did not vote”
If you were to have voted in the 2018 General Election, how confident are you that your vote would have been counted as you intended?

1. Very confident
2. Somewhat confident
3. Not at all confident

UTA 412 SINGLE CHOICE National Confidence 2018
Think about vote counting throughout the country. How confident are you

that votes nationwide were counted as voters intended in the November 2018 General Election?

1. Very confident
2. Somewhat confident
3. Not at all confident

Bibliography

Acevedo, Melissa, and Joachim I. Krueger. 2004. “Two Egocentric Sources of the Decision to Vote: The Voter’s Illusion and the Belief in Personal Relevance”. Political Psychology 25 (1): 115–134.
Agresti, Alan. 2018. An Introduction to Categorical Data Analysis. Wiley Series in Probability and Statistics. John Wiley & Sons.
— . 2012. Analysis of Ordinal Categorical Data. Wiley Series in Probability and Statistics. John Wiley & Sons.
— . 2013. Categorical Data Analysis. Wiley Series in Probability and Statistics. Hoboken, NJ: John Wiley & Sons.
Almond, Gabriel Abraham, and Sidney Verba. 2016. The Civic Culture, Political Attitudes and Democracy in Five Nations. Princeton, NJ: Princeton University Press.
Alund, Natalie Neysa, Andy Humbles, and Ariana Maia Sawyer. 2016. “Election 2016: Few voting issues reported across the state”. The Tennessean. Visited on 05/10/2019. https://www.tennessean.com/story/news/2016/11/08/election-day-voting-machines-issues-reported-wilson-county/93428138/?hootPostID=5a773735fc5bee45317d06e88aade5a5.
Alvarez, R. Michael, Lonna Rae Atkeson, and Thad E. Hall. 2013. Evaluating Elections: A Handbook of Methods and Standards. New York, NY: Cambridge University Press.
Alvarez, R. Michael, Thad E. Hall, and Morgan H. Llewellyn. 2008. “Are Americans Confident Their Ballots Are Counted?” The Journal of Politics 70 (3): 754–766.
Anselin, L., and Sergio Joseph Rey. 2014. Modern Spatial Econometrics in Practice: A Guide to GeoDa, GeoDaSpace and PySAL. Chicago, IL: GeoDa Press LLC.
Anselin, Luc. 1988a. “Lagrange Multiplier Test Diagnostics for Spatial Dependence and Spatial Heterogeneity”. Geographical Analysis 20 (1): 1–16.
— . 1995. “Local Indicators of Spatial Association - LISA”. Geographical Analysis 27 (2): 93–115.

— . 1988b. Spatial econometrics: methods and models. Boston, MA: Kluwer Academic Publishers.
— . 2009. “Spatial Regression”. In Fotheringham and Rogerson 2009, 255–275.
Anselin, Luc, et al. 1996. “Simple diagnostic tests for spatial dependence”. Regional Science and Urban Economics 26 (1996): 77–104.
Atkeson, Lonna Rae, and Kyle L. Saunders. 2007. “The Effect of Election Administration on Voter Confidence: A Local Matter?” PS: Political Science & Politics 40 (4): 655–660.
Barry, Brian. 1970. Sociologists, Economists and Democracy. London: Collier-Macmillan.
Bearak, Max. 2019. “In Nigeria, delayed election takes place amid polling glitches and Boko Haram attacks”. The Washington Post. Visited on 05/10/2019. https://www.washingtonpost.com/world/africa/in-nigeria-delayed-election-takes-place-amid-polling-glitches-and-boko-haram-attacks/2019/02/23/dde4ed04-3481-11e9-8375-e3dcf6b68558_story.html?noredirect=on&utm_term=.2e02d8acceeb.
Benington, John. 2011. “Public Value: Theory and Practice”. Chap. From Private Choice to Public Value, ed. by John Benington and Mark Moore. Basingstoke, NH: Palgrave Macmillan.
Berry, William D. 1993. Understanding Regression Assumptions. Vol. 07-092. Newbury Park: Sage Publications.
Bianco, William T., and David T. Canon. 2015. American Politics Today. New York, NY: W. W. Norton & Company.
Bivand, Roger, Jan Hauke, and Tomasz Kossowski. 2013. “Computing the Jacobian in Gaussian spatial autoregressive models: An illustrated comparison of available methods”. Geographical Analysis 45 (2): 150–179. http://www.jstatsoft.org/v63/i18/.
Bivand, Roger, and Gianfranco Piras. 2015. “Comparing Implementations of Estimation Methods for Spatial Econometrics”. Journal of Statistical Software 63 (18): 1–36. https://www.jstatsoft.org/v63/i18/.
Blais, Andre, and Robert Young. 1999. “Why Do People Vote? An Experiment in Rationality”. Public Choice 99 (1/2): 39–55.
Bohan, George P., and Nicholas F. Homey. 1991. “Pinpointing the real cost of quality in a service company”. National Productivity Review 10 (3): 309–317.

Bozeman, Barry. 2007. Public values and public interest: Counterbalancing economic individualism. Washington, DC: Georgetown University Press.
Brady, Henry E., and John E. McNulty. 2011. “Turning Out to Vote: The Costs of Finding and Getting to the Polling Place”. American Political Science Review 105 (1): 115–134.
Brant, Rollin. 1990. “Assessing Proportionality in the Proportional Odds Model for Ordinal Logistic Regression”. Biometrics 46 (4): 1171–1178.
Brennan, Geoffrey, and Loren E. Lomasky. 1993. Democracy and decision: the pure theory of electoral preference. Cambridge; New York: Cambridge University Press.
Brians, Craig Leonard, and Bernard Grofman. 2001. “Election Day Registration’s Effect on U.S. Voter Turnout”. Social Science Quarterly 82 (1): 170–183.
— . 1999. “When registration barriers fall, who votes?: An empirical test of a rational choice model”. Public Choice 99 (1): 161–176.
Brown, Robert D., and Justin Wedeking. 2006. “People Who Have Their Tickets But Do Not Use Them: “Motor Voter,” Registration, and Turnout Revisited”. American Politics Research 34 (4): 479–504.
Burden, Barry C. 2014. “The Measure of American Elections”. Chap. Registration and Voting: A View from the Top, ed. by Barry C. Burden and Charles Stewart III. New York, NY: Cambridge University Press.
Burden, Barry C., and Jacob R. Neiheisel. 2013. “Election Administration and the Pure Effect of Voter Registration on Turnout”. Political Research Quarterly 66 (1): 77–90.
Caulkins, Jonathan, et al. 2013. “When to make proprietary software open source”. Journal of Economic Dynamics and Control 37: 1182–1194.
Chang, Ting-Yueh, and Shun-Ching Horng. 2010. “Conceptualizing and Measuring Experience Quality: The Customer’s Perspective”. The Service Industries Journal 30 (14): 2401–2419.
Cho, Sung-Jin. 2011. “An empirical model of mainframe computer investment”. Journal of Applied Econometrics 26 (1): 122–150.
Citrin, Jack. 1974. “Comment: The Political Relevance of Trust in Government”. American Political Science Review 68 (September): 973–88.
Claassen, Ryan L., et al. 2008. “‘At Your Service’: Voter Evaluations of Poll Worker Performance”. American Politics Research 36 (4): 612–634.
Cliff, Andrew David, and J. K. Ord. 1981. Spatial processes: models & applications. London: Pion.

Clogg, Clifford C., Michael E. Sobel, and Gerhard Arminger. 1995. Handbook of statistical modeling for the social and behavioral sciences. Plenum Press.
Crampton, Eric, and Andrew Farrant. 2004. “Expressive and Instrumental Voting: The Scylla and Charybdis of Constitutional Political Economy”. Constitutional Political Economy 15 (1).
Csikszentmihalyi, Mihaly, and Judith LeFevre. 1989. “Optimal Experience in Work and Leisure”. Journal of Personality and Social Psychology 56 (5): 815–822.
Dahl, Robert A. 1998. On Democracy. New Haven, CT: Yale University Press.
Demos, John. 1965. “George Caleb Bingham: The Artist as Social Historian”. American Quarterly 17 (1): 218–228.
Deutsch, Herb. 2005. “Public opinion’s influence on voting system technology”. Computer 38 (3): 93–95.
Dieleman, Joseph L., and Tara Templin. 2014. “Random-Effects, Fixed-Effects and the within-between Specification for Clustered Data in Observational Health Studies: A Simulation Study”. Ed. by Andrew R. Dalby. PLoS ONE 9 (10).
Dowding, Keith. 2005. “Is it Rational to Vote? Five Types of Answer and a Suggestion”. The British Journal of Politics & International Relations 7 (3): 442–459.
Downs, Anthony. 1957. An Economic Theory of Democracy. New York, NY: Harper.
Eisingerich, Andreas B., and Simon J. Bell. 2008. “Perceived Service Quality and Customer Trust: Does Enhancing Customers’ Service Knowledge Matter?” Journal of Service Research 10 (3): 256–268.
Engle, Robert F. 1980. “Hypothesis Testing in Spectral Regression; the Lagrange Multiplier Test as a Regression Diagnostic”. In Kmenta and Ramsey 1980, 309–321.
Erameh, Nicholas Idris. 2018. African Journal of Governance and Development 6 (2): 39–73.
Ewald, Alec C. 2009. The Way We Vote: The Local Dimension of American Suffrage. Nashville, TN: Vanderbilt University Press.
Farooq, Muhammad Arsalan, et al. 2017. “Cost of quality: Evaluating cost-quality trade-offs for inspection strategies of manufacturing processes”. International Journal of Production Economics 188: 156–166.

165 Ferejohn, John A., and Morris P. Fiorina. 1974. “The Paradox of Not Voting: A Decision Theoretic Analysis”. The American Political Science Review 68 (2): 525–536. Fitrakis, Bob, and Harvey Wasserman. 2004. “Diebold’s Political Machine”. Mother Jones (). https://www.motherjones.com/politics/2004/ 03/diebolds-political-machine/. Fotheringham, A. Stewart, and Peter Rogerson. 2009. The SAGE Handbook of Spatial Analysis. London: SAGE Publications, Ltd. Fowler, James H., and Cindy D. Kam. 2007. “Beyond the Self: Social Identity, Altruism, and Political Participation”. Journal of Politics 69 (3): 813– 827. Freedom House. 2019. Freedom in the World 2019. Washington, D.C.: Free- dom House. https://freedomhouse.org/sites/default/files/ Feb2019_FH_FITW_2019_Report_ForWeb-compressed.pdf. Geary, R. C. 1954. “The Contiguity Ratio and Statistical Mapping”. The In- corporated Statistician 5 (3): 115–146. Goodin, R. E., and K. W. S. Roberts. 1975. “The Ethical Voter”. The Amer- ican Political Science Review 69 (3): 926–928. Griffith, Daniel A. 1988. Advanced spatial statistics: special topics in the explo- ration of quantitative spatial data series. Boston, MA: Kluwer Academic Publishers. — . 1987. Spatial autocorrelation: a primer. Washington, D.C: Association of American Geographers. Grofman, Bernard. 1993. “Information, Participation, and Choice”. Chap. Is Turnout the Paradox That Ate Rational Choice Theory, ed. by Bernard Grofman, 93–103. Ann Arbor, MI: The University of Michigan Press. Gronke, Paul. 2014. “The Measure of American Elections”. Chap. Voter Con- fidence as a Metric of Election Performance, ed. by Barry C. Burden and Charles Stewart III. New York, NY: Cambridge University Press. Gronke, Paul, Eva Galanes-Rosenbaum, and Peter A. Miller. 2007. “Early Voting and Turnout”. PS: Political Science Politics 40 (4): 639–645. Gronke, Paul, et al. 2008. “Convenience Voting”. Annual Review of Political Science 11 (1): 437–455. Gr¨onroos, Christian. 2008. “Service logic revisited: who creates value? And who coˆacreates?” European Business Review 20 (4): 298–314.

166 Gr¨onroos, Christian, and Annika Ravald. 2011. “Service as business logic: im- plications for value creation and marketing”. Journal of Service Man- agement 22 (1): 5–22. Hale, Kathleen, and Christa Daryl Slaton. 2008. “Building Capacity in Election Administration: Local Responses to Complexity and Interdependence”. Public Administration Review 68 (5): 839–849. Hall, Thad E. 2018. “The Oxford Handbook of Electoral Systems”. Chap. Elec- tion Administration, ed. by Erik S. Herron, Robert J. Pekkanen, and Matthew S. Shugart. New York, NY: Oxford University Press. Hall, Thad E., J. Quin Monson, and Kelly D. Patterson. 2007. “Poll Workers and the Vitality of Democracy: An Early Assessment”. PS: Political Science Politics 40 (4): 647–654. — . 2009. “The Human Dimension of Elections: How Poll Workers Shape Public Confidence in Elections”. Political Research Quarterly 62 (3): 507–22. Hamlin, Alan, and Colin Jennings. 2011. “Expressive Political Behaviour: Foundations, Scope and Implications”. British Journal of Political Sci- ence 41 (3): 645–670. Hannah, John A., et al. 1959. Report of The United States Commission on Civil Rights. Washington, D.C.: The United States Commission on Civil Rights. Harris, Bev. 2004. Black Box Voting Ballot Tampering in the 21st Century. Renton, WA: Talion Publishing. Harris, Joseph P. 1934. Election Administration in the United States. Menasha, WI: George Banta Publishing Company. Haspel, Moshe, and H. Gibbs Knotts. 2005. “Location, Location, Location: Precinct Placement and the Costs of Voting”. The Journal of Politics 67 (2): 560–573. Heckelman, Jac. C. 1995. “The Effect of the Secret Ballot on Voter Turnout Rates”. Public Choice 82 (1/2): 107–124. Herrnson, Paul S., et al. 2008. “Voters’ Evaluations of Electronic Voting Sys- tems: Results From a Usability Field Study”. American Politics Re- search 36 (4): 580–611. Hetherington, Marc J. 1999. “The Effect of Political Trust on the Presidential Vote, 1968-96”. The American Political Science Review 93 (2): 311– 326.

167 — . 1998. “The Political Relevance of Political Trust”. The American Po- litical Science Review 92 (4): 791–808. Highton, Benjamin. 2006. “Long Lines, Voting Machine Availability, and Turnout: The Case of Franklin County, Ohio in the 2004 Presidential Election”. PS: Political Science and Politics 39 (1): 65–8. Hijmans, Robert J. 2017. raster: Geographic Data Analysis and Modeling.R package version 2.6-7. https : / / CRAN . R - project . org / package = raster. Hill, David B. 1981. “Attitude generalization and the measurement of trust in American leadership”. Political Behavior 3 (3): 257–270. Hirschbein, Ron. 1999. Voting rites : the devolution of American Politics. West- port, CT: Praeger Publishers. Holberg, Soren. 1999. “Critical Citizens: Global Support for Democratic Gov- ernance”. Chap. Down and Down We Go: Political Trust in Sweden, ed. by Pippa Norris. New York, NY: Oxford University Press Inc. Holbert, R. Lance, Heather L. LaMarre, and Kristen D. Landreville. 2009. “Fanning the Flames of a Partisan Divide: Debate Viewing, Vote Choice, and Perceptions of Vote Count Accuracy”. Communication Research 36 (2): 155–177. Iacobucci, Dawn. 2012. “Mediation analysis and categorical variables: The final frontier”. Journal of Consumer Psychology 22 (4): 582–594. Jong, Menno de, Joris van Hoof, and Jordy Gosselt. 2008. “Voters’ Perceptions of Voting Technology: Paper Ballots Versus Voting Machine With and Without Paper Audit Trail”. Social Science Computer Review 26 (4): 399–410. Kahneman, Daniel, and Amos Tversky. 1979. “Prospect Theory: An Analysis of Decision under Risk”. Econometrica 47 (2): 263–291. Key, Valdimer Orlando. 1958. Politics, parties, & pressure groups. Fourth. New York, NY: Crowell-Collier Publishing Company. Kimball, David C., and Brady Baybeck. 2013. “Are All Jurisdictions Equal Size Disparity in Election Administration”. Election Law Journal: Rules, Politics, and Policy 12 (1). Kimball, David C., et al. 2013. “Policy Views of Partisan Election Officials”. The UC Irvine Law Review 3 (3): 551–574. Kmenta, Jan. 1971. Elements of Econometrics. New York, NY: Macmillan Publishing Co.,Inc.

168 Kmenta, Jan, and James B. Ramsey. 1980. Evaluation of Econometric Models. Academic Press. Lazarsfeld, Paul Felix, and Neil W Henry. 1968. Latent structure analysis. Houghton Mifflin Co. Lee, Dwight R., and Ryan H. Murphy. 2017. “An expressive voting model of anger, hatred, harm and shame”. Public Choice 173 (3): 307–323. Lerner, Josh, and Jean Tirole. 2002. “Some Simple Economics of Open Source”. The Journal of Industrial Economics 50 (2): 197–234. Levitt, Justin. 2013. “‘Fixing That’: Lines at the Polling Place”. The Journal of Law & Politics 28 (4): 465. Long, J. Scott. 1997. Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks,CA: SAGE Publications. “Long lines, issues reported at polling stations in Parkchester”. 2016. News 12 The Bronx. Visited on 05/10/2019. http://bronx.news12.com/ story / 34802953 / long - lines - issues - reported - at - polling - stations-in-parkchester. Luskin, Robert C. 1990. “Explaining Political Sophistication”. Political Be- havior 12 (4): 331–361. MacKenzie, William James Miller. 1967. Free Elections: An Elementary Text- book. London, UK: George Allen / Unwin Ltd. MacKinnon, David P., and Matthew G. Cox. 2012. “Commentary on “Medi- ation analysis and categorical variables: The final frontier” by Dawn Iacobucci”. Journal of Consumer Psychology 22:600–602. McCammon, Sarah. 2018. “Virginia Republican David Yancey Wins Tie-Breaking Drawing”. NPR. Visited on 08/09/2018. https://www.npr.org/2018/ 01/04/573504079/virginia-republican-david-yancey-wins-tie- breaking-drawing. McGerr, Michael E. 1986. The Decline of Popular Politics. New York, NY: Oxford University Press. McNulty, John E., Conor M. Dowling, and Margaret H. Ariotti. 2009. “Driving Saints to Sin: How Increasing the Difficulty of Voting Dissuades Even the Most Motivated Voters”. Political Analysis 17 (4): 435–455. Melin, Anders, and Reade Pickert. 2018. “Private Equity Controls the Gate- keepers of American Democracy”. Bloomberg (). https://www.bloomberg. com/news/articles/2018-11-03/private-equity-controls-the- gatekeepers-of-american-democracy.

169 Merriam, Charles Edward, and Harold Fotte Gosnell. 1924. Non-Voting: Causes and Methods of Control. Chicago, IL: The University of Chicago Press. Miller, Arthur H. 1974. “Political Issues and Trust in Government: 1964-1970”. The American Political Science Review 68 (3): 951–972. Millman, Jennifer. 2016. “Broken Machines, Extensive Lines, Missing Records Plague Tri-State Voters on Election Day”. NBC 4 New York. Visited on 05/10/2019. https://www.nbcnewyork.com/news/local/Polling- Problems - New - York - New - Jersey - Connecticut - Vote - Report - Issue-400380091.html. Montjoy, Robert S. 2010. “The Changing Nature... and Costs... of Election Administration”. Public Administration Review 70 (6): 867–875. — . 2008. “The Public Administration of Elections”. Public Administra- tion Review 68 (5): 788–799. Moore, Mark H. 2014. “Public Value Accounting: Establishing the Philosoph- ical Basis”. Public Administration Review 74 (4): 465–477. Moran, Patrick A.P. 1950. “Notes on Continuous Stochastic Phenomena”. Biometrika 37 (1/2): 17–23. — . 1948. “The Interpretation of Statistical Maps”. Journal of the Royal Statistical Society. Series B (Methodological) 10 (2): 243–251. Moynihan, Donald P., and Stephane Lavertu. 2012. “Cognitive Biases in Gov- erning: Technology Preferences in Election Administration”. Public Ad- ministration Review 72 (1): 68–77. Moynihan, Donald P., and Carol L. Silva. 2008. “The Administrators of Democ- racy: A Research Note on Local Election Officials”. Public Administra- tion Review 68 (5): 816–827. Murad, Havi, et al. 2003. “Small Samples and Ordered Logistic Regression: Does it Help to Collapse Categories of Outcome?” The American Statis- tician 57 (3): 155–160. Nagler, Jonathan. 1991. “The Effect of Registration Laws and Education on U.S. Voter Turnout”. The American Political Science Review 85 (4): 1393–1405. Neuman, William, and Tyler Pager. 2018. “A Citywide Paper Jam: Ballot Problems Fuel Calls for Election Reform”. The New York Times. Vis- ited on 05/10/2019. https : / / www . nytimes . com / 2018 / 11 / 07 / nyregion/voting-problems-nyc-.html.

170 Norris, Pippa. 1999. “Critical Citizens: Global Support for Democratic Gover- nance”. Chap. Institutional Explanations for Political Support, ed. by Pippa Norris. New York, NY: Oxford University Press Inc. — . 2015. Why Elections Fail. New York, NY: Cambridge University Press. — . 2014. Why electoral integrity matters. New York, NY: Cambridge Uni- versity Press. Norris, Pippa, Thomas Wynter, and Max Gromping. 2017. Perceptions of Elec- toral Integrity, (PEI-5.5). doi:10.7910/DVN/EWYTZ7. https://doi. org/10.7910/DVN/EWYTZ7. North Dakota Secretary of State. 2017. North DakotaˆaS.Theˇ Only State With- out Voter Registration. https://vip.sos.nd.gov/pdfs/portals/ votereg.pdf. O’Connell, Ann, and Xing Liu. 2011. “Model Diagnostics for Proportional and Partial Proportional Odds Models”. Journal of Modern Applied Statistical Methods 10 (1): 139–175. Odland, John. 1988. Spatial autocorrelation. Newbury Park, CA: Sage Publi- cations. Peterson, Bercedis, and Frank E. Harrell. 1990. “Partial Proportional Odds Models for Ordinal Response Variables”. Journal of the Royal Statisti- cal Society. Series C (Applied Statistics) 39 (2): 205–217. Piven, Frances Fox, and Richard A. Cloward. 1988. Why Americans don’t vote. New York, NY: Pantheon Books. Powers, Daniel, and Yu Xie. 2000. Statistical Methods for Categorical Data Analysis. San Diego, CA: Academic Press. Price, Linda L., Eric J. Arnould, and Patrick Tierney. 1995. “Going to Ex- tremes: Managing Service Encounters and Assessing Provider Perfor- mance”. Journal of Marketing 59 (2): 83–97. Rappeport, Alan, and Alexander Burns. 2016. “Donald Trump Says He Will Accept Election Outcome (‘if I Win’)”. The New York Times. https:// www.nytimes.com/2016/10/21/us/politics/campaign-election- trump-clinton.html. Rhodes R. A. W., Wanna J. 2007. “The Limits to Public Value, or Rescuing Responsible Government from the Platonic Guardians”. The Australian Journal of Public Administration 66 (4): 406–421. Riker, William H., and Peter C. Ordeshook. 1968. “A Theory of the Calculus of Voting”. The American Political Science Review 62 (1): 25–42.

171 Rolfe, Meredith. 2012. Voter Turnout: A Social Theory of Political Participa- tion. New York, NY: Cambridge University Press. Rosenstone, Steven J., and Mark Hansen. 1993. Mobilization, Participation, and Democracy in America. New York, NY: Macmillan. Rutgers, Mark R. 2015. “As Good as It Gets? On the Meaning of Public Value in the Study of Policy and Management”. The American Review of Public Administration 45 (1): 29–45. Sances, Michael W., and Charles Stewart III. 2015. “Partisanship and con- fidence in the vote count: Evidence from U.S. national elections since 2000”. Electoral Studies 40:176 –188. Sanders, Elizabeth. 1980. “On the Costs, Utilities and Simple Joys of Voting”. The Journal of Politics 42 (3): 854–863. Schattschneider, Elmer Eric. 1960. The Semisovereign People: A Realist’s View of Democracy in America. Fort Worth, TX: Harcourt Brace Jovanovich. Schieffer, Bob. 2012. “Bob Schieffer on the pleasures of voting”. CBS News. Visited on 08/09/2018. https : / / www . cbsnews . com / news / bob - schieffer-on-the-pleasures-of-voting/. Schlegel, Benjamin, and Marco Steenbergen. 2018. brant: Test for Parallel Regression Assumption. R package version 0.2-0. https://CRAN.R- project.org/package=brant. Schmitt, Bernd. 1999. Experiential Marketing: How to Get Customers to Sense, Feel, Think, Act, and Relate to Your Company and Brands. New York, NY: Free Press. Schons, Mary. 2012. Woman Suffrage. https://www.nationalgeographic. org/news/woman-suffrage/. Shepsle, K.A. 2010. Analyzing Politics: Rationality, Behavior, and Institutions. New Institutionalism in America. W.W. Norton. Sigelman, Lee, and William D. Berry. 1982. “Cost and the Calculus of Voting”. Political Behavior 4 (4): 419–428. Sinclair, Betsy, Steven S. Smith, and Patrick D. Tucker. 2018. ““It’s Largely a Rigged System”: Voter Confidence and the Winner Effect in 2016”. Political Research Quarterly 71 (4): 854–868. Skinner, Burrhus Frederic. 1976. Walden Two. New York, NY: Macmillan. Spencer, D.M., and Z.S. Markovits. 2010. “Long Lines at Polling Stations? Observations from an Election Day Field Study”. Election Law Journal 9 (1): 3–17.

172 Stein, Robert M., and Gregg Vonnahme. 2014. “The Measure of American Elections”. Chap. Polling Place Practices and the Voting Experience, ed. by Barry C. Burden and Charles Stewart III. New York, NY: Cam- bridge University Press. Stewart, Charles, III. 2013a. 2012 Survey of the Performance of American Elections. — . 2015. 2014 Survey of the Performance of American Elections, Regular Study. — . 2017a. 2016 Survey of the Performance of American Elections. — . 2017b. 2016 Survey of the Performance of American Elections. — . 2013b. “Waiting to Vote in 2012”. The Journal of Law and Politics 28:439–463. Stewart, Charles, III, Daron R. Shaw, and Stephen Ansolabehere. 2013. 2013 Survey of US Local Election Officials. The Help America Vote Act and Election Administration. 2015. Congressional Research Service. Thoreau, Henry David. 1903. On the Duty of Civil Disobedience. Simple life series. Simple Life Press. Tingley, Dustin, et al. 2014. “mediation: R Package for Causal Mediation Analysis”. Journal of Statistical Software 59 (5): 1–38. http://www. jstatsoft.org/v59/i05/. Tullock, Gordon. 1967. Toward a Mathematics of Politics. Ann Arbor, MI: University of Michigan Press. Uhlaner, Carole Jean. 1993. “Information, Participation, and Choice”. Chap. What the Downsian Voter Weighs: A Reassessment of the Costs and Bene- fits of Action, ed. by Bernard Grofman, 67–79. Ann Arbor, MI: The University of Michigan Press. Verba, Sidney, and Norman H. Nie. 1972. Participation in America: Politi- cal Democracy and Social Equality. New York, NY: Harper & Row, Publishers, Inc. Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. 1995. Voice and Equality: Civic Voluntarism in American Politics. Cambridge, MA: Harvard University Press. Vozzella, Laura. 2013. “Herring wins Virginia attorney general race, elec- tions board announces”. The Washington Post. Visited on 05/10/2019. https://www.washingtonpost.com/local/virginia- politics/ herring - wins - virginia - attorney - general - race - elections -

173 board-announces/2013/11/25/7b661082-55e7-11e3-835d-e7173847c7cc_ story.html?noredirect=on&utm_term=.9b4f9b6e5c33. Wang, Tova Andrea. 2012. The Politics of Voter Suppression: Defending and Expanding Americans’ Right to Vote. Ithaca, NY: Cornell University Press. Weatherford, M. S. 1992. “Measuring Political Legitimacy”. The American Political Science Review 86 (1): 149–166. Williams, Richard. 2016. “Understanding and interpreting generalized ordered logit models”. The Journal of Mathematical Sociology 40 (1): 7–20. “Wilson Co. Using Device To Assure Voter Confidence”. 2016. NewsChannel 5 Nashville. Visited on 05/10/2019. https://www.newschannel5.com/ news/wilson-co-using-device-to-assure-voter-confidence. Wofford, Ben. 2016. “How to hack an election in 7 minutes”. Politico (). https: //www.politico.com/magazine/story/2016/08/2016-elections- russia- hack- how- to- hack- an- election- in- seven- minutes- 214144.

174