
The Electoral College: A Majority Efficiency Analysis

Vincent R. Merlin,* Thomas G. Senné†

Preliminary Version, January 29, 2008

One of the main topics of voting theory is the assessment of paradox probabilities in order to compare different voting rules. These computations rest upon assumptions on the probability distribution of voters' preferences. The two principal hypotheses are the Impartial Culture assumption, which stipulates that every voter picks his preference from a set of uniformly distributed orderings, and the Impartial Anonymous Culture assumption, which states that every voting situation has the same probability of occurring. Their main disadvantage is that they are theoretical a priori models which do not necessarily match reality. In this paper we propose to calibrate these usual models in order to assess the probability of the referendum paradox, that is, the probability that the popular winner does not obtain a majority of delegates. Indeed, one motivation of this paper is to take part in the debate arising from the criticisms raised by authors such as Gelman et al. [11, 12] and Regenwetter [26]. Accordingly, we provide a series of estimates of the referendum paradox probability in the Electoral College under a continuum of probabilistic models that match the real data more and more closely. The second motivation is to use these different probability assumptions to study the current seat allocation method in comparison with apportionment methods.

Keywords: Voting, IAC, IC, Referendum Paradox, Majority Efficiency, Electoral College.
JEL classification: D71.
* Corresponding author: CREM, Université de Caen, 14032 Caen Cedex, France (email: [email protected], tel: 02 31 56 62 49)
† CREM, Université de Caen, 14032 Caen Cedex, France (email: [email protected])

1 Introduction

During Summer 2000, just before the US presidential election, at APSA's annual meeting, numerous political scientists predicted a Democratic (Al Gore) victory by upwards of 6 percentage points. In November, the outcome was a surprisingly close election. With 48.4% of the popular vote, A. Gore won only 21 States out of 51, for a total of 267 electors in the Electoral College, while G. W. Bush got 271 electors with less support in the popular vote. This paradox is known in the Social Choice literature as the referendum paradox (RP; see Nurmi [21], Saul [28], Feix, Lepelley, Merlin and Rouet [9]). The U.S. presidential elections displayed the paradox in 1876, 1888 and 2000, and close outcomes (a margin between the top two candidates of less than about 4%) have been observed 11 times since the two-party system was established1 (see Table 1).

Table 1: The eleven closest U.S. presidential elections

Year   Popular Vote (%)   Electoral Vote   Margin (%)   RP
       Dem.    Rep.       Dem.    Rep.
1876   51.0    48.0       184     185       3.0         Yes
1880   48.3    48.2       214     155       0.1         No
1884   48.5    48.3       219     182       0.2         No
1888   48.6    47.8       168     233       0.8         Yes
1892   46.1    43.0       277     145       3.1         No
1916   49.2    46.1       277     254       3.1         No
1960   49.8    49.6       303     219       0.2         No
1968   42.7    43.4       191     301       0.7         No
1976   50.1    48.0       297     240       2.1         No
2000   48.4    47.9       267     271       0.5         Yes
2004   48.3    50.7       251     286       2.4         No

This paradox belongs to a class of voting paradoxes in which choice sets vary in counterintuitive ways when the same electoral data are aggregated differently. Two other famous paradoxes in this class are Ostrogorski's paradox (see Ostrogorski [23], Nurmi [21, 22], Laffond and Lainé [19], Saari and Sieberg [27]) and Anscombe's paradox (see Anscombe [1], Wagner and Carler [29, 30], Saul [28], Nurmi [21]).
The referendum paradox calls into question the institution of the consultative referendum in a representative democracy. In a consultative referendum, the issues voted upon in the referendum are finally decided by the parliament. More generally, the referendum paradox occurs when, for example, a majority of voters supports an issue while the members of parliament, or here the Electoral College, reverse the majority decision. The first motivation of this work is to answer the following question: is the U.S. presidential election system a good two-tier voting system? Indeed, one of the most important questions in a federal union is how to design a good allocation of the mandates among the union members (here, the States). Several normative criteria exist to solve this problem. Most of them belong to the power index literature (see Penrose [24, 25], Banzhaf [3], Felsenthal and Machover [8]). Recently, new criteria based on total utility maximization have been proposed (see Felsenthal and Machover [8], Barberà and Jackson [6], Beisbart, Bowens and Hartmann [7], Kirsch [17]). Close to this idea, Feix, Lepelley, Merlin and Rouet [9] suggested the concept of majority efficiency: the majority efficiency of a method corresponds to the referendum paradox probability under this model2. Ultimately, we want to maximize the probability that the candidate elected through a two-tier voting rule reflects the popular choice. The next question is then: how can the referendum paradox probability be computed as fairly as possible?

1 The US presidential elections recorded the paradox once before, in 1824. Three candidates received Electoral College votes, but as no presidential candidate received an electoral majority, the election was determined by the House of Representatives. John Quincy Adams won the vote with the support of 13 State delegations, though he was not the popular vote winner.
In fact, in social choice theory, it is usual to set some a priori assumptions on the behavior of the voters. They make it possible to compare voting systems from a neutral point of view: they are not contingent on history. A voting system may be good for a certain voting pattern at time t, but the evolution over time may make it inadequate in the future. Hence, the probabilistic models used throughout the literature convey some notion of impartiality: they assume that each party is equally likely to win, and one can simulate the votes under different probability models without any reference to a precise political context. The two models most often introduced in the Social Choice literature are the Impartial Culture (IC) and the Impartial Anonymous Culture (IAC) assumptions. Under IC, all preference profiles are equally likely; in our case of two candidates, it assumes that each voter selects a party with equal probability. When the number of voters in a State i is sufficiently large, the distribution of the votes then follows a normal law. The IAC assumption considers that all voting situations have the same probability of occurrence; in our case, every final distribution of the votes between the two candidates is equally likely. The main advantage of these two models is the veil of ignorance they convey: without any data, they can assess the frequency of numerous paradoxes or evaluate the influence of voters via a power index, and they enable the comparison of different voting rules from a normative point of view. However, an important problem has recently been brought to light: electoral data provide more information and do not match the a priori models. Thus, any recommendation based on a priori voting models may clash with reality. Two main works are at the origin of this criticism.
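To make the IC and IAC assumptions concrete, the following sketch estimates the referendum paradox probability by Monte Carlo simulation for a hypothetical three-State union with winner-take-all delegations. The State sizes and seat counts are illustrative assumptions, not the actual Electoral College, and the IAC variant draws each State's vote split independently and uniformly, a common per-State simplification of the assumption.

```python
import random

# Hypothetical three-State union: (number of voters, seats per State).
# Odd sizes everywhere rule out ties. These figures are illustrative only.
STATES = [(9, 5), (7, 4), (3, 2)]

def one_election(model="IC"):
    """Simulate one election; return (A wins popular vote, A wins seats).

    IC : each voter votes for party A with probability 1/2, independently.
    IAC: each State's vote split is drawn uniformly at random (a
         per-State simplification of the Impartial Anonymous Culture).
    """
    popular_a, seats_a = 0, 0
    for n_voters, seats in STATES:
        if model == "IC":
            votes_a = sum(random.random() < 0.5 for _ in range(n_voters))
        else:
            votes_a = random.randint(0, n_voters)
        popular_a += votes_a
        if 2 * votes_a > n_voters:      # winner-take-all within the State
            seats_a += seats
    total_voters = sum(n for n, _ in STATES)
    total_seats = sum(s for _, s in STATES)
    return 2 * popular_a > total_voters, 2 * seats_a > total_seats

def referendum_paradox_rate(model="IC", trials=100_000, seed=0):
    """Estimated probability that the popular and seat winners disagree."""
    random.seed(seed)
    hits = sum(pop != seat
               for pop, seat in (one_election(model) for _ in range(trials)))
    return hits / trials
```

Calibrating such a model toward real data, as proposed in this paper, amounts to replacing the uniform draws above with distributions fitted to observed State-level vote shares.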
In their recent book, Regenwetter, Grofman, Marley and Tsetlin [26] develop the conceptual, mathematical, methodological and empirical foundations of behavioral social choice research. This notion encompasses two major interconnected paradigms: the development of behavioral choice theory and the evaluation of that theory with empirical data on social choice behavior. The survey data they study do not at all look like random samples from a uniform distribution, i.e., from an Impartial Culture. This reinforces the view that the Impartial Culture is unrealistic and not descriptive of empirical data on social choice behavior. Moreover, they show the propensity of the IC model to overestimate the frequency of majority cycles, which turn out to be very rare with random sampling. The second work was carried out by Gelman, Katz and Bafumi [11] on voting power indices. They showed that Straffin's Independence assumption (game theory's equivalent of the Impartial Culture) had to be rejected for the elections of senators, representatives and the president in the United States; similar conclusions are drawn from the electoral data collected across Europe. This particular model does not fit any of the empirical data they examined. They conclude that voting power evaluations are based on an empirically falsified and theoretically unjustified model. For them, a more realistic and reasonable model should incorporate local, regional and national effects. The IAC model is not tested in these two preceding works, but we may suspect that it is not realistic enough either.

2 This concept is equivalent to the concept of Condorcet efficiency developed by Fishburn and Gehrlein for the analysis of scoring rules. For three candidates or more, the Condorcet efficiency is the conditional probability that a voting rule selects the Condorcet winner, given that such a candidate exists. In our case, with two candidates only, a majority winner always exists.
The resulting question is then: if we still want to use a priori models (because they are relatively easy to implement, can be interpreted normatively and give a neutral point of view for the evaluation of voting rules), which adaptations are possible to make them more realistic? Theoretical answers have been given in the literature (see Nurmi [20, 21, 22], Feix, Lepelley, Merlin and Rouet [9, 10]).