arXiv:2102.02785v2 [cs.AI] 9 Mar 2021

Egalitarian Judgment Aggregation

Sirin Botan, University of Amsterdam, The Netherlands, [email protected]
Ronald de Haan, University of Amsterdam, The Netherlands, [email protected]
Marija Slavkovik, University of Bergen, Norway, [email protected]
Zoi Terzopoulou, University of Amsterdam, The Netherlands, [email protected]

ABSTRACT
Egalitarian considerations play a central role in many areas of social choice theory. Applications of egalitarian principles range from ensuring everyone gets an equal share of a cake when dividing it, to guaranteeing balance with respect to gender or ethnicity in committee elections. Yet, the egalitarian approach has received little attention in judgment aggregation—a powerful framework for aggregating logically interconnected issues. We make the first steps towards filling that gap. We introduce axioms capturing two classical interpretations of egalitarianism in judgment aggregation and situate these within the context of existing axioms in the pertinent framework of belief merging. We then explore the relationship between these axioms and several notions of strategyproofness from social choice theory at large. Finally, a novel egalitarian judgment aggregation rule stems from our analysis; we present complexity results concerning both outcome determination and strategic manipulation for that rule.

KEYWORDS
Social Choice Theory, Judgment Aggregation, Egalitarianism, Strategic Manipulation, Computational Complexity

ACM Reference Format:
Sirin Botan, Ronald de Haan, Marija Slavkovik, and Zoi Terzopoulou. 2021. Egalitarian Judgment Aggregation. In Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), U. Endriss, A. Nowé, F. Dignum, A. Lomuscio (eds.), Online, May 3–7, 2021, IFAAMAS, 14 pages.

© 2021 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

1 INTRODUCTION

Judgment aggregation is an area of social choice theory concerned with turning the individual binary judgments of a group of agents on logically related issues into a collective judgment [23]. Being a flexible and widely applicable framework, judgment aggregation provides foundations for collective decision making settings in various disciplines, like philosophy, economics, legal theory, and artificial intelligence [39]. The purpose of judgment aggregation methods (rules) is to find those collective judgments that better represent the group as a whole. Following the utilitarian approach in social choice, the "ideal" such judgment has traditionally been considered to be the judgment of the majority. In this paper we challenge this perspective, introducing a more egalitarian point of view.

In economic theory, utilitarian approaches are often contrasted with egalitarian ones [55]. In the context of judgment aggregation, an egalitarian rule must take into account whether the collective outcome achieves equally distributed satisfaction among agents and ensure that agents enjoy equal consideration. A rapidly growing application domain of egalitarian judgment aggregation, which also concerns multiagent systems with practical implications (as in the construction of self-driving cars), is the aggregation of moral choices [15], where utilitarian approaches do not always offer appropriate solutions [4, 57]. One of the drawbacks of majoritarianism is that a strong enough majority can cancel out the views of a minority, which is questionable in several occasions.

For example, suppose that the president of a student union has secured some budget for the decoration of the union's office, and she asks her colleagues for their opinions on which paintings to buy (perhaps imposing some constraints on the combinations of paintings that can be selected simultaneously, due to clashes between styles). If the members of the union largely consist of pop-art enthusiasts, then a president that simply tries to satisfy the majority puts the taste of the few remaining members to the test; a more viable strategy arguably would be to ensure that everyone is as satisfied as possible—that no-one is strongly dissatisfied—as otherwise colleagues with diverging tastes will find themselves in an office that they dislike. But then, consider a similar situation in which a kindergarten teacher needs to decide what toys to complement the existing ones in the playground with. In that case, the teacher's goal is to select toys that (dis)satisfy all kids involved equally, so that no extra tension is created due to envy, which the teacher will have to resolve—if the kids disagree a lot, then the teacher may end up choosing toys that none of them really likes.

In order to formally capture scenarios like the above, this paper introduces two fundamental properties (also known as axioms) of egalitarianism to judgment aggregation, inspired by the theory of justice. The first captures the idea behind the so-called veil of ignorance of Rawls [60], while the second speaks about how happy the agents are with the collective outcome relative to each other.

Our axioms closely mirror properties in other areas of social choice theory. In resource allocation, fairness has been interpreted both as maximising the share of the worst off agent [11] as well as eliminating envy between agents [32]. In belief merging, egalitarian axioms and merging operators have been studied by Everaere et al. [29]. The natural interpretation of egalitarianism in this paper is in line with their axioms, although the two main properties they study are logically weaker than ours, as we further discuss in Section 3.1. In multiwinner elections, egalitarianism is present in diversity [2, 20] and in proportional representation [22] notions.

Unfortunately, egalitarian considerations often come at a cost.
A central concern in many areas of social choice theory, of which judgment aggregation does not constitute an exception, is that agents may have incentives to manipulate, i.e., to misrepresent their judgments aiming for a more preferred outcome [18]. Frequently, it is impossible to simultaneously be fair and avoid strategic manipulation. For both variants of fairness in resource allocation, rules satisfying them usually are susceptible to strategic manipulation [1, 9, 13, 54]. The same type of results have recently been obtained for multiwinner elections [49, 58]. It is not easy to be egalitarian while disincentivising agents from taking advantage of it.

Inspired by notions of manipulation stemming from voting theory, we explore how our egalitarian axioms affect the agents' strategic behaviour within judgment aggregation. Our most important result in this vein is showing that the two properties of egalitarianism defined in this paper clearly differ in terms of strategyproofness.

Our axioms give rise to two concrete egalitarian rules—one that has been previously studied, and one that is new to the literature. For the latter, we are interested in exploring how computationally complex its use is in the worst-case scenario. This kind of question, first addressed by Endriss et al. [28], is regularly asked in the literature of judgment aggregation [5, 25, 51]. As Endriss et al. [26] wrote recently, the problem of determining the collective outcome of a given judgment aggregation rule is "the most fundamental algorithmic challenge in this context".

The remainder of this paper is organised as follows. Section 2 reviews the basic model of judgment aggregation, while Section 3 introduces our two original axioms of egalitarianism and the rules they induce. Section 4 analyses the relationship between egalitarianism and strategic manipulation in judgment aggregation, and Section 5 focuses on relevant computational aspects: although the general problems of outcome determination and of strategic manipulation are proven to be very difficult, we propose a way to confront them with the tools of Answer Set Programming [36].

2 BASIC MODEL

Our framework relies on the standard formula-based model of judgment aggregation [52], but for simplicity we also use notation commonly employed in binary aggregation [38].

Let 𝒩 denote the (countably infinite) set of all agents that can potentially participate in a judgment aggregation setting. In every specific such setting, a finite set of agents N ⊂ 𝒩 of size n ≥ 2 express judgments on a finite and nonempty set of issues (formulas in propositional logic) Φ = {φ1, ..., φm}, called the agenda. J(Φ) ⊆ {0,1}^m denotes the set of all admissible opinions on Φ. Then, a judgment J is a vector in J(Φ), with 1 (respectively, 0) in position k meaning that the issue φk is accepted (rejected). J̄ is the antipodal judgment of J: for all φ ∈ Φ, φ is accepted in J̄ if and only if it is rejected in J.

A profile P = (J_1, ..., J_n) ∈ J(Φ)^n is a vector of individual judgments, one for each agent in a group N. We write P′ =_{-i} P when the profiles P and P′ are the same, besides the judgment of agent i. We write P_{-i} to denote the profile P with agent i's judgment removed, and (P, J) ∈ J(Φ)^{n+1} to denote the profile P with judgment J added. A judgment aggregation rule F is a function that maps every possible profile P ∈ J(Φ)^n, for every group N and agenda Φ, to a nonempty set F(P) of collective judgments in J(Φ). Note that a judgment aggregation rule is defined over groups and agendas of variable size, and may return several, tied, collective judgments.

The agents that participate in a judgment aggregation scenario will naturally have preferences over the outcome produced by the aggregation rule. First, given an agent i's truthful judgment J_i, we need to determine when agent i would prefer a judgment J over a different judgment J′. The most prevalent type of such preferences considered in the judgment aggregation literature is that of Hamming distance preferences [6–8, 64].

The Hamming distance between two judgments J and J′ equals the number of issues on which these judgments disagree—concretely, it is defined as H(J, J′) = Σ_{φ ∈ Φ} |J(φ) − J′(φ)|, where J(φ) denotes the binary value in the position of φ in J. For example, H(100, 111) = 2. Then, the (weak, and analogously strict) preference of agent i over judgments is defined by the relation ⪰_i (where J ⪰_i J′ means that i's satisfaction from J is at least as high as that from J′):

    J ⪰_i J′ if and only if H(J_i, J) ≤ H(J_i, J′).

But an aggregation rule often outputs more than one judgment, and thus we also need to determine agents' preferences over sets of judgments.^1 We define two requirements guaranteeing that the preferences of the agents over sets of judgments are consistent with their preferences over single judgments. To that end, let ⪰̊_i (with strict part ≻̊_i) denote agent i's preferences over sets X, Y ⊆ J(Φ). We require that ⪰̊_i is related to ⪰_i as follows:

• J ⪰_i J′ if and only if {J} ⪰̊_i {J′}, for any J, J′ ∈ J(Φ);
• X ≻̊_i Y implies that there exist some J ∈ X and J′ ∈ Y such that J ≻_i J′ and {J, J′} ⊄ X ∩ Y.

The above conditions hold for almost all well-known preference extensions. For example, they hold for the pessimistic preference (X ≻pess Y if and only if there exists J′ ∈ Y such that J ≻ J′ for all J ∈ X) and the optimistic preference (X ≻opt Y if and only if there exists J ∈ X such that J ≻ J′ for all J′ ∈ Y) of Duggan and Schwartz [19], as well as the preference extensions of Gärdenfors [33] and Kelly [43]. The results provided in this paper abstract away from specific preference extensions.

^1 Various approaches have been taken within the area of social choice theory in order to extend preferences over objects to preferences over sets of objects—see Barberà et al. [3] for a review.
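As an aside (ours, not part of the original paper), the definitions above are easy to prototype. The following minimal Python sketch encodes judgments as 0/1 tuples; the names hamming and weakly_prefers are chosen for illustration only.

    def hamming(j1, j2):
        """Number of issues on which two judgments disagree."""
        return sum(a != b for a, b in zip(j1, j2))

    def weakly_prefers(truthful, j, j_prime):
        """J is weakly preferred to J' by an agent with the given
        truthful judgment iff H(J_i, J) <= H(J_i, J')."""
        return hamming(truthful, j) <= hamming(truthful, j_prime)

    # Example from the text: H(100, 111) = 2.
    assert hamming((1, 0, 0), (1, 1, 1)) == 2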
3 EGALITARIAN AXIOMS AND RULES

This section focuses on two axioms of egalitarianism in judgment aggregation. We examine them in relation to each other and to existing properties from belief merging, as well as to the standard majority property defined below. Most of the well-known judgment aggregation rules return the majority opinion, when that opinion is logically consistent [24].^2

^2 A central problem in judgment aggregation concerns the fact that the issue-wise majority is not always logically consistent [52].

Let m(P) be the judgment that accepts exactly those issues accepted by a strict majority of agents in P. A rule F is majoritarian when for all profiles P, m(P) ∈ J(Φ) implies that F(P) = {m(P)}.

Our first axiom with an egalitarian flavour is the maximin property, suggesting that we should aim at maximising the utility of those agents that will be worst off in the outcome. Assuming that everyone submits their truthful judgment during the aggregation process, this means that we should try to minimise the distance of the agents that are furthest away from the outcome. Formally:
◮ A rule F satisfies the maximin property if for all profiles P ∈ J(Φ)^n and judgments J ∈ F(P), there do not exist a judgment J′ ∈ J(Φ) and an agent j ∈ N such that

    H(J_i, J′) < H(J_j, J) for all i ∈ N.

Although the maximin property is quite convincing, there are settings like those motivated in the Introduction where it does not offer sufficient egalitarian guarantees. We thus consider a different property next, which we call the equity property. This axiom requires that the gaps in the agents' satisfaction be minimised. In other words, no two agents should find themselves in very different distances with respect to the collective outcome. Formally:

◮ A rule F satisfies the equity property if for all profiles P ∈ J(Φ)^n and judgments J ∈ F(P), there do not exist a judgment J′ ∈ J(Φ) and agents i′, j′ ∈ N such that

    |H(J_{i′}, J′) − H(J_{j′}, J′)| < |H(J_i, J) − H(J_j, J)| for all i, j ∈ N.

No rule that satisfies either the maximin or the equity property can be majoritarian.^3 As an illustration, in a profile of only two agents who disagree on some issues, any egalitarian rule will try to reach a compromise, and this compromise will not be affected if any agents holding one of the two initial judgments are added to the profile—in contrast, a majoritarian rule will simply conform to the crowd.

^3 This includes popular rules like the median rule [56]—known under a number of other names, notably distance-based rule [59], Kemeny rule [24], and prototype rule [53].

Proposition 1 shows that it is also impossible for the maximin property and the equity property to simultaneously hold. Therefore, we have established the logical independence of all three axioms discussed so far: maximin, equity, and majoritarianism.

Proposition 1. No judgment aggregation rule can satisfy both the maximin property and the equity property.

Proof. Take an agenda Φ where J(Φ) consists of the four judgments below and consider the profile P = (J_1, J_2); the relevant Hamming distances between the judgments are indicated.

    J_1 = 110000    J_2 = 001100    J = 010000    J′ = 111111
    H(J_1, J) = 1,  H(J_1, J′) = 4,  H(J_2, J) = 3,  H(J_2, J′) = 4.

Every aggregation rule satisfying the maximin property will return {J}, as this judgment maximises the utility of the worst off agent—in this case, agent 2. However, every rule satisfying the equity property will return {J′}, as this judgment minimises the difference in utility between the best off and worst off agents. Thus, there is no rule that can satisfy the two properties at the same time. □

From Proposition 1, we also know now that the two properties of egalitarianism generate two disjoint classes of aggregation rules. In particular, in this paper we focus on the maximal rule that meets each property: a rule F is the maximal one of a given class if, for every profile P, the outcomes obtained by any other rule in that class are always outcomes of F too.^4

^4 Of course, several natural refinements of these rules can be defined, with respect to various other axiomatic properties that we may find desirable. Identifying and studying such rules is an interesting direction for future research.

The maximal rule satisfying the maximin property is the rule MaxHam (see, e.g., Lang et al., 2011). For all profiles P ∈ J(Φ)^n,

    MaxHam(P) = argmin_{J ∈ J(Φ)} max_{i ∈ N} H(J_i, J).

Analogously, we define a rule new to the judgment aggregation literature, which is the maximal one satisfying the equity property. For all profiles P ∈ J(Φ)^n,

    MaxEq(P) = argmin_{J ∈ J(Φ)} max_{i,j ∈ N} |H(J_i, J) − H(J_j, J)|.

To better understand these rules, consider an agenda with six issues: p, q, r ≡ p ∧ q, and their negations. Suppose that there are only two agents in a profile P, holding judgments J_1 = (111) and J_2 = (010). Then, we have that MaxHam(P) = {(111), (010)}, while MaxEq(P) = {(000), (100)}. In this example, the difference in spirit between the two rules of our interest is evident. Although the MaxHam rule is able to fully satisfy exactly one of the agents without causing much harm to the other, it still creates greater unbalance than the MaxEq rule, which ensures that the two agents are equally happy with the outcome (under Hamming-distance preferences). In that sense, MaxEq is better suited for a group of agents that do not want any of them to feel particularly put upon, while MaxHam seems more desirable when a minimum level of happiness is asked for.

MaxHam generalises minimax approval voting [10], which is the special case without logical constraints on the judgments, meaning agents may approve any subset of issues. Brams et al. [10] show that MaxHam remains manipulable in this special case. As finding the outcome of minimax is computationally hard, Caragiannis et al. [12] provide approximation algorithms that circumvent this problem. They also demonstrate the interplay between manipulability and lower bounds for the approximation algorithm—establishing strategyproofness results for approximations of minimax.
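Both rules can be computed by exhaustive enumeration of J(Φ) when the agenda is small—precisely the brute force that Section 5 shows cannot be avoided in general. The sketch below (ours, with hypothetical helper names maxham and maxeq) illustrates the two objectives on an unconstrained two-issue agenda with two maximally disagreeing agents.

    from itertools import product

    def hamming(j1, j2):
        return sum(a != b for a, b in zip(j1, j2))

    def maxham(domain, profile):
        """argmin over J(Phi) of the largest distance to any agent."""
        worst = lambda j: max(hamming(ji, j) for ji in profile)
        best = min(worst(j) for j in domain)
        return {j for j in domain if worst(j) == best}

    def maxeq(domain, profile):
        """argmin over J(Phi) of the largest gap between two agents' distances."""
        gap = lambda j: (max(hamming(ji, j) for ji in profile)
                         - min(hamming(ji, j) for ji in profile))
        best = min(gap(j) for j in domain)
        return {j for j in domain if gap(j) == best}

    domain = list(product((0, 1), repeat=2))   # no logical constraints
    profile = [(1, 1), (0, 0)]
    print(maxham(domain, profile))   # {(0, 1), (1, 0)}: both agents at distance 1
    print(maxeq(domain, profile))    # {(0, 1), (1, 0)}: both gaps equal 0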
3.1 Relations with Egalitarian Belief Merging

A framework closely related to ours is that of belief merging [45], which is concerned with how to aggregate several (possibly inconsistent) sets of beliefs into one consistent belief set.^5 Egalitarian belief merging is studied by Everaere et al. [29], who examine interpretations of the Sen-Hammond equity condition [63] and the Pigou-Dalton transfer principle [17]—two properties that are logically incomparable.^6 We situate our egalitarian axioms within the context of these egalitarian axioms from belief merging; we reformulate these axioms into our framework.

^5 We refer to Everaere et al. [30] for a detailed comparison of the two frameworks.
^6 Another egalitarian property in belief merging is the arbitration postulate. We do not go into detail on this postulate, but refer the reader to Konieczny and Pérez [45].

◮ Fix an arbitrary profile P, agents i, j, and any two judgment sets J, J′ ∈ J(Φ). An aggregation rule F satisfies the Sen-Hammond equity property if whenever

    H(J_i, J) < H(J_i, J′) < H(J_j, J′) < H(J_j, J)

and H(J_{i′}, J) = H(J_{i′}, J′) for all other agents i′ ∈ N \ {i, j}, then J ∈ F(P) implies J′ ∈ F(P).

Proposition 2. If a rule satisfies either the maximin property or the equity property, then it will satisfy the Sen-Hammond equity property.

Proof (sketch). Let P be a profile and i, j agents such that H(J_i, J) < H(J_i, J′) < H(J_j, J′) < H(J_j, J), and H(J_{i′}, J) = H(J_{i′}, J′) for all other agents i′ ∈ N \ {i, j}. Suppose F satisfies the equity property—if there is some agent i′ such that |H(J_i, J) − H(J_{i′}, J)| > |H(J_i, J) − H(J_j, J)|, then J ∈ F(P) if and only if J′ ∈ F(P), as the maximal difference in distance will be the same for the two judgments. If this is not the case, then agents i and j determine the outcome regarding J and J′, so clearly J ∈ F(P) implies J′ ∈ F(P). The argument for other cases proceeds similarly.

If F satisfies the maximin property, then a similar argument tells us that if membership of J and J′ in the outcome is determined by an agent other than i or j, we will either have both or neither. If i and j are the determining factor, then J ∈ F(P) implies J′ ∈ F(P). □

[Figure 1: Relations between the properties Equity, Pigou-Dalton, Maximin, and Sen-Hammond. Dashed lines denote incompatibility, dotted lines incomparability, and arrows implication relations.]

◮ Given a profile P = (J_1, ..., J_n) and agents i and j such that:
– H(J_i, J) < H(J_i, J′) ≤ H(J_j, J′) < H(J_j, J),
– H(J_i, J′) − H(J_i, J) = H(J_j, J) − H(J_j, J′), and
– H(J_{i*}, J) = H(J_{i*}, J′) for all other agents i* ∈ N \ {i, j},
a rule F satisfies the Pigou-Dalton transfer principle if J′ ∈ F(P) implies J ∉ F(P).

We refer to these axioms simply as Sen-Hammond and Pigou-Dalton. Note that Pigou-Dalton is also a weaker version of our equity property, as it stipulates that the difference between the utility of agents should be lessened under certain conditions, while the equity property always aims to minimise this distance.

While we can find a rule that satisfies both the equity property and a weakening of the maximin property, Sen-Hammond, we cannot do the same by weakening the equity property.

Proposition 3. No judgment aggregation rule can satisfy both the maximin property and Pigou-Dalton.

Proof. Consider the domain J(Φ) = {J_1, J_2, J_3, J, J′} with the following Hamming distances between judgment sets.^7

            J    J′   J_1  J_2  J_3
    J_1     2    4    0    4    8
    J_2     6    4    4    0    10
    J_3     6    6    8    10   0

^7 One such domain would be the following, where J = 00000000111, J′ = 00000001110, J_1 = 00000010011, J_2 = 00000111000, and J_3 = 11111001111.

Let P = (J_1, J_2, J_3). If F satisfies the maximin property, {J, J′} ⊆ F(P), as we can see from the first two columns of the table. This means Pigou-Dalton is violated in this profile, as J′ ∈ F(P) should imply J ∉ F(P). □

We summarise the observations of this section in Figure 1.

4 STRATEGIC MANIPULATION

This section provides an account of strategic manipulation with respect to the egalitarian axioms defined in Section 3. We start off with presenting the most general notion of strategic manipulation in judgment aggregation, introduced by Dietrich and List [18].^8 We assume Hamming preferences throughout this section.

^8 The original definition of Dietrich and List [18] concerned single-judgment collective outcomes, and a type of preferences that covers Hamming-distance ones.

Definition 1. A rule F is susceptible to manipulation by agent i in profile P, if there exists a profile P′ =_{-i} P such that F(P′) ≻̊_i F(P). We say that F is strategyproof in case F is not manipulable by any agent i ∈ N in any profile P ∈ J(Φ)^n.

Proposition 4 shows an important fact: in judgment aggregation, egalitarianism is incompatible with strategyproofness.^9

^9 This is in line with Brams et al.'s work on the minimax rule in approval voting.

Proposition 4. If an aggregation rule is strategyproof, it cannot satisfy the maximin property or the equity property.

Proof. We show the contrapositive. Let Φ be an agenda such that J(Φ) = {000000, 110000, 111000, 111111}. Consider the following two profiles P (left) and P′ (right).

    P :  J_i = 111000, J_j = 000000;    F(P) = {110000}
    P′:  J′_i = 111111, J′_j = 000000;  F(P′) = {111000}

In profile P, both the maximin and the equity properties prescribe that 110000 should be returned as the single outcome, while in profile P′ they agree on 111000. Because P′ = (P_{-i}, J′_i) and 111000 ≻_i 110000, this is a successful manipulation. Thus, if F satisfies the maximin or the equity property, it fails strategyproofness. □

Strategyproofness according to Definition 1 is a strong requirement, which many known rules fail [8].
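The profiles in the proof of Proposition 4 can be checked mechanically. The sketch below (ours, not part of the original development) brute-forces both egalitarian optima over the stated domain and confirms that agent i's distance to the outcome drops from 1 to 0 after the misreport.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def argmin(domain, key):
        best = min(key(j) for j in domain)
        return {j for j in domain if key(j) == best}

    domain = [(0,)*6, (1, 1, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0), (1,)*6]
    truthful = (1, 1, 1, 0, 0, 0)
    for reported in [truthful, (1,)*6]:        # profile P, then manipulated P'
        profile = [reported, (0,)*6]
        maximin = argmin(domain, lambda j: max(hamming(v, j) for v in profile))
        equity = argmin(domain, lambda j: max(hamming(v, j) for v in profile)
                                          - min(hamming(v, j) for v in profile))
        print(maximin, equity,                 # both yield {110000}, then {111000}
              [hamming(truthful, j) for j in maximin])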
We investigate two more nuanced notions of strategyproofness that are novel to judgment aggregation, yet have familiar counterparts in voting theory.

First, no-show manipulation happens when an agent can achieve a preferable outcome simply by not submitting any judgment, instead of reporting a truthful or an untruthful one.

Definition 2. A rule F is susceptible to no-show manipulation by agent i in profile P if F(P_{-i}) ≻̊_i F(P). We say that F satisfies participation if it is not susceptible to no-show manipulation by any agent i ∈ N in any profile.^10

^10 Cf. the no-show paradox in voting [31].

Second, antipodal strategyproofness poses another barrier against manipulation, by stipulating that an agent cannot change the outcome towards a better one for herself by reporting a totally untruthful judgment. This is a strictly weaker requirement than full strategyproofness, serving as a protection against excessive lying.

Definition 3. A rule F is susceptible to antipodal manipulation by agent i in profile P if F(P_{-i}, J̄_i) ≻̊_i F(P). We say that F satisfies antipodal strategyproofness if it is not susceptible to antipodal manipulation by any agent i ∈ N in any profile.

As is the case for participation, antipodal strategyproofness is a weaker notion of strategyproofness as far as the MaxHam and the MaxEq rules are concerned.

In voting theory, Sanver and Zwicker [61] show that participation implies antipodal strategyproofness (or half-way monotonicity, as it is called in that framework) for rules that output a single winning alternative. Notably, this is not always the case in our model (see Example 1). This is not surprising, as obtaining such a result independently of the preference extension would be significantly stronger than the result by Sanver and Zwicker [61]. We are, however, able to reproduce this relationship between participation and strategyproofness in Theorem 1, for a specific type of preferences.

Example 1. We present a rule that satisfies participation but violates antipodal strategyproofness. The other direction admits a similar example, and is thus omitted. Note that the rule demonstrated is quite unnatural, for simplicity of the presentation.

Consider an agenda Φ with J(Φ) = {00, 01, 11}.^11 We construct an anonymous rule F that is only sensitive to which judgments are submitted and not to their quantity:

    F(00) = F(11) = F(01, 00) = F(00, 11) = {01, 11};
    F(01) = {00, 11};    F(01, 11) = F(01, 00, 11) = {01}.

For the pessimistic preference, no agent can be strictly better off by abstaining. However, compare the profiles (01, 00) and (01, 11): agent 2 with truthful judgment 00 can move from outcome {01, 11} to outcome {01}, which is strictly better for her.

^11 For other agendas we can simply take the rule to be constant.

While the two axioms are independent in the general case, participation implies antipodal strategyproofness (Theorem 1) if we stipulate that

• X ≻̊_i Y if and only if there exist some J ∈ X and J′ ∈ Y such that J ≻_i J′ and {J, J′} ⊄ X ∩ Y.

If a preference satisfies the above condition, we say that it is decisive. This condition gives rise to a preference extension equivalent to the large preference extension of Kruger and Terzopoulou [48]. Note that a decisive preference is not necessarily acyclic—in fact, it may even be symmetric. The interpretation of such a preference extension is slightly different than the usual one; when we say that a rule is strategyproof for a decisive preference where both X ≻̊ Y and Y ≻̊ X hold, we mean that no agent i with X ≻̊_i Y and no agent j ≠ i with Y ≻̊_j X will ever have an incentive to manipulate.

Lemma 1. For judgment sets J, J′ and J′′: H(J, J′) > H(J, J′′) if and only if H(J̄, J′) < H(J̄, J′′).

Proof. For judgment sets J, J′ ∈ J(Φ), we have H(J̄, J′) = m − H(J, J′). Suppose H(J, J′) > H(J, J′′). Then H(J̄, J′) = m − H(J, J′) < m − H(J, J′′) = H(J̄, J′′). The other direction is analogous. □

Using Lemma 1, we can now prove a result analogous to the one in voting theory, to give a complete picture of how these axioms relate to each other in judgment aggregation.

Theorem 1. For decisive preferences over sets of judgments, participation implies antipodal strategyproofness.

Proof. Working on the contrapositive, suppose that F is susceptible to antipodal manipulation. We will prove that F is susceptible to no-show manipulation too. We know that there exists an agent i ∈ N such that F(P_{-i}, J̄_i) ≻̊_i F(P_{-i}, J_i), for some profile P. This means that there exist J′ ∈ F(P_{-i}, J̄_i) and J ∈ F(P_{-i}, J_i) with J′ ≻_i J. Equivalently,

    H(J_i, J′) < H(J_i, J).    (1)

Next, consider a judgment J′′ ∈ F(P_{-i}). If H(J_i, J′′) < H(J_i, J), then F is susceptible to no-show manipulation by agent i in the profile (P_{-i}, J_i). Otherwise, H(J_i, J) ≤ H(J_i, J′′); together with Inequality (1), this gives H(J_i, J′) < H(J_i, J′′), and Lemma 1 then implies that H(J̄_i, J′′) < H(J̄_i, J′). By decisiveness, this means that F is susceptible to no-show manipulation by agent i in the profile (P_{-i}, J̄_i). □

We next prove that any rule satisfying the maximin property is immune to both no-show manipulation and antipodal manipulation (Theorem 2), while this is not true for the equity property (Proposition 5).^12 We emphasise that the theorem holds for all preference extensions. These results—holding for two independent notions of strategyproofness—are significant for two reasons. First, they bring to light the conditions under which we can have our cake and eat it too, simultaneously satisfying an egalitarian property and a degree of strategyproofness. In addition, they provide a further way to distinguish between the properties of maximin and equity: the former is better suited in contexts where we may worry about the agents' strategic behaviour.

^12 Note that antipodal strategyproofness is not so weak a requirement that it is immediately satisfied by all "utilitarian" aggregation rules. For example, the Copeland voting rule fails the analogous axiom of half-way monotonicity [66].

Theorem 2. The maximin property implies participation and antipodal strategyproofness.

Proof. We prove the participation case; the proof for antipodal strategyproofness is analogous, and utilises Lemma 1.

Suppose for contradiction that F is a rule that satisfies the maximin property but violates participation. Then there must exist an agent i ∈ N and a profile P, where J_i is agent i's truthful judgment, such that F(P_{-i}) ≻̊_i F(P). This means there must exist judgments J ∈ F(P) and J′ ∈ F(P_{-i}) such that J′ ≻_i J and {J, J′} ⊄ F(P) ∩ F(P_{-i}). Because agent i strictly prefers J′ to J, this means that H(J_i, J) > H(J_i, J′). We consider two cases.

Case 1: Suppose that J′ ∉ F(P). Let k be the distance between the worst off agent's judgment in P and any judgment in F(P). Then,

    H(J_{j′}, J) ≤ k for all j′ ∈ N.    (2)

We know that H(J_i, J′) < k because H(J_i, J) ≤ k, and agent i strictly prefers J′ to J. From Inequality (2), this means that if J′ is not among the outcomes in F(P), there has to be some j ∈ N \ {i} such that H(J_j, J′) > k. But all judgments submitted in profile P_{-i} by agents in N \ {i} are at most at distance k from J by Inequality (2), so any rule satisfying the maximin property would select J rather than J′ as an outcome of F(P_{-i})—a contradiction.

Case 2: Suppose that J′ ∈ F(P), meaning that J ∉ F(P_{-i}). Analogously to the first case, let k′ be the distance between the worst off agent's judgment in P_{-i} and any judgment in F(P_{-i}). Then,

    H(J_{j′}, J′) ≤ k′ for all j′ ∈ N \ {i}.    (3)

Moreover, since J ∉ F(P_{-i}), it is the case that

    H(J_j, J) > k′ for some j ≠ i.    (4)

In profile P, Inequalities (3) and (4) still hold. In addition, we have that H(J_i, J) > H(J_i, J′) because agent i strictly prefers J′ to J. So, for any rule satisfying the maximin property, judgment J′ will be better as an outcome of F(P) than J, a contradiction. □

Corollary 1. The rule MaxHam satisfies antipodal strategyproofness and participation.

Proposition 5. No rule that satisfies the equity property can satisfy participation or antipodal strategyproofness.

Proof. The following is a counterexample for antipodal strategyproofness. A similar one exists for participation.

Consider the profiles P = (J_i, J_j) and P′ = (P_{-i}, J̄_i), with J(Φ) = {00110, 00000, 01110, 10000, 11111}. We give a representation of the profiles as well as the outcomes under an arbitrary rule F that satisfies the equity principle; the Hamming distance from each individual judgment to the collective one is indicated.

    P :  J_i = 00000, J_j = 01110;   F(P) = {00110},  with H(J_i, 00110) = 2 and H(J_j, 00110) = 1.
    P′:  J̄_i = 11111, J_j = 01110;   F(P′) = {10000}, with H(J̄_i, 10000) = 4 and H(J_j, 10000) = 4.

It is clear that agent i will benefit from her antipodal manipulation, as her true judgment is much closer to the singleton outcome in P′ than the singleton outcome in P. □

Corollary 2. The rule MaxEq does not satisfy participation or antipodal strategyproofness.
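Likewise, the counterexample in the proof of Proposition 5 can be verified by enumeration. In the sketch below (ours), equity_outcomes returns the judgments minimising the satisfaction gap; the antipodal report moves agent i from distance 2 to distance 1.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def equity_outcomes(domain, profile):
        gap = lambda j: (max(hamming(v, j) for v in profile)
                         - min(hamming(v, j) for v in profile))
        best = min(gap(j) for j in domain)
        return {j for j in domain if gap(j) == best}

    domain = [(0,0,1,1,0), (0,0,0,0,0), (0,1,1,1,0), (1,0,0,0,0), (1,1,1,1,1)]
    truthful, other = (0, 0, 0, 0, 0), (0, 1, 1, 1, 0)
    antipode = tuple(1 - b for b in truthful)

    honest = equity_outcomes(domain, [truthful, other])   # {(0, 0, 1, 1, 0)}
    lying = equity_outcomes(domain, [antipode, other])    # {(1, 0, 0, 0, 0)}
    print(min(hamming(truthful, j) for j in honest),      # 2
          min(hamming(truthful, j) for j in lying))       # 1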
5 COMPUTATIONAL ASPECTS

We have discussed two aggregation rules that reflect desirable egalitarian principles—i.e., the MaxHam and MaxEq rules—and examined whether they give agents incentives to misrepresent their truthful judgments. In this section we consider how complex it is, computationally, to employ these rules, and the complexity of determining whether an agent can manipulate the collective outcome.

The MaxHam rule has been considered from a computational perspective before [40–42]. Here, we extend this analysis to the MaxEq rule, and we compare the two rules with each other on their computational properties. Concretely, we primarily establish some computational complexity results; motivated by these results, we then illustrate how some computational problems related to these rules can be solved using the paradigm of Answer Set Programming.

5.1 Computational Complexity

We investigate some computational complexity aspects of the judgment aggregation rules that we have considered. Due to space constraints, we will only describe the main lines of these results—for full details, we refer to the accompanying Appendix.

Consider the problem of outcome determination (for a rule F). This is most naturally modelled as a search problem, where the input consists of an agenda Φ and a profile P = (J_1, ..., J_n) ∈ J(Φ)^n. The problem is to produce some judgment set J* ∈ F(P). We will show that for the MaxEq rule, this problem can be solved in polynomial time with a logarithmic number of calls to an oracle for NP search problems (where the oracle also produces a witness for yes answers—also called an FNP witness oracle). Said differently, the outcome determination problem for the MaxEq rule lies in the complexity class FP^NP[log,wit]. We also show that the problem is complete for this class (using the standard type of reductions used for search problems: polynomial-time Levin reductions).

Theorem 3. The outcome determination problem for the MaxEq rule is FP^NP[log,wit]-complete under polynomial-time Levin reductions.

Proof (sketch). Membership in FP^NP[log,wit] can be shown by giving a polynomial-time algorithm that solves the problem by querying an FNP witness oracle a logarithmic number of times. The algorithm first finds the minimum value k = min_{J ∈ J(Φ)} max_{J′, J′′ ∈ P} |H(J′, J) − H(J′′, J)| by means of binary search—requiring a logarithmic number of oracle queries. Then, with one additional oracle query, the algorithm can produce some J* ∈ J(Φ) with max_{J′, J′′ ∈ P} |H(J′, J*) − H(J′′, J*)| = k.

To show FP^NP[log,wit]-hardness, we reduce from the problem of finding a satisfying assignment of a (satisfiable) propositional formula ψ that sets a maximum number of variables to true [14, 44]. This reduction works roughly as follows. Firstly, we produce 3CNF formulas ψ_1, ..., ψ_v where each ψ_i is 1-in-3-satisfiable if and only if there exists a satisfying assignment of ψ that sets at least i variables to true. Then, for each i, we transform ψ_i into an agenda Φ_i and a profile P_i such that there is a judgment set with equal Hamming distance to each J ∈ P_i if and only if ψ_i is 1-in-3-satisfiable. Finally, we put the agendas Φ_i and profiles P_i together into a single agenda Φ and a single profile P such that we can—from the outcomes selected by the MaxEq rule—read off the largest i for which ψ_i is 1-in-3-satisfiable, and thus, the maximum number of variables set to true in any truth assignment satisfying ψ. This last step involves duplicating issues in Φ_1, ..., Φ_v different numbers of times, and creating logical dependencies between them. Moreover, we do this in such a way that from any outcome selected by the MaxEq rule, we can reconstruct a truth assignment satisfying ψ that sets a maximum number of variables to true. □

The result of Theorem 3 means that the computational complexity of computing outcomes for the MaxEq rule lies at the Θ^p_2 level of the Polynomial Hierarchy. This is in line with previous results on the computational complexity of the outcome determination problem for the MaxHam rule—De Haan and Slavkovik [41] showed that a decision variant of the outcome determination problem for the MaxHam rule is Θ^p_2-complete.
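The structure of the membership argument—binary search on the optimal inequity value, followed by one final witness query—can be mirrored in a small sketch (ours). Here the FNP witness oracle is simulated by brute-force search over J(Φ), so only the query pattern, not the running time, reflects the proof.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def witness_oracle(domain, profile, k):
        """Simulated oracle: return some J with inequity at most k, if any."""
        for j in domain:
            dists = [hamming(v, j) for v in profile]
            if max(dists) - min(dists) <= k:
                return j
        return None

    def maxeq_outcome(domain, profile, m):
        lo, hi = 0, m                 # the optimal inequity lies in [0, m]
        while lo < hi:                # binary search: O(log m) oracle calls
            mid = (lo + hi) // 2
            if witness_oracle(domain, profile, mid) is not None:
                hi = mid
            else:
                lo = mid + 1
        return witness_oracle(domain, profile, lo)  # one last call for a witness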
Notably, our proof (presented in detail in the Appendix) brings out an intriguing fact about a problem that is at first glance simpler than outcome determination for MaxEq: given an agenda Φ and a profile P, deciding whether the minimum value of max_{i,j ∈ N} |H(J_i, J) − H(J_j, J)| for J ∈ J(Φ)—the value that the MaxEq rule minimizes—is divisible by 4, is Θ^p_2-complete (Proposition 6). Intuitively, merely computing the minimum value that is relevant for MaxEq is Θ^p_2-hard.

Proposition 6. Given an agenda Φ and a profile P, deciding whether the minimal value of max_{J′, J′′ ∈ P} |H(J′, J*) − H(J′′, J*)| for J* ∈ J(Φ) is divisible by 4, is a Θ^p_2-complete problem.

Interestingly, we found that the problem of deciding if there exists a judgment set J* ∈ J(Φ) that has the exact same Hamming distance to each judgment set in the profile is NP-hard, even when the agenda consists of logically independent issues.

Proposition 7. Given an agenda Φ and a profile P, the problem of deciding whether there is some J* ∈ J(Φ) with max_{J′, J′′ ∈ P} |H(J′, J*) − H(J′′, J*)| = 0 is NP-complete. Moreover, NP-hardness holds even for the case where Φ consists of logically independent issues—i.e., the case where J(Φ) = {0, 1}^m for some m.

This is also in line with previous results for the MaxHam rule—De Haan [40] showed that computing outcomes for the MaxHam rule is computationally intractable even when the agenda consists of logically independent issues.

Next, we turn our attention to the problem of strategic manipulation. Specifically, we show that—for the case of decisive preferences over sets of judgment sets—the problem of deciding if an agent i can strategically manipulate is in the complexity class Σ^p_2.^13

^13 Note though that hardness results regarding manipulation of our egalitarian rules remain an open question.

Proposition 8.
Let ⪰ be a preference relation over judgment sets that is polynomial-time computable, and let ⪰̊ be a decisive extension over sets of judgment sets. Then the problem of deciding if a given agent i can strategically manipulate under the MaxEq rule—i.e., given Φ and P, deciding if there exists some P′ =_{-i} P with MaxEq(P′) ≻̊_i MaxEq(P)—is in the complexity class Σ^p_2.

Proof (sketch). To show membership in Σ^p_2 = NP^NP, we describe a nondeterministic polynomial-time algorithm with access to an NP oracle that solves the problem. The algorithm firstly guesses a new judgment set J′_i for agent i in the new profile P′, and guesses a truth assignment witnessing that J′_i is consistent. Then, using the NP oracle, it computes the values k = min_{J ∈ J(Φ)} max_{J′, J′′ ∈ P} |H(J′, J) − H(J′′, J)| and k′ = min_{J ∈ J(Φ)} max_{J′, J′′ ∈ P′} |H(J′, J) − H(J′′, J)|. Finally, it guesses some J, J′ ∈ J(Φ), together with truth assignments witnessing consistency, and it verifies that J′ ≻_i J, that J ∈ MaxEq(P), that J′ ∈ MaxEq(P′), and that {J, J′} ⊄ MaxEq(P) ∩ MaxEq(P′). Since these final checks can all be done in polynomial time—using the previously guessed and computed information—one can verify that this can be implemented by an NP^NP algorithm. □

This Σ^p_2-membership result can straightforwardly be extended to other variants of the manipulation problem (e.g., no-show manipulation and antipodal manipulation) and to other preferences, as well as to the MaxHam rule. Due to space constraints, we omit further details on this. Still, we shall mention that results demonstrating that strategic manipulation is very complex are generally more welcome than analogous ones regarding outcome determination. If manipulation is considered a negative side-effect of the agents' strategic behaviour, knowing that it is hard for the agents to materialise it is good news. In Section 5.2 we will revisit these concerns from a different angle.

5.2 ASP Encoding for the MaxEq Rule

The complexity results in Section 5.1 leave no doubt that applying our egalitarian rules is computationally difficult. Nevertheless, they also indicate that a useful approach for computing outcomes of the MaxEq rule in practice would be to encode this problem into the paradigm of Answer Set Programming (ASP) [36], and to use ASP solving algorithms. ASP offers an expressive automated reasoning framework that typically works well for problems at the Θ^p_2 level of the Polynomial Hierarchy. In this section, we will show how this encoding can be done—similarly to an ASP encoding for the MaxHam rule [42]. Due to space restrictions, we refer to the literature for details on the syntax and semantics of ASP—e.g., [34, 36].

We use the same basic setup that De Haan and Slavkovik [42] use to represent judgment aggregation scenarios—with some simplifications and modifications for the sake of readability. In particular, we use the predicate voter/1 to represent individuals, we use issue/1 to represent issues in the agenda, and we use js/2 to represent judgment sets—both for the individual voters and for a dedicated agent col that represents the outcome of the rule.

With this encoding of judgment aggregation scenarios, one can add further constraints on the predicate js/2 that express which judgment sets are consistent, based on the logical relations between the issues in the agenda Φ—as done by De Haan and Slavkovik [42]. We refer to their work for further details on how this can be done.

Now, we show how to encode the MaxEq rule into ASP, similarly to the encoding of the MaxHam rule by De Haan and Slavkovik [42]. We begin by defining a predicate dist/2 to capture the Hamming distance D between the outcome and the judgment set of an agent A.

    1  dist(A,D) :- voter(A),
         D = #count { X : issue(X), js(col,X), js(A,-X) }.

Then, we define predicates maxdist/1, mindist/1 and inequity/1 that capture the maximum Hamming distance from the outcome to any judgment set in the profile, the minimum such Hamming distance, and the difference between the maximum and minimum (or inequity), respectively.

    2  maxdist(Max) :- Max = #max { D : dist(A,D) }.
    3  mindist(Min) :- Min = #min { D : dist(A,D) }.
    4  inequity(Max-Min) :- maxdist(Max), mindist(Min).

Finally, we add an optimization constraint that states that only outcomes should be selected that minimize the inequity.^14

    5  #minimize { I@30 : inequity(I) }.

^14 The expression "@30" in Line 5 indicates the priority level of this optimization statement (we used the arbitrary value of 30; priority levels are compared lexicographically).

For any answer set program Π that encodes a judgment aggregation setting, combined with Lines 1–5, it then holds that the optimal answer sets are in one-to-one correspondence with the outcomes selected by the MaxEq rule.

Interestingly, we can readily modify this encoding to capture refinements of the MaxEq rule. An example of this is the refinement that selects (among the outcomes of the MaxEq rule) the outcomes that minimize the maximum Hamming distance to any judgment set in the profile. We can encode this example refinement by adding the following optimization statement that works at a lower priority level than the optimization in Line 5.

    6  #minimize { Max@20 : maxdist(Max) }.

5.3 Encoding Strategic Manipulation

We now show how to encode the problem of strategic manipulation into ASP. The value of this section's contribution should be viewed from the perspective of the modeller rather than from that of the agents. That is, even if we do not wish for the agents to be able to easily check whether they can be better off by lying, it may be reasonable, given a profile of judgments, to externally determine whether a certain agent can benefit from being untruthful.

The simplest way to achieve this is with the meta-programming techniques developed by Gebser et al. [35]. Their meta-programming approach allows one to additionally express optimization statements that are based on subset-minimality, and to transform programs with this extended expressivity to standard (disjunctive) answer set programs. We use this to encode the problem of strategic manipulation.

Due to space reasons, we will not spell out the full ASP encoding needed to do so. Instead, we will highlight the main steps, and describe how these fit together. We will use the example of MaxEq, but the exact same approach would work for any other judgment aggregation rule that can be expressed in ASP efficiently using regular (cardinality) optimization constraints—in other words, for all rules for which the outcome determination problem lies at the Θ^p_2 level of the Polynomial Hierarchy. Moreover, we will use the example of a decisive preference ≻̊ over sets of judgment sets that is based on a polynomial-time computable preference ≻ over judgment sets. The approach can be modified to work with other preferences as well.

We begin by guessing a new judgment set J′_i for the individual i that is trying to manipulate—and we assume, w.l.o.g., that i = 1.

    7  voter(prime(1)).
    8  1 { js(prime(1),X), js(prime(1),-X) } 1 :- issue(X).

Then, we express the outcomes of the MaxEq rule, both for the non-manipulated profile P and for the manipulated profile P′, using the dedicated agents col (for P) and prime(col) (for P′). This is done exactly as in the encoding of the problem of outcome determination (so for the case of MaxEq, as described in Section 5.2)—with the difference that optimization is expressed in the right format for the meta-programming method of Gebser et al. [35].

We express the following subset-minimality minimization statement (at a higher priority level than all other optimization constraints used so far). This will ensure that every possible judgment set J′_i will be considered as a subset-minimal solution.

    9   _criteria(40,1,js(prime(1),X)) :- js(prime(1),X).
    10  _optimize(40,1,incl).

To encode whether or not the guessed manipulation was successful, we have to define a predicate successful/0 that is true if and only if (i) J′ ≻_i J and (ii) J and J′ are not both selected as outcomes by the MaxEq rule for both P and P′, where J′ is the outcome encoded by the statements js(prime(col),X) and J is the outcome encoded by the statements js(col,X). Since we assume that ≻_i is computable in polynomial time, and since we can efficiently check using statements in the answer set whether J and J′ are selected by the MaxEq rule for P and P′, we know that we can define the predicate successful/0 correctly and succinctly in our encoding. For space reasons, we omit further details on how to do this.

Then, we express another minimization statement (at a lower priority level than all other optimization statements used so far), that states that we should make successful true whenever possible. Intuitively, we will use this to filter out guessed manipulations that are unsuccessful.

    11  unsuccessful :- not successful.
    12  successful :- not unsuccessful.
    13  _criteria(10,1,unsuccessful) :- unsuccessful.
    14  _optimize(10,1,card).

Finally, we feed the answer set program Π that we constructed so far into the meta-programming method, resulting in a new (disjunctive) answer set program Π′ that uses no optimization statements at all, and whose answer sets correspond exactly to the (lexicographically) optimized answer sets of our program Π. Since the new program Π′ does not use optimization, we can add additional constraints to Π′ to remove some of the answer sets. In particular, we will filter out those answer sets that correspond to an unsuccessful manipulation—i.e., those containing the statement unsuccessful. Effectively, we add the following constraint to Π′:

    15  :- unsuccessful.

As a result, the only answer sets of Π′ that remain correspond exactly to successful manipulations J′_i for agent i.

The meta-programming technique that we use is based on the full disjunctive answer set programming language. For this full language, finding answer sets is a Σ^p_2-complete problem [21]. This is in line with our result of Proposition 8, where we show that the problem of strategic manipulation is in Σ^p_2.

The encoding that we described can straightforwardly be modified for various variants of strategic manipulation (e.g., antipodal manipulation). To make this work, one needs to express additional constraints on the choice of the judgment set J′_i. To adapt the encoding for other preference relations ≻̊, one needs to adapt the definition of successful/0, expressing under what conditions an act of manipulation is successful.

Our encoding using meta-programming is relatively easily understandable, since we do not need to tinker with the encoding of complex optimization constraints in full disjunctive answer set programming ourselves—this we outsource to the meta-programming method. If one were to do this manually, there is more space for tailor-made optimizations, which might lead to a better performance of ASP solving algorithms for the problem of strategic manipulation. It is an interesting topic for future research to investigate this, and possibly to experimentally test the performance of different encodings, when combined with ASP solving algorithms.
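As a complement to the ASP pipeline, a modeller can cross-check small instances against a brute-force reference. The sketch below (ours, with illustrative names) tests Definition 1 directly for MaxEq, using the pessimistic extension from Section 2 in place of a decisive one; the profile is given as a list of judgment tuples.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def maxeq(domain, profile):
        gap = lambda j: (max(hamming(v, j) for v in profile)
                         - min(hamming(v, j) for v in profile))
        best = min(gap(j) for j in domain)
        return {j for j in domain if gap(j) == best}

    def pess_better(truthful, xs, ys):
        """X >pess Y: some J' in Y is strictly beaten by every J in X."""
        return any(all(hamming(truthful, j) < hamming(truthful, jp) for j in xs)
                   for jp in ys)

    def find_manipulation(domain, profile, i):
        """Return an untruthful report improving agent i's outcome, if any."""
        honest = maxeq(domain, profile)
        for lie in domain:
            if lie == profile[i]:
                continue
            outcome = maxeq(domain, profile[:i] + [lie] + profile[i + 1:])
            if pess_better(profile[i], outcome, honest):
                return lie
        return None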
6 CONCLUSION

We have introduced the concept of egalitarianism into the framework of judgment aggregation and have presented how egalitarian and strategyproofness axioms interact in this setting. Importantly, we have shown that the two main interpretations of egalitarianism give rise to rules with differing levels of protection against manipulation. In addition, we have looked into various computational aspects of the egalitarian rules that arise from our axioms, in a twofold manner: first, we have provided worst-case complexity results; second, we have shown how to solve the relevant hard problems using Answer Set Programming.

While we have axiomatised two prominent egalitarian principles, it remains to be seen whether other egalitarian axioms can provide stronger barriers against manipulation. For example, in parallel to majoritarian rules, one could define rules that minimise the distance to some egalitarian ideal. Moreover, as is the case in judgment aggregation, there is an obvious lack of voting rules designed with egalitarian principles in mind. We hope this paper opens the door for similar explorations in voting theory.

A APPENDIX

In this appendix, we will show that the outcome determination problem for the MaxEq rule boils down to a Θ^p_2-complete problem. In particular, we will show that the outcome determination problem for the MaxEq rule, when seen as a search problem, is complete (under polynomial-time Levin reductions) for FP^NP[log,wit]—which is a search variant of Θ^p_2. Then, we show that the problem of finding the minimum difference between two agents' satisfaction, and deciding if this value is divisible by 4, is a Θ^p_2-complete problem. Along the way, we show that deciding if there is a judgment set J* ∈ J(Φ) that has the same Hamming distance to each judgment set in a given profile P is NP-hard, even in the case where the agenda consists of logically independent issues.

We begin, in Section A.1, by recalling some notions from computational complexity theory—in particular, notions related to search problems. Then, in Section A.2, we establish the computational complexity results mentioned above.

A.1 Additional complexity-theoretic preliminaries

We will consider search problems. Let Σ be an alphabet. A search problem is a binary relation R over strings in Σ*. For any input string x ∈ Σ*, we let R(x) = { y ∈ Σ* | (x, y) ∈ R } denote the set of solutions for x. We say that a Turing machine T solves R if on input x ∈ Σ* the following holds: if there exists at least one y such that (x, y) ∈ R, then T accepts x and outputs some y such that (x, y) ∈ R; otherwise, T rejects x. With any search problem R we associate a decision problem S_R, defined by S_R = { x ∈ Σ* | there exists some y ∈ Σ* such that (x, y) ∈ R }. We will use the following notion of reductions for search problems. A polynomial-time Levin reduction from one search problem R_1 to another search problem R_2 is a pair of polynomial-time computable functions (g_1, g_2) such that:

• the function g_1 is a many-one reduction from S_{R_1} to S_{R_2}, i.e., for every x ∈ Σ* it holds that x ∈ S_{R_1} if and only if g_1(x) ∈ S_{R_2}; and
• for every string x ∈ S_{R_1} and every solution y ∈ R_2(g_1(x)), it holds that (x, g_2(x, y)) ∈ R_1.

One could also consider other types of reductions for search problems, such as Cook reductions (an algorithm that solves R_1 by making one or more queries to an oracle that solves the search problem R_2). For more details, we refer to textbooks on the topic—e.g., [37].

We will use complexity classes that are based on Turing machines that have access to an oracle. Let A be a decision problem. A Turing machine T with access to a yes-no A oracle is a Turing machine with a dedicated oracle tape and dedicated states q_oracle, q_yes and q_no. Whenever T is in the state q_oracle, it does not proceed according to the transition relation, but instead it transitions into the state q_yes if the oracle tape contains a string x that is a yes-instance for the problem A, i.e., if x ∈ A, and it transitions into the state q_no if x ∉ A. Similarly, for a search problem R, a Turing machine with access to a witness R oracle has a dedicated oracle tape and dedicated states q_oracle, q_yes and q_no. Whenever T is in the state q_oracle, it transitions into the state q_yes if the oracle tape contains a string x such that there exists some y with (x, y) ∈ R, and in addition the contents of the oracle tape are replaced by (the encoding of) such a y; it transitions into the state q_no if there exists no y such that (x, y) ∈ R. Such transitions are called oracle queries.

We consider the following complexity classes that are based on oracle machines.

• The class P^NP[log] consists of all decision problems that can be decided by a deterministic polynomial-time Turing machine that has access to a yes-no NP oracle, and on any input of length n queries the oracle at most O(log n) many times. This class coincides with the class P^NP_|| (spoken: "parallel access to NP"), and is also known as Θ^p_2. Incidentally, allowing the algorithms access to a witness FNP oracle instead of access to a yes-no NP oracle leads to the same class of problems, i.e., the class P^NP[log,wit] coincides with P^NP[log] (cf. [46, Corollary 6.3.5]).
can be solved by a deterministic polynomial-time Turing machine that has access to a witness FNP oracle, and on A.1 Additional complexity-theoretic any input of length = queries the oracle at most $ (log=) preliminaries many times. In a sense, it is the search variant of PNP[log]. We will consider search problems. Let Σ be an alphabet. A search This complexity class happens to coincide with the class problem is a binary relation ' over strings in Σ∗. For any input FNP//OptP[log], which is defined as the set of all search string G ∈ Σ∗, we let '(G) = { ~ ∈ Σ∗ | (G,~) ∈ ' } denote the problems that are solvable by a nondeterministic polynomial- set of solutions for G. We say that a Turing machine ) solves ' time Turing machine that receives as advice the answer to if on input G ∈ Σ∗ the following holds: if there exists at least one “NP optimization” computation [14, 44]. one ~ such that (G,~) ∈ ', then ) accepts G and outputs some ~ such that (G,~) ∈ '; otherwise, ) rejects G. With any search prob- A.2 Complexity proofs for the MaxEq rule lem ' we associate a decision problem (', defined by (' = { G ∈ We define the search problem of outcome determination for a judg- Σ∗ | there exists some ~ ∈ Σ∗ such that (G,~) ∈ ' }. We will use ment aggregation rule  as follows. The input for this problem con- = the following notion of reductions for search problems. A polynomial- sists of an agenda Φ, a profile P = ( 1,..., =)∈J(Φ) . The prob- ∗ time Levin reduction from one search problem '1 to another search lem is to output some judgment set ∈  (P ). In other words, the Φ ∗ problem '2 is a pair of polynomial-time computable functions (61,62) problem is the relation ' that consists of all pairs (( , P ), ) such = ∗ such that: that P ∈ J(Φ) is a profile and ∈  (P ). We will show that the outcomedetermination problem for the MaxEq rule is completefor • the function 61 is a many-one reduction from ('1 to ('2 , i.e., ∗ the complexity class FPNP[log,wit] under polynomial-time Levin for every G ∈ Σ it holds that G ∈ (' if and only if 61 (G) ∈ 1 reductions. ('2 . ′ In order to do so, we begin with establishing a lemma that will in 2: , all three of :,1, :,2, :,3 contain ¬~9 , ¬~9 . Finally, the judg- NP be useful for the FP [log,wit]-hardness proof. This lemma uses ment set :,1 contains ¬I1,..., ¬I4,I5, the judgment set :,2 con- the notion of 1-in-3-satisfiability. Let k be a propositional logic for- tains ¬I1, ¬I2,I3,I4,I5, and the judgment set :,3 contains I1,I2, ¬I3, ¬I4,I5. mula in 3CNF, i.e., k = 21 ∧···∧ 2<, where each 28 is a clause con- This is illustrated in Figure 3 for the example clause (G1 ∨¬G2 ∨G3). taining exactly three literals. Then k is 1-in-3-satisfiable if there exists a truth assignment U that satisfies exactly one of the three ′ ′ ′ ~1 ~2 ~3 ··· ~1 ~2 ~3 ··· I1 I2 I3 I4 I5 literals in each clause 28 . :,1 1 0 1 ··· 0 1 0 ··· 00001 Lemma A.1. Let k be a 3CNF formula with clauses 21,...,21 that 0 1 0 ··· 1 0 1 ··· 00111 are all of size exactly 3 and with = variables G1,...,G=, such that :,2 (1) no clause ofk contains complementary literals, and (2) there exists :,3 0 1 0 ··· 1 0 1 ··· 11001 ∗ some G ∈ {G1,...,G= } and apartial truth assignment V : {G1,...,G= }\ {G∗}→{0, 1} that satisfies exactly one literal in each clause where G∗ Figure 3: Illustration of the construction of the judgment ∗ ∗ ∗ or ¬G occurs, and satisfies no literal in each clause where G or ¬G sets :,1, :,2, :,3 for the example clause 2: = (G1 ∨¬G2 ∨ G3) occurs. 
A.2 Complexity proofs for the MaxEq rule
We define the search problem of outcome determination for a judgment aggregation rule F as follows. The input for this problem consists of an agenda Φ and a profile P = (J_1, ..., J_n) ∈ J(Φ)^n. The problem is to output some judgment set J* ∈ F(P). In other words, the problem is the relation R that consists of all pairs ((Φ, P), J*) such that P ∈ J(Φ)^n is a profile and J* ∈ F(P). We will show that the outcome determination problem for the MaxEq rule is complete for the complexity class FP^NP[log,wit] under polynomial-time Levin reductions.
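As a baseline, outcome determination for MaxEq can of course be solved by exponential-time brute force. The sketch below is our own illustration, restricted for simplicity to an agenda of m logically independent issues (so that J(Φ) = {0,1}^m) and based on the characterisation of MaxEq used in this appendix: the rule returns the judgment sets minimising the largest gap between the Hamming distances to the judgment sets in the profile.

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two 0/1 tuples of equal length."""
    return sum(x != y for x, y in zip(a, b))

def maxeq_outcomes(profile, m):
    """All minimisers of max_{J',J'' in P} |H(J,J') - H(J,J'')| over
    J in {0,1}^m, i.e., the MaxEq outcomes as characterised here."""
    def spread(j):
        dists = [hamming(j, p) for p in profile]
        return max(dists) - min(dists)
    candidates = list(product((0, 1), repeat=m))
    best = min(spread(j) for j in candidates)
    return [j for j in candidates if spread(j) == best]
```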
In order to do so, we begin with establishing a lemma that will be useful for the FP^NP[log,wit]-hardness proof. This lemma uses the notion of 1-in-3-satisfiability. Let ψ be a propositional logic formula in 3CNF, i.e., ψ = c_1 ∧ ⋯ ∧ c_m, where each c_i is a clause containing exactly three literals. Then ψ is 1-in-3-satisfiable if there exists a truth assignment α that satisfies exactly one of the three literals in each clause c_i.

Lemma A.1. Let ψ be a 3CNF formula with clauses c_1, ..., c_b that are all of size exactly 3 and with n variables x_1, ..., x_n, such that (1) no clause of ψ contains complementary literals, and (2) there exists some x* ∈ {x_1, ..., x_n} and a partial truth assignment β : {x_1, ..., x_n} \ {x*} → {0, 1} that satisfies exactly one literal in each clause where x* or ¬x* does not occur, and satisfies no literal in each clause where x* or ¬x* occurs. We can, in polynomial time given ψ, construct an agenda Φ on m issues such that J(Φ) = {0, 1}^m and a profile P over Φ, such that:
• Φ = { y_i, y'_i | 1 ≤ i ≤ n } ∪ { z_1, ..., z_5 };
• there exists a judgment set J ∈ J(Φ) that has the same Hamming distance to each J' ∈ P if and only if ψ is 1-in-3-satisfiable;
• if ψ is not 1-in-3-satisfiable, then for each judgment set J ∈ J(Φ) it holds that max_{J',J''∈P} |H(J, J') − H(J, J'')| ≥ 2, and there exists some J ∈ J(Φ) such that max_{J',J''∈P} |H(J, J') − H(J, J'')| = 2;
• the above two properties hold also when restricted to judgment sets that contain exactly one of y_i and y'_i for each 1 ≤ i ≤ n, and that contain ¬z_1, ..., ¬z_4, z_5; and
• the number of judgment sets in the profile P only depends on n and b.
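The notion of 1-in-3-satisfiability used in Lemma A.1 can be tested naively as follows (our own snippet; clause encoding as in the earlier sketch).

```python
from itertools import product

def one_in_three_satisfiable(clauses, num_vars):
    """Brute-force 1-in-3-satisfiability: some assignment must satisfy
    exactly one literal in every clause (i encodes x_i, -i its negation)."""
    return any(all(sum(bits[abs(lit) - 1] == (lit > 0) for lit in c) == 1
                   for c in clauses)
               for bits in product([False, True], repeat=num_vars))

# (x_1 or not x_2 or x_3) is 1-in-3-satisfied by, e.g., all-false.
assert one_in_three_satisfiable([[1, -2, 3]], 3)
```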
Proof. We let the agenda Φ consist of the 2n + 5 issues y_1, ..., y_n, y'_1, ..., y'_n, z_1, ..., z_5. It follows directly that J(Φ) = {0, 1}^{2n+5}. Then, we start by constructing 2n judgment sets J_1, ..., J_{2n} over these issues, defined as depicted in Figure 2.

          y_1 y_2 ⋯ y_n   y'_1 y'_2 ⋯ y'_n   z_1 ⋯ z_5
J_1        1   0  ⋯  0     1    0   ⋯  0      0  ⋯  0
J_2        0   1  ⋯  0     0    1   ⋯  0      0  ⋯  0
 ⋮
J_n        0   0  ⋯  1     0    0   ⋯  1      0  ⋯  0
J_{n+1}    0   1  ⋯  1     0    1   ⋯  1      0  ⋯  0
J_{n+2}    1   0  ⋯  1     1    0   ⋯  1      0  ⋯  0
 ⋮
J_{2n}     1   1  ⋯  0     1    1   ⋯  0      0  ⋯  0

Figure 2: Construction of the judgment sets J_1, ..., J_{2n} in the proof of Lemma A.1.
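The Figure 2 construction is mechanical; as a sketch (ours, with issues ordered y_1, ..., y_n, y'_1, ..., y'_n, z_1, ..., z_5):

```python
def figure2_sets(n):
    """J_1..J_n set y_i = y'_i = 1 and all other y/y' issues to 0;
    J_{n+1}..J_{2n} are their complements on the y/y' issues; all 2n
    sets reject z_1..z_5."""
    sets = []
    for i in range(n):
        row = [1 if j == i else 0 for j in range(n)]
        sets.append(tuple(row + row + [0] * 5))
    for i in range(n):
        row = [0 if j == i else 1 for j in range(n)]
        sets.append(tuple(row + row + [0] * 5))
    return sets
```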
Then, for each clause c_k of ψ, we introduce three judgment sets J_{k,1}, J_{k,2}, and J_{k,3} that are defined as follows. The judgment set J_{k,1} contains y_i, ¬y'_i for each positive literal x_i occurring in c_k, and contains y'_i, ¬y_i for each negative literal ¬x_i occurring in c_k. Conversely, the judgment sets J_{k,2}, J_{k,3} contain y'_i, ¬y_i for each positive literal x_i occurring in c_k, and contain y_i, ¬y'_i for each negative literal ¬x_i occurring in c_k. For each variable x_j that does not occur in c_k, all three of J_{k,1}, J_{k,2}, J_{k,3} contain ¬y_j, ¬y'_j. Finally, the judgment set J_{k,1} contains ¬z_1, ..., ¬z_4, z_5, the judgment set J_{k,2} contains ¬z_1, ¬z_2, z_3, z_4, z_5, and the judgment set J_{k,3} contains z_1, z_2, ¬z_3, ¬z_4, z_5. This is illustrated in Figure 3 for the example clause (x_1 ∨ ¬x_2 ∨ x_3).

          y_1 y_2 y_3 ⋯   y'_1 y'_2 y'_3 ⋯   z_1 z_2 z_3 z_4 z_5
J_{k,1}    1   0   1  ⋯    0    1    0   ⋯    0   0   0   0   1
J_{k,2}    0   1   0  ⋯    1    0    1   ⋯    0   0   1   1   1
J_{k,3}    0   1   0  ⋯    1    0    1   ⋯    1   1   0   0   1

Figure 3: Illustration of the construction of the judgment sets J_{k,1}, J_{k,2}, J_{k,3} for the example clause c_k = (x_1 ∨ ¬x_2 ∨ x_3) in the proof of Lemma A.1.

The profile P then consists of the judgment sets J_1, ..., J_{2n}, as well as the judgment sets J_{k,1}, J_{k,2}, J_{k,3} for each 1 ≤ k ≤ b. In order to prove that P has the required properties, we consider the following observations and claims (and prove the claims).
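The clause gadget can likewise be written out; for the example clause the snippet below (ours, same issue order as before) reproduces the three rows of Figure 3.

```python
def clause_sets(clause, n):
    """Build J_{k,1}, J_{k,2}, J_{k,3} for a three-literal clause, given
    as ints (i for x_i, -i for its negation), over the issues
    y_1..y_n, y'_1..y'_n, z_1..z_5."""
    y1, yp1 = [0] * n, [0] * n   # y / y' parts of J_{k,1}
    y2, yp2 = [0] * n, [0] * n   # y / y' parts of J_{k,2} and J_{k,3}
    for lit in clause:
        i = abs(lit) - 1
        if lit > 0:
            y1[i], yp2[i] = 1, 1
        else:
            yp1[i], y2[i] = 1, 1
    return (tuple(y1 + yp1 + [0, 0, 0, 0, 1]),
            tuple(y2 + yp2 + [0, 0, 1, 1, 1]),
            tuple(y2 + yp2 + [1, 1, 0, 0, 1]))

# For c_k = (x_1 or not x_2 or x_3) with n = 3, the first row of Figure 3:
assert clause_sets([1, -2, 3], 3)[0] == (1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1)
```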
Observation 1. If a judgment set J contains exactly one of y_i and y'_i for each i, then the Hamming distance from J to each of J_1, ..., J_{2n}—restricted to the issues y_1, ..., y_n, y'_1, ..., y'_n—is exactly n.

Claim 2. If a judgment set J contains both y_i and y'_i for some i, or neither y_i nor y'_i for some i, then there are at least two judgment sets among J_1, ..., J_{2n} such that the Hamming distance from J to these two judgment sets differs by at least 2.

Proof of Claim 2. We argue that this is the case for y_1 and y'_1, i.e., for the case of i = 1. For other values of i, an entirely similar argument works.

Picking both y_1 and y'_1 to be part of a judgment set J adds to the Hamming distances between J, on the one hand, and J_1, ..., J_{2n}, on the other hand, according to the vector

v_1^+ = (0, 2, ..., 2, 2, 0, ..., 0),

whose initial 0 is followed by n − 1 entries 2, a further entry 2 in position n + 1, and n − 1 entries 0. Picking both ¬y_1 and ¬y'_1 to be part of the set adds to the Hamming distances between J and J_1, ..., J_{2n} according to the vector

v_1^− = (2, 0, ..., 0, 0, 2, ..., 2),

whose initial 2 is followed by n − 1 entries 0, a further entry 0 in position n + 1, and n − 1 entries 2. Picking exactly one of y_1 and y'_1 and exactly one of ¬y_1 and ¬y'_1 to be part of J corresponds to the all-ones vector 1 = (1, ..., 1). More generally, picking both y_i and y'_i to be part of J adds to the Hamming distances according to the vector

v_i^+ = (2, ..., 2, 0, 2, ..., 2, 0, ..., 0, 2, 0, ..., 0),

whose entries in positions i and n + i are 0 and 2, respectively, while all remaining entries in the first half are 2 and in the second half are 0; picking both ¬y_i and ¬y'_i corresponds to the vector

v_i^− = (0, ..., 0, 2, 0, ..., 0, 2, ..., 2, 0, 2, ..., 2),

whose entries in positions i and n + i are 2 and 0, respectively; and picking exactly one of y_i and y'_i and exactly one of ¬y_i and ¬y'_i to be part of J corresponds to the all-ones vector 1 = (1, ..., 1). For each 1 ≤ i ≤ n, the vectors v_1^−, ..., v_n^− and v_i^+ are linearly independent, and the vectors v_1^+, ..., v_n^+ and v_i^− are linearly independent.

Suppose now that we pick J to contain both y_1 and y'_1. Suppose, moreover, to derive a contradiction, that the Hamming distance from J to each of the judgment sets J_1, ..., J_{2n} is the same. This means that there is some way of choosing s_2, ..., s_n ∈ {+, −} such that v_1^− = Σ_{1<i≤n} v_i^{s_i}, which contradicts the fact that v_1^+, ..., v_n^+ and v_1^− are linearly independent—since each v_j^− can be expressed as v_j^− = v_i^+ + v_i^− − v_j^+. Thus, we can conclude that there exist at least two judgment sets among J_1, ..., J_{2n} such that the Hamming distance from J to these two judgment sets differs. Moreover, since all vectors contain only even numbers, and the coefficients in the sum are integers (in fact, either 0 or 1), we know that the difference must be even and thus at least 2.

An entirely similar argument works for the case where we pick J to contain both ¬y_1 and ¬y'_1. ⊣

Observation 3. Let α : {x_1, ..., x_n} → {0, 1} be a truth assignment that satisfies exactly one literal in each clause of ψ. Consider the judgment set J_α = { y_i, ¬y'_i | 1 ≤ i ≤ n, α(x_i) = 1 } ∪ { y'_i, ¬y_i | 1 ≤ i ≤ n, α(x_i) = 0 } ∪ {¬z_1, ..., ¬z_4, z_5}. Then the Hamming distance from J_α to each judgment set in the profile P is exactly n + 1.

Claim 4. Suppose that ψ is not 1-in-3-satisfiable. Then for each judgment set J that contains exactly one of y_i and y'_i for each i, there is some clause c_k of ψ such that the difference in Hamming distance from J to (two of) J_{k,1}, J_{k,2}, J_{k,3} is at least 2.

Proof of Claim 4. Take an arbitrary judgment set J that contains exactly one of y_i and y'_i for each i. This judgment set corresponds to the truth assignment α : {x_1, ..., x_n} → {0, 1} defined such that for each 1 ≤ i ≤ n it is the case that α(x_i) = 1 if y_i ∈ J and α(x_i) = 0 if y_i ∉ J. Since ψ is not 1-in-3-satisfiable, we know that there exists some clause c_ℓ such that α does not satisfy exactly one literal in c_ℓ. We distinguish several cases: either (i) α satisfies no literals in c_ℓ, or (ii) α satisfies two literals in c_ℓ, or (iii) α satisfies three literals in c_ℓ. In each case, the Hamming distances from J, on the one hand, and J_{ℓ,1}, J_{ℓ,2}, J_{ℓ,3}, on the other hand, must differ by at least 2. This can be verified case by case—and we omit a further detailed case-by-case verification of this. ⊣

Claim 5. If ψ is not 1-in-3-satisfiable, then there exists a judgment set J such that max_{J',J''∈P} |H(J, J') − H(J, J'')| = 2.

Proof of Claim 5. Suppose that ψ is not 1-in-3-satisfiable. We know that there exists a variable x* ∈ {x_1, ..., x_n} and a partial truth assignment β : {x_1, ..., x_n} \ {x*} → {0, 1} that satisfies exactly one literal in each clause where x* or ¬x* does not occur, and satisfies no literal in each clause where x* or ¬x* occurs. Without loss of generality, suppose that x* = x_1. Now consider the judgment set J_β = {¬y_1, ¬y'_1} ∪ { y_i, ¬y'_i | 1 < i ≤ n, β(x_i) = 1 } ∪ { y'_i, ¬y_i | 1 < i ≤ n, β(x_i) = 0 } ∪ {¬z_1, ..., ¬z_4, z_5}. One can verify that the Hamming distances from J_β, on the one hand, and J' ∈ P, on the other hand, differ by at most 2—and for some J', J'' ∈ P it holds that |H(J_β, J') − H(J_β, J'')| = 2. ⊣

We now use the above observations and claims to show that Φ and P have the required properties. If ψ is 1-in-3-satisfiable, then by Observation 3 there is some J ∈ J(Φ) that has the same Hamming distance to each J' ∈ P. Suppose, conversely, that ψ is not 1-in-3-satisfiable. Then by Claims 2 and 4, there exist two judgment sets J', J'' ∈ P such that H(J, J') and H(J, J'') differ (by at least 2). Thus, ψ is 1-in-3-satisfiable if and only if there exists a judgment set J ∈ J(Φ) that has the same Hamming distance to each judgment set in the profile P.

Suppose that ψ is not 1-in-3-satisfiable. Then by Claims 2 and 4, we know that for each J ∈ J(Φ) it holds that max_{J',J''∈P} |H(J, J') − H(J, J'')| ≥ 2. Moreover, by Claim 5, there exists a judgment set J ∈ J(Φ) such that max_{J',J''∈P} |H(J, J') − H(J, J'')| = 2.

We already observed that J(Φ) = {0, 1}^m for some m. Moreover, one can straightforwardly verify that the statements after the first two bullet points in the statement of the lemma also hold when restricted to judgment sets that contain exactly one of y_i and y'_i for each 1 ≤ i ≤ n and that contain ¬z_1, ..., ¬z_4, z_5. This concludes the proof of the lemma. □
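Putting the previous snippets together gives a quick mechanical sanity check of Observation 3 (our own test, assuming the definitions of hamming, figure2_sets and clause_sets from the snippets above, on the one-clause formula ψ = (x_1 ∨ ¬x_2 ∨ x_3) with n = 3, for which the all-false assignment satisfies exactly one literal per clause).

```python
n = 3
profile = figure2_sets(n) + list(clause_sets([1, -2, 3], n))

# The all-false assignment induces J_alpha: y-part all 0, y'-part all 1,
# z-part (0, 0, 0, 0, 1); it must be at distance n + 1 from every set.
j_alpha = tuple([0] * n + [1] * n + [0, 0, 0, 0, 1])
assert all(hamming(j_alpha, j) == n + 1 for j in profile)
```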
Now that we have established the lemma, we continue with the FP^NP[log,wit]-completeness proof.

Theorem 3. The problem of outcome determination for the MaxEq rule is FP^NP[log,wit]-complete under polynomial-time Levin reductions.

Proof. To show membership in FP^NP[log,wit], we describe an algorithm with access to a witness FNP oracle that solves the problem in polynomial time by making at most a logarithmic number of oracle queries. This algorithm will use an oracle for the following FNP problem: given some k ∈ ℕ and given the agenda Φ and the profile P, compute a judgment set J ∈ J(Φ) such that max_{J',J''∈P} |H(J, J') − H(J, J'')| ≤ k, if such a J exists, and return "none" otherwise. By using O(log |Φ|) queries to this oracle, one can compute the minimum value k_min of max_{J',J''∈P} |H(J, J') − H(J, J'')|, where the minimum is taken over all judgment sets J ∈ J(Φ). Then, with a final query to the oracle, using k = k_min, one can use the oracle to produce a judgment set J* ∈ MaxEq(P).

We will show FP^NP[log,wit]-hardness by giving a polynomial-time Levin reduction from the FP^NP[log,wit]-complete problem of finding a satisfying assignment for a (satisfiable) propositional formula ψ that sets a maximum number of variables to true (among any satisfying assignment of ψ) [14, 44, 47, 65]. Let ψ be an arbitrary satisfiable propositional logic formula with v variables. Without loss of generality, assume that v is even and that there is a satisfying truth assignment for ψ that sets at least one variable to true. We will construct an agenda Φ and a profile P such that the minimum value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')|, for any J* ∈ J(Φ), is divisible by 4 if and only if the maximum number of variables set to true in any satisfying assignment of ψ is odd. Moreover, we will construct Φ and P in such a way that from any J* ∈ MaxEq(P) we can construct, in polynomial time, a satisfying assignment of ψ that sets a maximum number of variables to true. This—together with the fact that Φ and P can be constructed in polynomial time—suffices to exhibit a polynomial-time Levin reduction, and thus to show FP^NP[log,wit]-hardness. We proceed in several stages (i–iv).

(i) We begin, in the first stage, by constructing a 3CNF formula ψ_i with certain properties, for each 1 ≤ i ≤ v.

Claim 6. We can construct in polynomial time, for each 1 ≤ i ≤ v, a 3CNF formula ψ_i that is 1-in-3-satisfiable if and only if there is a truth assignment that satisfies ψ and that sets at least i variables in ψ to true. Moreover, we construct these formulas ψ_i in such a way that they all contain exactly the same variables x_1, ..., x_n and exactly the same number b of clauses, and such that each formula ψ_i has the properties:
• that it contains no clause with complementary literals, and
• that it contains a variable x* and a partial truth assignment β : {x_1, ..., x_n} \ {x*} → {0, 1} that satisfies exactly one literal in each clause where x* or ¬x* does not occur, and satisfies no literal in each clause where x* or ¬x* occurs.

Proof of Claim 6. Consider the problems of deciding if a given truth assignment α to the variables in ψ satisfies ψ, and deciding if α sets at least i variables to true, for some 1 ≤ i ≤ v. These problems are both polynomial-time solvable. Therefore, by using standard techniques from the proof of the Cook-Levin Theorem [16], we can construct in polynomial time a propositional formula χ in 3CNF containing (among others) the variables t_1, ..., t_v, the variable x† and the variables in ψ, such that any truth assignment to the variables x†, t_1, ..., t_v and the variables in ψ can be extended to a satisfying truth assignment for χ if and only if either (i) it sets x† to true, or (ii) it satisfies ψ and for each 1 ≤ i ≤ v it sets t_i to false if and only if it sets at least i variables among the variables in ψ to true. Then, we can transform this 3CNF formula χ to another 3CNF formula χ' with a similar property—namely, that any truth assignment to the variables x†, t_1, ..., t_v and the variables in ψ can be extended to a truth assignment that satisfies exactly one literal in each clause of χ' if and only if either (i) it sets x† to true, or (ii) it satisfies ψ and for each 1 ≤ i ≤ v it sets t_i to false if and only if it sets at least i variables among the variables in ψ to true. We do so by using the polynomial-time reduction from 3SAT to 1-IN-3-SAT given by Schaefer [62].

Then, for each particular value of i, we add two clauses that intuitively serve to ensure that the variable t_i must be set to false in any 1-in-3-satisfying truth assignment. We add (s_0 ∨ s_1 ∨ t_i) and (¬s_0 ∨ ¬s_1 ∨ t_i), where s_0 and s_1 are fresh variables—the only way to satisfy exactly one literal in both of these clauses is to set exactly one of s_0 and s_1 to true, and to set t_i to false (see the exhaustive check after this proof).

Moreover, we add the clauses (r_0 ∨ r_1 ∨ x†), (¬r_0 ∨ r_2 ∨ ¬x†), (x* ∨ r_3 ∨ r_0), and (¬x* ∨ r_4 ∨ r_0), where r_0, ..., r_4 and x* are fresh variables. These clauses serve to ensure the property that there always exists a partial truth assignment β : {x_1, ..., x_n} \ {x*} → {0, 1} that satisfies exactly one literal in each clause where x* or ¬x* does not occur, and satisfies no literal in each clause where x* or ¬x* occurs. Moreover, these added clauses preserve 1-in-3-satisfiability if and only if there exists a 1-in-3-satisfying truth assignment for the formula without these added clauses that sets x† to true.

Putting all this together, we have constructed a 3CNF formula χ''—consisting of χ' with the addition of the six clauses mentioned in the above two paragraphs—that has the right properties mentioned in the statement of the claim. In particular, χ'' is 1-in-3-satisfiable if and only if there is a truth assignment that satisfies ψ and that sets at least i variables in ψ to true. Moreover, the constructed formula χ'' has the same variables and the same number of clauses, regardless of the value of i chosen in the construction. ⊣
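The role of the two clauses added for t_i can be confirmed exhaustively; in the snippet below (ours, clause encoding as before), every assignment that satisfies exactly one literal in both clauses indeed sets t_i to false.

```python
from itertools import product

clauses = [[1, 2, 3], [-1, -2, 3]]   # 1 = s_0, 2 = s_1, 3 = t_i
good = [bits for bits in product([False, True], repeat=3)
        if all(sum(bits[abs(lit) - 1] == (lit > 0) for lit in c) == 1
               for c in clauses)]
# Exactly the assignments with one of s_0, s_1 true and t_i false:
assert good == [(False, True, False), (True, False, False)]
```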
Since the formulas ψ_i, as described in Claim 6, satisfy the requirements of Lemma A.1, we can construct agendas Φ_1, ..., Φ_v and profiles P_1, ..., P_v such that for each 1 ≤ i ≤ v, the agenda Φ_i and the profile P_i satisfy the conditions mentioned in the statement of Lemma A.1. Moreover, we can construct the agendas Φ_1, ..., Φ_v in such a way that they are pairwise disjoint. For each 1 ≤ i ≤ v, let y_{i,1}, ..., y_{i,n}, y'_{i,1}, ..., y'_{i,n}, z_{i,1}, ..., z_{i,5} denote the issues in Φ_i, and let P_i = (J_{i,1}, ..., J_{i,u}).

(ii) Then, in the second stage, we will use the agendas Φ_1, ..., Φ_v and the profiles P_1, ..., P_v to construct a single agenda Φ and a single profile P.

We let Φ contain the issues y_{i,j}, y'_{i,j} and z_{i,1}, ..., z_{i,5}, for each 1 ≤ i ≤ v and each 1 ≤ j ≤ n, as well as issues w_{i,ℓ,k} for each 1 ≤ i ≤ v, each 1 ≤ ℓ ≤ u and each 1 ≤ k ≤ n. We let P contain judgment sets J'_{i,ℓ} for each 1 ≤ i ≤ v and each 1 ≤ ℓ ≤ u, which we will define below. Intuitively, for each i, the sets J'_{i,ℓ} will contain the judgment sets J_{i,1}, ..., J_{i,u} from the profile P_i.

Take an arbitrary 1 ≤ i ≤ v and an arbitrary 1 ≤ ℓ ≤ u. We let J'_{i,ℓ} agree with J_{i,ℓ} (from P_i) on all issues from Φ_i—i.e., the issues y_{i,j}, y'_{i,j} for each 1 ≤ j ≤ n and z_{i,1}, ..., z_{i,5}. On all issues φ from each Φ_{i'}, for 1 ≤ i' ≤ v with i' ≠ i, we let J'_{i,ℓ}(φ) = 0. Then, we let J'_{i,ℓ}(w_{i',ℓ',k}) = 1 if and only if i' = i and ℓ' = ℓ. In other words, J'_{i,ℓ} agrees with J_{i,ℓ} on the issues from Φ_i, it sets every issue from each other Φ_{i'} to false, it sets all the issues w_{i,ℓ,k} to true, and it sets all other issues w_{i',ℓ',k} to false.

(iii) In the third stage, we will replace the logically independent issues in Φ by other issues, in order to place restrictions on the different judgment sets that are allowed.

We start by describing a constraint—in the form of a propositional logic formula Γ on the original (logically independent) issues in Φ—and then we describe how this constraint can be used to produce replacement formulas for the issues in Φ. We define Γ = Γ_1 ∨ Γ_2 by:

Γ_1 = ⋁_{1≤i≤v, 1≤ℓ≤u} ( ⋀_{1≤k≤n} w_{i,ℓ,k} ∧ ⋀_{1≤i'≤v, 1≤ℓ'≤u, i≠i' or ℓ≠ℓ'} ⋀_{1≤k≤n} ¬w_{i',ℓ',k} )

Γ_2 = ( ⋀_{1≤i≤v, 1≤ℓ≤u, 1≤k≤n} ¬w_{i,ℓ,k} ) ∧ ( ⋀_{1≤i≤v, 1≤j≤n} (y_{i,j} ↔ ¬y'_{i,j}) )

In other words, Γ requires that either (1) for some i, ℓ all issues w_{i,ℓ,k} are set to true, and all other issues w_{i',ℓ',k} are set to false, or (2) all issues w_{i,ℓ,k} are set to false and for each i, j exactly one of y_{i,j} and y'_{i,j} is set to true.

Because we can in polynomial time compute a satisfying truth assignment for Γ, we know that we can also in polynomial time compute a replacement φ' for each φ ∈ Φ, resulting in an agenda Φ', so that the logically consistent judgment sets J ∈ J(Φ') correspond exactly to the judgment sets J ∈ J(Φ) that satisfy the constraint Γ [27, Proposition 3]. In the remainder of this proof, we will use Φ (with the restriction that judgment sets should satisfy Γ) interchangeably with Φ'.

(iv) Finally, in the fourth stage, we will duplicate some issues φ ∈ Φ a certain number of times, by adding semantically equivalent (yet syntactically different) issues to the agenda Φ, and by updating the judgment sets in the profile P accordingly.

For each 1 ≤ i ≤ v, we will define a number c_i and will make sure that there are c_i (syntactically different, logically equivalent) copies of each issue in Φ that originated from the agenda Φ_i. In other words, we duplicate each agenda Φ_i a certain number of times, by adding c_i − 1 copies of each issue that originated from Φ_i. For each 1 ≤ i ≤ v, we let c_i = (v − i).

This concludes the description of our reduction—i.e., of the agenda Φ and the profile P. What remains is to show that the minimum value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')|, for any J* ∈ J(Φ), is divisible by 4 if and only if the maximum number of variables set to true in any satisfying assignment of ψ is odd. To do so, we begin with stating and proving the following claims.

Claim 7. For any judgment set J* ∈ J(Φ) that satisfies Γ_1, the value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')| is at least 2n.

Proof of Claim 7. Take some J* that satisfies Γ_1. Then there must exist some i, ℓ such that J* sets all w_{i,ℓ,k} to true and all other w_{i',ℓ',k} to false. Therefore, |H(J*, J'_{i,ℓ}) − H(J*, J'_{i',ℓ'})| is at least 2n, for any i', ℓ' such that (i, ℓ) ≠ (i', ℓ'). ⊣

Claim 8. For any judgment set J* ∈ J(Φ) that satisfies Γ_2, the value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')| is strictly less than 2v.

Proof of Claim 8. Take some J* that satisfies Γ_2. Then J* sets each w_{i,ℓ,k} to false, and for each 1 ≤ i ≤ v the judgment set contains ¬z_{i,1}, ..., ¬z_{i,4}, z_{i,5} and contains exactly one of y_{i,j} and y'_{i,j} for each 1 ≤ j ≤ n. Moreover, without loss of generality, we may assume that J*, for each 1 ≤ i ≤ v, assigns truth values to the issues originating from Φ_i in a way that corresponds either (a) to a truth assignment to the variables in ψ_i witnessing that ψ_i is 1-in-3-satisfiable, or (b) to a partial truth assignment β : var(ψ_i) \ {x_1} → {0, 1} that satisfies exactly one literal in each clause where x_1 or ¬x_1 does not occur, and satisfies no literal in each clause where x_1 or ¬x_1 occurs. If this were not the case, we could consider another J* instead that does satisfy these properties and that has a value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')| that is at least as small. Then, by the construction of P—and the profiles P_i used in this construction—for each i, ℓ it holds that H(J*, J'_{i,ℓ}) = Σ_{1≤i'≤v} c_{i'} (n + 1) + n ± d_i, where d_i = 0 if ψ_i is 1-in-3-satisfiable and d_i = (v − i) otherwise. From this it follows that the value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')| is strictly less than 2v, since ψ_1 is 1-in-3-satisfiable. ⊣

By Claims 7 and 8, and because we may assume without loss of generality that n ≥ v, we know that any judgment set J* ∈ J(Φ) that minimizes max_{J',J''∈P} |H(J*, J') − H(J*, J'')| must satisfy Γ_2. Moreover, by a straightforward modification of the arguments in the proof of Claim 8, we know that the minimal value, over judgment sets J* ∈ J(Φ), of max_{J',J''∈P} |H(J*, J') − H(J*, J'')| is 2(v − i) for the smallest value of i such that ψ_i is not 1-in-3-satisfiable, which coincides with 2(v + 1 − i) for the largest value of i such that ψ_i is 1-in-3-satisfiable. Since v is even (and thus v + 1 is odd), we know that 2(v + 1 − i) is divisible by 4 if and only if i is odd. Therefore, the minimal value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')| is divisible by 4 if and only if the maximum number of variables set to true in any satisfying assignment of ψ is odd.

Moreover, it is straightforward to show that from any J* ∈ MaxEq(P) we can construct, in polynomial time, a satisfying assignment of ψ that sets a maximum number of variables to true.

This concludes our description and analysis of the polynomial-time Levin reduction, and thus of our hardness proof. □
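The parity step above is elementary but easy to get wrong, so here is a direct check (our own one-liner): for even v, 2(v + 1 − i) is divisible by 4 exactly when i is odd.

```python
# v + 1 is odd for even v, so v + 1 - i is even iff i is odd.
v = 8  # any even number of variables
assert all((2 * (v + 1 - i) % 4 == 0) == (i % 2 == 1)
           for i in range(1, v + 1))
```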
Proposition 5. Given an agenda Φ and a profile P, deciding whether the minimal value of max_{J',J''∈P} |H(J*, J') − H(J*, J'')|, for J* ∈ J(Φ), is divisible by 4 is a Θ₂ᵖ-complete problem.

Proof. This follows from the proof of Theorem 3. The proof of membership in FP^NP[log,wit] for the outcome determination problem for the MaxEq rule can directly be used to show membership in Θ₂ᵖ for this problem. Moreover, the polynomial-time Levin reduction used in the FP^NP[log,wit]-hardness proof can be seen as a polynomial-time (many-to-one) reduction from the Θ₂ᵖ-complete problem of deciding if the maximum number of variables set to true in any satisfying assignment of a (satisfiable) propositional formula ψ is odd [47, 65]. □

Proposition 6. Given an agenda Φ and a profile P, the problem of deciding whether there is some J* ∈ J(Φ) with max_{J',J''∈P} |H(J*, J') − H(J*, J'')| = 0 is NP-complete. Moreover, NP-hardness holds even for the case where Φ consists of logically independent issues—i.e., the case where J(Φ) = {0, 1}^m for some m.

Proof (sketch). The problem of deciding if there exists a truth assignment that satisfies a given 3CNF formula ψ and that sets at least, say, 2 variables among ψ to true is an NP-complete problem. Then, by combining Claim 6 in the proof of Theorem 3 with Lemma A.1, we directly get that the problem of deciding whether there is some J* ∈ J(Φ) with max_{J',J''∈P} |H(J*, J') − H(J*, J'')| = 0, for a given agenda Φ and a given profile P, is NP-hard, even for the case where Φ consists of logically independent issues. Membership in NP follows from the fact that one can guess a set J* (together with a truth assignment witnessing that it is consistent) in polynomial time, after which checking whether J* ∈ J(Φ)—i.e., whether it is indeed consistent—and checking whether max_{J',J''∈P} |H(J*, J') − H(J*, J'')| = 0 can then be done in polynomial time. □
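The equidistance property at the heart of Proposition 6 is easy to state as a brute-force verifier (our own sketch; it simulates the NP guess by enumeration over an agenda of m logically independent issues).

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def equidistant_set_exists(profile, m):
    """Is there some J* in {0,1}^m at equal Hamming distance from every
    judgment set in the profile?"""
    return any(len({hamming(j, p) for p in profile}) == 1
               for j in product((0, 1), repeat=m))
```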
REFERENCES
[1] Georgios Amanatidis, Georgios Birmpas, and Evangelos Markakis. 2016. On Truthful Mechanisms for Maximin Share Allocations. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI).
[2] Haris Aziz, Markus Brill, Vincent Conitzer, Edith Elkind, Rupert Freeman, and Toby Walsh. 2017. Justified Representation in Approval-based Committee Voting. Social Choice and Welfare 48, 2 (2017), 461–485.
[3] Salvador Barberà, Walter Bossert, and Prasanta K. Pattanaik. 2004. Ranking Sets of Objects. In Handbook of Utility Theory. Springer, 893–977.
[4] Seth D. Baum. 2017. Social Choice Ethics in Artificial Intelligence. AI & Society (2017), 1–12.
[5] Dorothea Baumeister, Gábor Erdélyi, Olivia Johanna Erdélyi, and Jörg Rothe. 2013. Computational Aspects of Manipulation and Control in Judgment Aggregation. In Proceedings of the 3rd International Conference on Algorithmic Decision Theory (ADT).
[6] Dorothea Baumeister, Gábor Erdélyi, Olivia J. Erdélyi, and Jörg Rothe. 2015. Complexity of Manipulation and Bribery in Judgment Aggregation for Uniform Premise-Based Quota Rules. Mathematical Social Sciences 76 (2015), 19–30.
[7] Dorothea Baumeister, Jörg Rothe, and Ann-Kathrin Selker. 2017. Strategic Behavior in Judgment Aggregation. In Trends in Computational Social Choice. Lulu.com, 145–168.
[8] Sirin Botan and Ulle Endriss. 2020. Majority-Strategyproofness in Judgment Aggregation. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[9] Steven J. Brams, Michael A. Jones, and Christian Klamler. 2008. Proportional Pie-Cutting. International Journal of Game Theory 36, 3-4 (2008), 353–367.
[10] Steven J. Brams, D. Marc Kilgour, and M. Remzi Sanver. 2007. A Minimax Procedure for Electing Committees. Public Choice 132, 3 (2007), 401–420.
[11] Eric Budish. 2011. The Combinatorial Assignment Problem: Approximate Competitive Equilibrium from Equal Incomes. Journal of Political Economy 119, 6 (2011), 1061–1103.
[12] Ioannis Caragiannis, Dimitris Kalaitzis, and Evangelos Markakis. 2010. Approximation Algorithms and Mechanism Design for Minimax Approval Voting. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI).
[13] Yiling Chen, John K. Lai, David C. Parkes, and Ariel D. Procaccia. 2013. Truth, Justice, and Cake Cutting. Games and Economic Behavior 77, 1 (2013), 284–297.
[14] Zhi-Zhong Chen and Seinosuke Toda. 1995. The Complexity of Selecting Maximal Solutions. Information and Computation 119, 2 (1995), 231–239.
[15] Vincent Conitzer, Walter Sinnott-Armstrong, Jana Schaich Borg, Yuan Deng, and Max Kramer. 2017. Moral Decision Making Frameworks for Artificial Intelligence. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI).
[16] Stephen A. Cook. 1971. The Complexity of Theorem-Proving Procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing. Shaker Heights, Ohio, 151–158.
[17] H. Dalton. 1920. The Measurement of the Inequality of Incomes. Economic Journal 30, 119 (1920), 348–461.
[18] Franz Dietrich and Christian List. 2007. Strategy-proof Judgment Aggregation. Economics & Philosophy 23, 3 (2007), 269–300.
[19] John Duggan and Thomas Schwartz. 2000. Strategic Manipulability without Resoluteness or Shared Beliefs: Gibbard-Satterthwaite Generalized. Social Choice and Welfare 17, 1 (2000), 85–93.
[20] Michael Dummett. 1984. Voting Procedures. Oxford University Press.
[21] Thomas Eiter and Georg Gottlob. 1995. On the Computational Cost of Disjunctive Logic Programming: Propositional Case. Annals of Mathematics and Artificial Intelligence 15, 3-4 (1995), 289–323.
[22] Edith Elkind, Piotr Faliszewski, Piotr Skowron, and Arkadii Slinko. 2017. Properties of Multiwinner Voting Rules. Social Choice and Welfare 48, 3 (2017), 599–632.
[23] Ulle Endriss. 2016. Judgment Aggregation. In Handbook of Computational Social Choice, F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia (Eds.). Cambridge University Press.
[24] Ulle Endriss. 2016. Judgment Aggregation. In Handbook of Computational Social Choice, F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia (Eds.). Cambridge University Press.
[25] Ulle Endriss and Ronald de Haan. 2015. Complexity of the Winner Determination Problem in Judgment Aggregation: Kemeny, Slater, Tideman, Young. In Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[26] Ulle Endriss, Ronald de Haan, Jérôme Lang, and Marija Slavkovik. 2020. The Complexity Landscape of Outcome Determination in Judgment Aggregation. Journal of Artificial Intelligence Research (2020).
[27] Ulle Endriss, Umberto Grandi, Ronald de Haan, and Jérôme Lang. 2016. Succinctness of Languages for Judgment Aggregation. In Proceedings of the 15th International Conference on the Principles of Knowledge Representation and Reasoning (KR).
[28] Ulle Endriss, Umberto Grandi, and Daniele Porello. 2012. Complexity of Judgment Aggregation. Journal of Artificial Intelligence Research 45 (2012), 481–514.
[29] Patricia Everaere, Sébastien Konieczny, and Pierre Marquis. 2014. On Egalitarian Belief Merging. In Proceedings of the 14th International Conference on the Principles of Knowledge Representation and Reasoning (KR).
[30] Patricia Everaere, Sébastien Konieczny, and Pierre Marquis. 2017. Belief Merging and its Links with Judgment Aggregation. In Trends in Computational Social Choice, Ulle Endriss (Ed.). AI Access Foundation, 123–143.
[31] Peter C. Fishburn and Steven J. Brams. 1983. Paradoxes of Preferential Voting. Mathematics Magazine 56, 4 (1983), 207–214.
[32] Duncan K. Foley. 1967. Resource Allocation and the Public Sector. Yale Economic Essays 7, 1 (1967), 45–98.
[33] Peter Gärdenfors. 1976. Manipulation of Social Choice Functions. Journal of Economic Theory 13, 2 (1976), 217–228.
[34] Martin Gebser, Roland Kaminski, Benjamin Kaufmann, and Torsten Schaub. 2012. Answer Set Solving in Practice. Morgan & Claypool Publishers.
[35] Martin Gebser, Roland Kaminski, and Torsten Schaub. 2011. Complex Optimization in Answer Set Programming. Theory and Practice of Logic Programming 11, 4-5 (2011), 821–839.
[36] Michael Gelfond. 2006. Answer Sets. In Handbook of Knowledge Representation, Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter (Eds.). Elsevier.
[37] Oded Goldreich. 2010. P, NP, and NP-Completeness: The Basics of Complexity Theory. Cambridge University Press.
[38] Umberto Grandi and Ulle Endriss. 2010. Lifting Rationality Assumptions in Binary Aggregation. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI).
[39] Davide Grossi and Gabriella Pigozzi. 2014. Judgment Aggregation: A Primer. Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 8. Morgan & Claypool Publishers, 1–151.
[40] Ronald de Haan. 2018. Hunting for Tractable Languages for Judgment Aggregation. In Proceedings of the 16th International Conference on the Principles of Knowledge Representation and Reasoning (KR).
[41] Ronald de Haan and Marija Slavkovik. 2017. Complexity Results for Aggregating Judgments using Scoring or Distance-Based Procedures. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[42] Ronald de Haan and Marija Slavkovik. 2019. Answer Set Programming for Judgment Aggregation. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).
[43] Jerry S. Kelly. 1977. Strategy-proofness and Social Choice Functions without Singlevaluedness. Econometrica 45, 2 (1977), 439–446.
[44] Johannes Köbler and Thomas Thierauf. 1990. Complexity Classes with Advice. In Proceedings of the 5th Annual Structure in Complexity Theory Conference.
[45] Sébastien Konieczny and Ramón Pino Pérez. 2011. Logic Based Merging. Journal of Philosophical Logic 40, 2 (2011), 239–270.
[46] Jan Krajíček. 1995. Bounded Arithmetic, Propositional Logic and Complexity Theory. Cambridge University Press.
[47] Mark W. Krentel. 1988. The Complexity of Optimization Problems. J. Comput. System Sci. 36, 3 (1988), 490–509.
[48] Justin Kruger and Zoi Terzopoulou. 2020. Strategic Manipulation with Incomplete Preferences: Possibilities and Impossibilities for Positional Scoring Rules. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[49] Martin Lackner and Piotr Skowron. 2018. Approval-Based Multi-Winner Rules and Strategic Voting. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI).
[50] Jérôme Lang, Gabriella Pigozzi, Marija Slavkovik, and Leendert van der Torre. 2011. Judgment Aggregation Rules Based on Minimization. In Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge (TARK).
[51] Jérôme Lang and Marija Slavkovik. 2014. How Hard is it to Compute Majority-Preserving Judgment Aggregation Rules? In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI).
[52] Christian List and Philip Pettit. 2002. Aggregating Sets of Judgments: An Impossibility Result. Economics and Philosophy 18, 1 (2002), 89–110.
[53] Michael K. Miller and Daniel Osherson. 2009. Methods for Distance-based Judgment Aggregation. Social Choice and Welfare 32, 4 (2009), 575–601.
[54] Elchanan Mossel and Omer Tamuz. 2010. Truthful Fair Division. In Proceedings of the 3rd International Symposium on Algorithmic Game Theory (SAGT).
[55] Hervé Moulin. 1988. Axioms of Cooperative Decision Making. Econometric Society Monographs, Vol. 15. Cambridge University Press.
[56] Klaus Nehring, Marcus Pivato, and Clemens Puppe. 2014. The Condorcet Set: Majority Voting over Interconnected Propositions. Journal of Economic Theory 151 (2014), 268–303.
[57] Ritesh Noothigattu, Snehalkumar S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel D. Procaccia. 2018. A Voting-Based System for Ethical Decision Making. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
[58] Dominik Peters. 2018. Proportionality and Strategyproofness in Multiwinner Elections. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[59] Gabriella Pigozzi. 2006. Belief Merging and the Discursive Dilemma: An Argument-based Account to Paradoxes of Judgment Aggregation. Synthese 152, 2 (2006), 285–298.
[60] John Rawls. 1971. A Theory of Justice. Belknap Press.
[61] M. Remzi Sanver and William S. Zwicker. 2009. One-way Monotonicity as a Form of Strategy-proofness. International Journal of Game Theory 38, 4 (2009), 553–574.
[62] Thomas J. Schaefer. 1978. The Complexity of Satisfiability Problems. In Conference Record of the 10th Annual ACM Symposium on Theory of Computing. ACM.
[63] Amartya Sen. 1997. Choice, Welfare and Measurement. Harvard University Press.
[64] Zoi Terzopoulou and Ulle Endriss. 2018. Modelling Iterative Judgment Aggregation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
[65] Klaus W. Wagner. 1990. Bounded Query Classes. SIAM J. Comput. 19, 5 (1990), 833–846.
[66] William S. Zwicker. 2016. Introduction to the Theory of Voting. In Handbook of Computational Social Choice, F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia (Eds.). Cambridge University Press.