Dynamic Volunteer's Dilemmas, Unique Bid Auctions, and Discrete Bottleneck Games: Theory and Experiments

DYNAMIC VOLUNTEER’S DILEMMAS, UNIQUE BID AUCTIONS, AND DISCRETE BOTTLENECK GAMES: THEORY AND EXPERIMENTS

by

Hironori Otsubo

______Copyright © Hironori Otsubo 2008

A Dissertation Submitted to the Faculty of the

DEPARTMENT OF ECONOMICS

In Partial Fulfillment of the Requirements For the Degree of

DOCTOR OF PHILOSOPHY

In the Graduate College

THE UNIVERSITY OF ARIZONA

2008

THE UNIVERSITY OF ARIZONA GRADUATE COLLEGE

As members of the Dissertation Committee, we certify that we have read the dissertation prepared by Hironori Otsubo entitled Dynamic Volunteer’s Dilemmas, Unique Bid Auctions, and Discrete Bottleneck Games: Theory and Experiments and recommend that it be accepted as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy

______Date:6/23/08 Amnon Rapoport

______Date:6/23/08 Martin Dufwenberg

______Date:6/23/08 John Wooders

Final approval and acceptance of this dissertation is contingent upon the candidate’s submission of the final copies of the dissertation to the Graduate College.

I hereby certify that I have read this dissertation prepared under my direction and recommend that it be accepted as fulfilling the dissertation requirement.

______Date:6/23/08 Dissertation Director: Amnon Rapoport


STATEMENT BY AUTHOR

This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.

Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgement of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the copyright holder.

SIGNED: Hironori Otsubo

ACKNOWLEDGEMENTS

In reflecting on all the people whom I wish to thank, I realize just how fortunate I am to have them shaping my life. I could not have come this far without their generous support. First and foremost, I express my deepest appreciation to my dissertation advisor, who is the co-author of three essays extracted from my dissertation. With his guidance, patience, genuine caring and concern, and unconditional trust in me, Amnon Rapoport has served not only as a mentor but also as a friend over the past four years. His office door has always been open to me not only for discussing new ideas but also for sharing personal experiences. My education would have been seriously incomplete without his unwavering support. I wish to repay my debt of gratitude to him by becoming a full-fledged experimental economist in the future.

I am also deeply indebted to the two other dissertation committee members, namely Martin Dufwenberg and John Wooders, for their excellent guidance and invaluable inputs into earlier drafts of my dissertation. Martin has exhibited a boundless enthusiasm as a teacher, researcher, and dissertation committee member. John’s first-year course has lured me in the direction of my current research interests. Conversations with Martin and John have always been beneficial and enjoyable. My special gratitude goes to William E. Stein and Bora Kim for their assistance as co-authors of the second essay in my dissertation, Eyran Gisches for software development, and Maya Rosenblatt for her outstanding assistance in data collection.

It has been a great privilege to spend the past six years at the Department of Economics. Its faculty members, administrative staff, and graduate students have always been supportive and caring. I’d like to express my sincere gratitude to all of them. I have been blessed to cross paths with many dedicated teachers throughout my life. In particular, I am sincerely grateful to Ichiro Takahashi, professor at Soka University in Japan, who is greatly responsible for my determined decision to pursue a Ph.D. in Economics in the United States. No matter how serious my situation was, he has always remained my biggest supporter. I cannot thank him enough.

I gratefully acknowledge the financial support for the research projects examined in my dissertation: the project in Chapter 1 was financially supported by the Economic Science Laboratory at the University of Arizona, and the other two projects in Chapters 2 and 3 by a contract F49620-03-1-0377 from the AFOSR/MURI to the University of Arizona. Finally, and most importantly, I would like to express my sincere appreciation and thanks to my parents, Kazue and Hiroyuki, for their unfailing support. Their unconditional love and boundless devotion to me were the reason why I have never given up.

DEDICATION

To my lifelong mentor, Dr. Daisaku Ikeda.


TABLE OF CONTENTS

ABSTRACT ...... 8

INTRODUCTION ...... 10

CHAPTER 1: DYNAMIC VOLUNTEER’S DILEMMAS ...... 16
1.1 Introduction ...... 16
1.2 Dynamic Volunteer’s Dilemma Game ...... 19
1.2.1 Model ...... 19
1.2.2 Equilibrium Analysis ...... 21
1.3 Research Questions and Experimental Design ...... 23
1.3.1 Research Questions ...... 23
1.3.2 Experimental Design ...... 25
1.4 Results ...... 28
1.4.1 Four Major Findings ...... 28
1.4.2 Other Findings ...... 34
1.5 Conclusion ...... 35

CHAPTER 2: UNIQUE BID AUCTIONS ...... 39
2.1 Introduction ...... 39
2.2 Previous Literature ...... 41
2.3 Equilibrium Solutions ...... 44
2.3.1 LUBA and HUBA ...... 45
2.3.2 Asymmetric Pure-strategy Equilibria ...... 45
2.3.3 Symmetric Mixed-strategy Equilibrium ...... 46
2.4 Alternative Implementations ...... 51
2.5 Experimental Design ...... 53
2.6 Results ...... 55
2.6.1 Aggregate/Group Level Results ...... 55
2.6.2 Individual Level Results ...... 57
2.6.3 Discussion ...... 64
2.7 Conclusion ...... 65

CHAPTER 3: DISCRETE BOTTLENECK GAMES ...... 68
3.1 Introduction ...... 68
3.2 Vickrey’s Continuous Bottleneck Model ...... 72
3.3 Review of Previous Literature ...... 75
3.4 Discrete Bottleneck Game ...... 77
3.4.1 Model ...... 77
3.4.2 Computational Procedure ...... 79
3.4.3 Comparison with Ziegelmeyer et al. ...... 89
3.5 Comparison with Vickrey’s Continuous Model ...... 91
3.5.1 Changing Service Capacity ...... 92
3.5.2 Changing the Number of Players ...... 93
3.6 Extensions ...... 94
3.6.1 Random Number of Players ...... 94
3.6.2 Augmented Strategy Space ...... 97
3.7 Conclusion ...... 100

APPENDIX A: FIGURES ...... 103

APPENDIX B: TABLES ...... 132

APPENDIX C: INSTRUCTIONS ...... 152

REFERENCES ...... 161

ABSTRACT

The main theme of my dissertation is the analysis of several interactive decision making situations with multiple decision makers whose interests do not fully coincide.

Non-cooperative game theory is invoked to carry out this analysis.

The first chapter describes an experimental study of volunteer’s dilemmas that evolve over time. Only a single volunteer is required for the public good to be provided.

Because volunteering is costly, each player prefers that some other player bear the full cost of volunteering. Reflecting on the observation that in many naturally occurring social dilemmas it is beneficial to volunteer earlier rather than later, I assume that the payoff to the volunteer and the (higher) payoff to each of the non-volunteers decrease monotonically over time. I derive symmetric and asymmetric subgame perfect equilibria. The experimental results provide little support for asymmetric equilibria in which only a single subject volunteers immediately. In comparison to the symmetric subgame perfect equilibrium, they show that subjects volunteer, on average, earlier than predicted.

The second chapter explores a new type of online auction, called the unique bid auction, that has recently emerged on the Internet and gained widespread popularity in

many countries. In a sharp contrast to traditional auctions, the winner in this class of

auctions is the bidder who submits the lowest (highest) unique bid; all ties are discarded.

I propose an algorithm to numerically compute the symmetric mixed-strategy Nash

equilibrium solution and then conduct a series of experiments to assess the predictive

power of the equilibrium solution. The experimental results show that the solution accounts quite well for the subjects’ behavior on the aggregate level, but not on the individual level.

The last chapter proposes a discrete version of William Vickrey’s model of traffic congestion on a single road with a single bottleneck. In my model, both the strategy space and number of commuters are finite. An algorithm similar to the one used in the second chapter is proposed to numerically calculate the symmetric mixed-strategy Nash equilibrium. The discrete model is then compared with the original continuous model of

Vickrey in terms of the equilibrium solution and its implications.

INTRODUCTION

“... although I believe that in the history of science it is always the theory

and not the experiment, always the idea and not the observation, which

opens up the way to new knowledge, I also believe that it is always the

experiment which saves us from following a track that leads nowhere:

which helps us out of the rut, and which challenges us to find a new way.”

Karl R. Popper, The Logic of Scientific Discovery, 1959

In reflecting on a widespread use of game theory, an important question is how well the theory accounts for the actual behavior of decision makers in disparate interactive situations. Game theoretic models require very sophisticated and restrictive rationality assumptions: each agent forms beliefs (or expectations) of what the others might do (i.e., strategic thinking), and given these beliefs each agent chooses an optimal decision from her possible alternatives (i.e., optimization). It is now widely accepted that not every agent behaves rationally in complex contexts due to limited cognitive capabilities; either or both of these assumptions may be violated. The question is of great importance since the applications of game theory are clearly based on the implicit assumption that the theory has predictive value. There is little interest in a game theory, however elegant it may be, that does not have positive predictive power (Davis and Holt,

1993).

My attempt in this dissertation is to answer this question with the help of experimental methods. My motivation for this approach is by no means to refute the power of game theory. Rather, I wish to improve it by identifying systematic behavioral regularities (e.g., deviations from equilibrium), if any, from which we may glean insights for the future development of an alternative theory that better accounts for the actual behavior of decision makers. The interplay between game theory and experiments serves not only to throw light on the predictive power of equilibrium solutions, but also to establish behavioral regularities, which, in turn, will be distilled into new theory.

The theories that this dissertation subjects to experimental testing come from non-cooperative game theory (Nash, 1951). My dissertation consists of three independent topics, each of which falls within this framework: Dynamic Volunteer’s Dilemmas

(Chapter 1), Unique Bid Auctions (Chapter 2), and Discrete Bottleneck Games (Chapter 3).

Common to all three topics is the construction of equilibrium solutions. The first two chapters also include data collected in laboratory experiments to assess the predictive power of these equilibrium solutions.

Chapter 1 focuses on a model called the dynamic volunteers’ dilemma game, which is formulated as a non-cooperative n-person game in extensive form with symmetric players, discrete time, finite horizon, and complete information. The model is motivated by many social dilemma situations where (i) people have to decide independently whether to volunteer for a costly action that requires only a single individual to accomplish it, (ii) the decision takes place in real time so potential volunteers can observe each other’s behavior and wait for one of the others to act, and

(iii) delay is costly both for volunteers and non-volunteers.

Players in the dynamic volunteer’s dilemma game face the problem of equilibrium selection; the game possesses asymmetric subgame perfect equilibria in which a single player volunteers immediately, and a symmetric subgame perfect equilibrium (in behavioral strategies) in which each of n identical players fully

randomizes over her two actions, volunteer or not, at each time period. In each of the

asymmetric equilibria, the volunteer receives a lower payoff than each of the n-1 non-volunteers. Although these asymmetric equilibria are socially optimal, this payoff asymmetry may hamper tacit coordination on any of them. In the absence of pre-play communication or sufficient opportunities of learning, players may fail to find a volunteer and thereby fail to coordinate their actions on a specific asymmetric equilibrium. In the symmetric environment, the symmetric equilibrium, in which each of the players receives the same expected payoff, is more appealing as a predictor.

Experimental results consist of several findings. First, as implied by the symmetric equilibrium, a lower cost of volunteering elicits earlier termination of the game. Second, subjects largely fail to achieve one of the asymmetric equilibria but on average they receive a better outcome than they would have ended up with under the symmetric equilibrium. Third, subjects reveal heterogeneous propensities to volunteer.

For about 50 percent of the subjects, the hypothesis about the expected frequency of volunteering decisions cannot be rejected. The other 50 percent of the subjects are divided between “free riders,” who volunteer significantly fewer times than predicted, and “hard core” cooperators, who volunteer significantly more often than predicted.

Chapter 2 explores a new type of on-line auction, called the unique bid auction

(UBA), that recently has emerged on the Internet and gained widespread popularity in many countries. A new and major feature of this type of auction that sharply differentiates it from any other auctions is the uniqueness of the winning bid. In the lowest unique bid auction (LUBA), the winner of a prize is the one who submits the lowest unique bid. On the other hand, in the highest unique bid auction (HUBA), the winner is the one who submits the highest unique bid. Therefore, all ties are discarded, and with a positive probability the auction may terminate with no winner.

The introduction of this new feature renders the derivation of equilibrium solutions in closed-form extremely challenging. I propose an algorithmic procedure that uses non-stationary Markov chains to numerically compute the symmetric mixed-strategy

Nash equilibrium solution. To keep the model tractable, the number of bidders is assumed commonly known and each bidder is allowed to only submit a single bid.

Results from laboratory experiments indicate that the equilibrium solutions do not account well for the bidding behavior on the individual level in both the unique lowest and unique highest bid auctions. This is partly due to the inclination of some subjects to choose the same bid repeatedly over iterations of the auction. On the other hand, the behavior on both the aggregate and group levels is accounted for quite well by the equilibrium solution in the lowest unique bid auction but not in the highest unique bid auction. However, with experience, the bidding behavior on the aggregate and group levels moves in the direction of equilibrium in the last 30 rounds of the highest unique bid auction.

In Chapter 3, I propose a discrete version of William Vickrey’s (1969) bottleneck model, which analyzes urban traffic congestion on a single road caused by a single bottleneck with a finite capacity. My motivation for discretizing Vickrey’s continuous model is twofold. First, many researchers have followed a common practice in transportation science, economics, and other disciplines that uses continuous models (i.e., a continuum of players and continuous strategy space) to analyze phenomena that are essentially discrete. I argue that in some cases the predictions derived from the continuous model may not provide good approximations to phenomena that are discrete in nature. This is the case when congestion involves only a few decision makers such as ships seeking to use canal locks, and passengers queueing at airport security gates. I wish to compare the two formulations in order to determine how good the approximations are to the associated discrete case. Second, I wish to develop a model to account for the experimental implementation of the bottleneck game in the laboratory. Whether in an experiment or even naturally occurring settings, in order to implement a mechanism or test a theory one has to use a discrete strategy space and a finite number of players.

Moreover, as the number of participants in experiments and in some traffic congestion applications is typically small, the approximations provided by the continuous model may fail.

A variant of the algorithmic procedure that is used in Chapter 2 is proposed to numerically compute the symmetric mixed-strategy Nash equilibrium solution to the discrete bottleneck game with a finite number of players and discrete strategy space. The discrete bottleneck game is formulated as a game of complete information in which the number of players, strategy space, and cost structure are commonly known.

Extensive numerical computations indicate the presence of systematic discrepancies when the continuous model is used to account for departure times in traffic networks in which the number of players is relatively small. The computations further show that as the population size grows the difference between the continuous and discrete models of traffic congestion caused by a bottleneck can safely be ignored.

To bridge the gap between theory and practice in traffic networks, two extensions

of the discrete model are proposed. The first extension concerns the case where the

number of commuters is a random variable, rather than a constant, whose distribution is

commonly known. The second extension deals with the case in which an alternative

transportation mode not subject to congestion is available.

Each chapter is self-contained. Therefore, no general conclusion is provided at the

end of the dissertation. Instead, conclusions and future research associated with each

topic are discussed at the end of each chapter.


CHAPTER 1: DYNAMIC VOLUNTEER’S DILEMMAS

1.1 Introduction

There is by now a voluminous body of research in public economics on volunteer’s dilemmas in which one or more group members are required to make costly contributions in order to provide benefit to all the group members. 1 Closely associated with research on the war of attrition, which has been studied in economics and biology, this research has recently been extended from static to dynamic volunteer’s dilemmas that evolve over time. 2 My interest here is in the class of n-person dynamic volunteer’s

dilemmas where only a single contributor, called volunteer , is required for the public

good to be provided, and the value of the public good is commonly known to diminish

over time. Participants in these social dilemmas have to decide whether to volunteer and

if so, at what time to do so. Variants of the dynamic volunteers’ dilemma game have been

studied theoretically by several researchers (e.g., Bilodeau and Slivinski, 1996; Bliss and

Nalebuff, 1984; Hendricks et al., 1988; Shapira and Eshel, 2000; Weesie, 1993, 1994).

The models that have been proposed differ from one another depending on whether the

game is formulated in a strategic or extensive form, time is continuous or discrete, the

time horizon is finite or infinite, the information provided to the group members is

1 For the original volunteer’s dilemma game and its extension, see Diekmann (1985, 1993).

2 The problem of finding someone who incurs the full cost of providing public goods in the context of real-time decision making bears a resemblance to the war of attrition. The war of attrition was intensively studied in biology, where Maynard Smith (1974) originally posed it as a conflict between two animals fighting over prey. Later on, it has widely been applied to economic analyses of, for example, firm exit from a duopoly market (Fudenberg and Tirole, 1986), an oligopoly market (Bulow and Klemperer, 1999), and many others.

Examples that have motivated these models include public services such as getting up at night to quiet a crying baby, cleaning public toilets (Bilodeau and Slivinski,

1996), and chairing a university department. Quoting Dawkins (1976), Shapira and Eshel give an example of masses of emperor penguins standing on the brink of the water and hesitating before jumping in because of the danger of falling prey to seals. At least one of them has to dive in for the rest to know whether there is a seal lurking in the water. Yet another example from biology can be found in the behavior of a group of foraging animals (e.g., marmots) in which one of them occasionally looks up, checks for a predator, and issues an alarm thereby increasing the risk of attracting the predator. The most infamous and disturbing example (see, e.g., Weesie, 1993, 1994) is the 1964 Kitty

Genovese murder case in which the victim was sexually assaulted and stabbed to death in the courtyard of her apartment complex in the city of New York. Despite the fact that 38 people were watching the brutal attack from behind their windows, no spectator volunteered to help. In fact, the police were not called until the attack was over. Common to all of these examples is that the utility each group member derives from the provision of the collective good—stopping a baby from crying at night, issuing an alarm, or calling the police—diminishes over time.

In contrast to a large body of theoretical studies on dynamic volunteer’s dilemmas and war of attrition games, little experimental work has been conducted to answer the question of how people actually behave in such situations. One exception is Bilodeau et al.

(2004) in which they tested the predictive power of a unique subgame perfect equilibrium to their asymmetric dynamic volunteer’s dilemma game. 3 I focus on a class of dynamic

volunteer’s dilemma games, which are formulated in an extensive form, with n

symmetric players, discrete time, finite horizon, and complete information in which the

payoffs to all the group members decrease over time. I first construct symmetric and

asymmetric subgame perfect equilibria for the game and then test them experimentally in

an attempt to observe and identify systematic behavioral regularities, if any, of financially

motivated subjects in a controlled laboratory environment.

The experimental results show several major behavioral patterns. First, as the cost

of volunteering is exogenously decreased, at least one volunteer emerges earlier. Second,

the results show no coordination on one of the multiple socially optimal asymmetric

equilibria in which a single subject volunteers at time period t=0. Only a minority of the

games played in the experiment support the asymmetric equilibria. Third, the data also

show systematic deviations from the symmetric subgame perfect equilibrium (SSPE) in

which each player should fully randomize her actions at each point in time. Subjects

volunteered, on average, significantly earlier than predicted. These deviations are

attributed to the heterogeneity of the subjects in terms of their inclination to free ride.

Substantial and consistent differences between subjects are observed with some subjects

exhibiting a strong inclination to volunteer with little or no regard to the free riding of

their cohort members, some subjects who never volunteer across all the 50 iterations of

the base game, and yet other subjects who adhere to equilibrium play.

3 See also Oprea et al. (2008) and Phillips and Mason (1997).

The rest of this chapter is organized as follows. In Section 1.2, I introduce

notation, formally describe the game, and then construct the symmetric and asymmetric

equilibria. Section 1.3 lists the research questions that I attempt to answer and describes

the design of my experiment. Section 1.4 states the results, and Section 1.5 concludes this

chapter with a brief discussion.

1.2 Dynamic Volunteer’s Dilemma Game

1.2.1 Model

The dynamic volunteer’s dilemma game examined in this chapter is a stylized model that depicts a situation in which each player attempts to free ride on the efforts of some other member of her group in a dynamic environment. There is a group of n symmetric, risk-neutral players. Time is discrete and finite: there are T+1 time periods, 0,

1, …, T. At each time period, the players are asked to decide independently and anonymously whether to contribute to the provision of a public benefit. The game starts at period t=0 and terminates either when at least one of the n players contributes at time

period t≤T or when time period T ends with no contributor, whichever occurs first. The first player who decides to unilaterally contribute to the public good is called a volunteer .

Each player may volunteer at most once; players may also opt not to volunteer at all.

Denote by Ht and Lt the respective payoffs of the non-volunteer and volunteer at time period $t \in \{0, 1, \ldots, T\}$. The two payoff functions satisfy the following assumptions:

1. $H_t > L_t$ for all $t \in \{0, 1, \ldots, T\}$.

2. Both Ht and Lt are strictly decreasing in t.

3. If time period T ends with no volunteer, then each of the n players receives a fixed

payoff ε ( ε≥0), which is strictly smaller than LT.

The first assumption implies that for any stage of the game, volunteering is costly.

Hence, each player prefers to have someone else volunteer at any time period. The second assumption reflects the observation that delays in volunteering are costly to all the n players, and that the delay costs increase in time. The third assumption is necessary because the game has a finite horizon. A finite horizon implies that the game may terminate with no volunteer. Therefore, when the game ends with no volunteer, payoffs to all the n players must be specified.

The assumption of discrete time requires defining the volunteer’s and non-volunteer’s payoff functions for the case of multiple volunteers at the same time period.

Under certain circumstances, it is reasonable to assume that the costs of volunteering are equally shared among the multiple volunteers (Weesie and Franzen, 1998). For example, if several penguins simultaneously dive in to check whether there is a seal, then the chances that a given penguin is caught by seals decrease with the number of volunteering penguins diving into the ocean. Therefore, emperor penguins could share the cost of volunteering.

An alternative assumption was employed:

4. If multiple players volunteer simultaneously at period t, then each of them

receives Lt and all the others receive Ht.

This assumption is reasonable in cases where no matter how many players volunteer, each of them has to incur the full cost of volunteering. For example, if several bystanders jump into a river simultaneously to rescue a drowning child, all of them risk their lives as much as when only one of them jumps in. If several witnesses simultaneously call the police in case of a crime, then all of them have to be called in to testify. Hence, in such

situations, each of the multiple volunteers receives Lt. 4

1.2.2 Equilibrium Analysis

This game has n asymmetric subgame perfect equilibria in which a single player volunteers at time period t= 0. 5 Each of these equilibria is socially optimal because it maximizes the group’s welfare. Such equilibria yield asymmetric payoffs; the volunteer receives a lower payoff of L0 and each of the n-1 non-volunteers a higher payoff of H0.

There also exists a unique symmetric subgame perfect equilibrium (SSPE), in which each

player fully randomizes over her actions (i.e., volunteer or do not volunteer) at each time

period. In a sharp contrast to the first type of the equilibria, the SSPE yields the same

equilibrium payoff to all the players.

To derive the SSPE, backward induction is invoked. Consider the subgame

starting at time period t=T. Table 1.1 illustrates the payoff matrix of this subgame. This

subgame has a unique symmetric Nash equilibrium in mixed strategies. Let $\sigma_T$ denote the probability that each player volunteers at time period T. This probability is determined in such a way that this player is indifferent in terms of her expected payoff between volunteering and not volunteering. In other words,

4 This assumption is not necessary to derive an equilibrium solution. However, it allows me to derive a closed-form solution.

5 More precisely, at any time period in such equilibria, one of the n players volunteers whereas each of the other n-1 players does not.

$L_T = \varepsilon (1 - \sigma_T)^{n-1} + H_T \left[ 1 - (1 - \sigma_T)^{n-1} \right]$. 6

Then, the equilibrium probability of volunteering at time period t=T is

$\sigma_T = 1 - \left( \frac{H_T - L_T}{H_T - \varepsilon} \right)^{\frac{1}{n-1}}.$

Each player’s equilibrium payoff in this subgame is LT.

The same procedure is now repeated backward until the initial time period t=0.

Let $\sigma_t$ denote the probability that each player volunteers at time period t. Then, the

equilibrium probability of volunteering at time period t is

$\sigma_t = 1 - \left( \frac{H_t - L_t}{H_t - L_{t+1}} \right)^{\frac{1}{n-1}}.$

Each player’s equilibrium payoff in this subgame is Lt. Then, the (behavioral) strategy

profile $\sigma = \{\sigma_i\}_{i=1}^{n}$, where $\sigma_i = (\sigma_0, \sigma_1, \ldots, \sigma_T)$ for each player i, constitutes the unique

symmetric subgame perfect equilibrium. Note that I have just derived a Nash equilibrium

in behavioral strategies that specify the probability of volunteering at each time period,

conditional on reaching it. It is not a strategic-form mixed strategy over the entire

strategy space. It has long been established (Kuhn, 1953) that under perfect recall—a

condition that I assume in the present study—they are equivalent (see, e.g., Fudenberg

and Tirole 1991).

To compute an equilibrium payoff induced by the SSPE, the probability

distribution of termination time of the game has to be computed. Let Pt denote the

probability of termination at time period t (t=0, 1, ..., T). Given the equilibrium strategy

6 This player can assure herself a payoff of LT by volunteering.

profile σ, the probability of termination at time period t, provided that the game did not

terminate before t, is computed from

$1 - (1 - \sigma_t)^n = 1 - \left( \frac{H_t - L_t}{H_t - L_{t+1}} \right)^{\frac{n}{n-1}}.$

Notice that the probability that the game does not terminate before t is $\prod_{\tau=0}^{t-1} (1 - \sigma_\tau)^n$. Then, $P_t$ is computed from

$P_t = \left[ 1 - (1 - \sigma_t)^n \right] \prod_{\tau=0}^{t-1} (1 - \sigma_\tau)^n \quad \text{for } t \in \{1, 2, \ldots, T\}$

with $P_0 = 1 - (1 - \sigma_0)^n$. Denote by $P_{NV}$ the probability of termination with no volunteer.

Then,

$P_{NV} = \prod_{\tau=0}^{T} (1 - \sigma_\tau)^n.$

The equilibrium payoff induced by the SSPE is computed from

$E = \sum_{t=0}^{T} P_t \cdot L_t + P_{NV} \cdot \varepsilon.$

The SSPE σ yields the same equilibrium payoff of E to each of the n players. Note that

the SSPE is a Pareto deficient equilibrium.
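The backward-induction computation above lends itself to a direct numerical implementation. The following Python sketch is my own illustration (the function name and interface are hypothetical, not part of the dissertation); it computes the SSPE volunteering probabilities, the induced termination-time distribution, the no-volunteer probability, and the equilibrium payoff from any payoff sequences H and L that satisfy Assumptions 1-3.

```python
import numpy as np

def sspe(H, L, eps, n):
    """Symmetric subgame perfect equilibrium of the dynamic volunteer's dilemma.

    H, L : sequences of length T+1 with H[t] > L[t], both strictly decreasing;
    eps  : payoff to everyone if nobody volunteers (eps < L[T]);
    n    : number of symmetric players.
    Returns (sigma, P, P_nv, E) as defined in the text.
    """
    T = len(H) - 1
    sigma = np.zeros(T + 1)
    # Backward induction: the continuation value of waiting at period t is the
    # equilibrium payoff L[t+1] of the subgame starting at t+1 (eps at the horizon).
    for t in range(T, -1, -1):
        cont = eps if t == T else L[t + 1]
        sigma[t] = 1.0 - ((H[t] - L[t]) / (H[t] - cont)) ** (1.0 / (n - 1))
    # Termination-time distribution induced by the equilibrium strategy profile.
    P = np.zeros(T + 1)
    survive = 1.0                       # probability that the game reaches period t
    for t in range(T + 1):
        P[t] = survive * (1.0 - (1.0 - sigma[t]) ** n)
        survive *= (1.0 - sigma[t]) ** n
    P_nv = survive                      # probability of ending with no volunteer
    E = float(np.dot(P, np.asarray(L)) + P_nv * eps)
    return sigma, P, P_nv, E
```

Because each subgame's equilibrium payoff equals the volunteer payoff of that period, only the one-period-ahead value L[t+1] is needed at each step, which is what keeps the recursion this simple.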

1.3 Research Questions and Experimental Design

1.3.1 Research Questions

The payoff functions used in the experiment were

$H_t = 20\,(e^{-0.1t} - e^{-6}) + \varepsilon, \qquad L_t = 20\,\delta\,(e^{-0.1t} - e^{-6}) + \varepsilon$

for t=0, 1, …, T, where 0 < δ < 1. Let n=3, T=30, and ε=1.00. Consider two experimental

conditions where δ=0.3 and δ=0.6, respectively, and their associated payoffs are

presented in Table 1.2. It is easily verified that these payoff functions satisfy all of the

four assumptions mentioned before. The upper panel of Figure 1.1 exhibits the

equilibrium probability distributions of termination time $(P_0, P_1, \ldots, P_{30}, P_{NV})$ for the two conditions, which are derived under the SSPE play ($\sigma = \{\sigma_i\}_{i=1}^{3}$, where $\sigma_i = (\sigma_0, \sigma_1, \ldots, \sigma_{30})$). The lower panel of Figure 1.1 displays their cumulative

distributions for Conditions δ=0.3 and δ=0.6.
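For concreteness, here is a minimal self-contained sketch (my own illustration, assuming the payoff functions and parameter values stated above) that evaluates Ht and Lt and the resulting SSPE termination-time probabilities for the two conditions; the printed quantities are only meant to mirror what Figure 1.1 plots.

```python
import numpy as np

n, T, eps = 3, 30, 1.00

for delta in (0.3, 0.6):
    t = np.arange(T + 1)
    H = 20 * (np.exp(-0.1 * t) - np.exp(-6.0)) + eps
    L = 20 * delta * (np.exp(-0.1 * t) - np.exp(-6.0)) + eps
    cont = np.append(L[1:], eps)                 # value of waiting one more period
    sigma = 1 - ((H - L) / (H - cont)) ** (1 / (n - 1))
    survive = np.concatenate(([1.0], np.cumprod((1 - sigma) ** n)))
    P = survive[:-1] * (1 - (1 - sigma) ** n)    # P_0, ..., P_30
    P_nv = survive[-1]                           # probability of no volunteer
    print(f"delta={delta}: P_0={P[0]:.3f}, P_NV={P_nv:.3f}, total={P.sum() + P_nv:.3f}")
```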

The following research questions are formulated:

Q.1 Does δ=0.6 induce earlier volunteering than δ=0.3?

Given the payoff functions above, δ=0.3 (δ=0.6) implies a relatively high (low)

cost of volunteering at each point in time. Subjects may be inclined to wait longer and

abstain from volunteering if δ=0.3 than if δ=0.6. Figure 1.1 shows that under the SSPE play, a volunteer is likely to emerge considerably earlier when δ=0.6 than when δ=0.3.

Q.2 Can subjects coordinate their actions and achieve one of the socially optimal

subgame perfect equilibria in which a single subject volunteers at time period

t=0?

One may anticipate failure to coordinate the group’s actions on any of such asymmetric equilibria in which the volunteer receives a lower payoff than each of the n-1 non-volunteers. The reason is that this payoff asymmetry may render these equilibria difficult to realize because players have no clue of which equilibrium should be chosen.

In the absence of pre-play communication or sufficient opportunities of learning, players may not succeed in coordinating their actions on one of such equilibria. Evidence supporting this prediction has already been documented by Bilodeau et al. (2004). They incorporated into their model heterogeneous characteristics of agents such as different costs, benefits, and lengths of life. These allow agents to single out a focal player who should volunteer at time period t=0. Their experimental results showed that even with

heterogeneous players this unique socially optimal subgame perfect equilibrium was

realized in only 133 out of 472 cases (28.2 percent). This suggests that in a symmetric

environment, in which homogeneous subjects face a coordination problem of determining who should volunteer at time period t=0, it would be even more difficult to achieve one of the asymmetric equilibria.

As shown in the previous section, the dynamic volunteer’s dilemma game possesses a unique symmetric subgame perfect equilibrium in which each player fully randomizes over her actions at each time period. Therefore, I also want to answer the following question:

Q.3 Is the behavior of subjects consistent with symmetric subgame perfect equilibrium

play?

1.3.2 Experimental Design

Subjects

The subjects were 90 University of Arizona undergraduate and graduate students, who volunteered to participate in a group decision making experiment for payoff contingent on performance. Subjects interacted with one another in cohorts of 9, five cohorts (sessions) in Condition δ=0.3 and five other cohorts in Condition δ=0.6. A between-group design was used in both conditions. Each session lasted about 90 minutes.

Excluding a $5 show-up bonus, the mean individual payoff for Conditions δ=0.3 and

δ=0.6 was $23.76 and $14.21, respectively.

Procedure

All the ten sessions were conducted in the same manner. The nine members of each cohort were randomly seated in a large computer laboratory and handed written instructions. The subjects were separated from one another by partitions that prevented any form of communication between them. In each session, the subjects participated in

50 identical iterations (called “ rounds ”) of the dynamic volunteer’s dilemma game

described in Section 1.2. In each round, the 9 members of each cohort were randomly and

anonymously assigned to 3 groups of 3 members each. They were instructed that the

group composition would change randomly from round to round. Consequently,

reputation building was not possible.

There were 31 periods in each round, including period 0. Each period lasted 1.5

seconds. Watching the game unfolding and the clock advancing, the subject’s task on

each period was to decide whether to stop the clock (= volunteering). 7 Stopping the clock was accomplished by simply moving the cursor outside of a red circle on the screen.8

Once the first player in a group stopped the clock, the game for this group (but not necessarily for the other two groups in the same cohort) was over. If none of the group

7 To prevent any social implications, the game was framed in neutral terms with “stopping the clock” substituted for “volunteering”. The subject who was the first to stop the clock was designated as the Stopper, and the others will be designated as the Non-stoppers.

8 This procedure was used to avoid the noise of clicking that might have conveyed information to members of the other two groups.

members stopped the clock, then the game lasted a total of 46.5 seconds (31×1.5). At the

beginning of each round, the clocks of the three groups in each session were

synchronized. All the 9 members of a cohort waited until the final period elapsed. The

subjects were explicitly instructed that stopping the clock was not mandatory.

Payoffs to the stopper, if any, and to the non-stoppers were determined by a table

in the instructions that the subjects could consult before and during the experiment. The

payoffs could also be read directly from a screen that depicted the two separate payoff

functions and, in addition, updated the payoffs on each period and listed them

numerically on the screen. Payoffs were stated in experimental currency called “points”.

At the end of the session, points were converted into US dollars at the rate 20

points=$1.00 in Condition δ=0.3 and 50 points=$1.00 in Condition δ=0.6.

Once the round was completed, the outcome was presented on the individual screens. This screen informed the subject whether or not she stopped the clock on that round, recorded the period on which the game terminated, and presented each subject with her payoff for the round. Information about the decisions and payoffs of the two other groups in the same round was not disclosed.

The experiment started with practice rounds intended to familiarize the subjects

with how to use the “mouse” to submit their decisions and get acquainted with the

operation of the clock. Subjects were not paid for these practice rounds that they could

repeat as many times as they wished. Once all the members of a cohort completed the

practice rounds (typically 2-4 rounds) by each independently pressing the button “I’m

ready to start playing,” the session started.

1.4 Results

1.4.1 Four Major Findings

Finding 1: Effects of Costs of Volunteering

Figure 1.2 exhibits the observed cumulative relative frequency distributions of termination time for Conditions δ=0.3 and δ=0.6. Each of the two distributions is based on 750 observations (3 groups × 50 rounds × 5 sessions). Comparison of these distributions (see statistical evidence below) shows that, as predicted by the SSPE

(bottom panel of Figure 1.1), subjects in Condition δ=0.6 stopped the clock, on average,

earlier than subjects in Condition δ=0.3.

The mean termination times across the five sessions in Condition δ=0.3 and the five sessions in Condition δ=0.6 were 8.79 and 3.51, respectively. 9 The mean termination times in the five sessions in Condition δ=0.6 were 3.11, 5.11, 3.25, 2.51, and 3.57. All of these five means are smaller than 5.2. In contrast, the five mean stopping times in

Condition δ=0.3 were 9.56, 9.67, 8.09, 7.94, and 8.69. All of them exceed 7.9. Taking the session as the unit of analysis, the null hypothesis that the two conditions are identical in terms of their mean termination times was rejected ( p<0.01, using the Mann-Whitney U test). The results show that, on average, subjects stopped the clock in all sessions in

Condition δ=0.6 earlier than in all the five sessions in Condition δ=0.3.

Finding 2: Coordination Failure on a Socially Optimal Asymmetric Equilibrium

9 The case in which a round ends with no stopper was considered in the analysis by assigning the value of 31 to such a case.

Table 1.3 lists the observed frequency distributions of termination time for periods 0, 1, 2-4, … , 8-10, …, 17-30, and for the no stopping decision (i.e., no stopping or “NS”). The results are presented for each session separately and across sessions. Table

1.3 shows that the clock was stopped at time period t=0 a total of 113 times in Condition δ=0.3 and

233 times in Condition δ=0.6 (out of 750 observations in each condition). 10 These

frequencies include the cases in which multiple subjects stopped the clock at t=0, which

are not socially optimal. After removing these cases, the results show that subjects

successfully achieved one of the socially optimal asymmetric equilibria 111 times in

Condition δ=0.3 and 204 times in Condition δ=0.6. Compared with 14.8 percent of all the rounds in Condition δ=0.3 terminating with exactly one of the group members

volunteering, the corresponding percent in Condition δ=0.6 almost doubled to a value of

27.2. This latter value is practically identical to the one reported by Bilodeau et al. (2004).

It could be argued that since the clock used in the experiment moved rather

quickly, taking about 1.5 seconds per period, players wishing to stop the clock

immediately might have been late in responding due to slow reaction time. Therefore,

“stopping the clock immediately” is defined to mean stopping the clock at period t=0 or t=1. Under this definition, out of 150 decisions (50×3) in each session, there were 36, 20,

32, 38, and 51 games in Sessions 1 to 5, respectively, in Condition δ=0.3 in which only one subject stopped immediately. The corresponding frequencies in Condition δ=0.6 were 79, 45, 53, 75, and 61. Therefore, one of the socially optimal asymmetric subgame

10 In each condition a minority of the subjects stopped the clock at t=0 far more frequently. The maximum number of times a single subject stopped the clock at t=0 was 29 (out of 50 rounds) in Condition δ=0.3 and 40 in Condition δ=0.6. Eighty percent of the subjects stopped the clock at t=0 no more than four times in Condition δ=0.3 and no more than nine times in Condition δ=0.6.

perfect equilibria was almost achieved 177 times (23.6 percent) in Condition δ=0.3 and

313 times (41.7 percent) in Condition δ=0.6.

In spite of realizing the benefit of volunteering early, subjects may start the session by stopping the clock too late. Then, as they gain more experience with the task and learn more about the behavior and frequency of free riders, they may progressively advance their stopping times and eventually achieve one of the socially optimal subgame perfect equilibria. Murphy, Rapoport and Parco (2006a) reported such patterns of behavior in trust dilemmas that also evolve over real time. 11 No evidence is found in support of this hypothesis. Figure 1.3 displays the mean termination time computed across the three groups, mt, for rounds 1 through 50 (t=1, …, 50). The results are

exhibited in ten panels by condition and by session within condition.

To investigate the (linear) association between the values of mt and the round number t, I computed the Spearman rank-order correlation coefficient, r, for each session in each condition (see linear regression lines in Figure 1.3). The coefficient values ranged between -0.28 (Session 4) and 0.20 (Session 2) in Condition δ=0.3 and between -0.07

(Session 3) and 0.28 (Session 5) in Condition δ=0.6. In both conditions, only the correlation coefficients of two cohorts (Session 4 in Condition δ=0.3 and Session 5 in

Condition δ=0.6) are statistically different from zero ( p<0.05), positive in one case and negative in the other. There is no evidence for convergence to the socially optimal subgame perfect equilibria over the 50 rounds of play.

11 In sharp contrast to the present dynamic volunteer’s dilemma game, however, their game possesses the unique subgame perfect equilibrium in which all players should stop the clock at t=0.

Finding 3: Earlier Volunteering than Predicted by the SSPE

The dynamic volunteer’s dilemma game examined in the present experiment possesses (i) asymmetric subgame perfect equilibria in which a single player stops the clock immediately and (ii) a unique symmetric subgame perfect equilibrium (SSPE) in which each player randomizes over two actions, stop the clock or not, at each time period.

The latter equilibrium prescribes a positive probability to stopping the clock immediately

at t=0, i.e., $\sigma_0 > 0$, which implies that the game terminates at period t=0 with a positive

probability, i.e., $P_0 > 0$ (see the upper panel of Figure 1.1). Therefore, all the observed games that ended immediately at period t=0 support both the asymmetric and symmetric equilibria.

To test the symmetric equilibrium, I employ the definition of “stopping the clock immediately” that was introduced earlier, which means stopping the clock at either t=0 or t=1. The probability distribution of termination time is normalized by subtracting the

equilibrium probabilities of termination at periods t=0 and t=1 (i.e., P0 and P1 ). Then, the new equilibrium probability of termination at period t ( t=2, 3, …, T) is

$\tilde{P}_t = P_t / (1 - P_0 - P_1)$ and the new equilibrium probability of no volunteer is

$\tilde{P}_{NV} = P_{NV} / (1 - P_0 - P_1)$. Figure 1.4 exhibits the new predicted and observed relative

frequency distributions of termination time ( t≥2) by session and across all sessions (upper

left corner) for Condition δ=0.3. Figure 1.5 exhibits the corresponding distribution for

Condition δ=0.6. Inspection of these figures shows similar group patterns for all five sessions in each condition. In every case, subjects stopped the clock, on average, earlier than predicted. The Kolmogorov-Smirnov (K-S) one-sample test—a test of goodness of fit—was invoked to test the null hypothesis that the observed termination time distributions at or after t=2 are drawn from a population having the (normalized)

equilibrium distribution. The null hypothesis was rejected for each of the five sessions in

Condition δ=0.3 and for Sessions 3 and 4 in Condition δ=0.6.

Stopping the clock earlier than predicted had significant monetary implications for the subjects. As illustrated in Table 1.2, group payoff is maximized if one of the three members of the group stops the clock immediately as the clock starts ticking. For each condition separately, I tested the null hypothesis that the mean payoff is equal to the expected payoff under equilibrium play. Using the single-sample t-test, the hypothesis

was clearly rejected (t=15.36 and t=4.83 for Conditions δ=0.3 and δ=0.6, respectively, p<0.01 in each case). The expected individual payoff per session under equilibrium play is $8.39 for

Condition δ=0.3 and $9.37 for Condition δ=0.6. On average, the subjects in Condition

δ=0.3 earned 2.83 times more than expected, and the subjects in Condition δ=0.6 earned

1.52 times more than expected. Deviations from equilibrium behavior in the direction of

early exit paid off handsomely.

Finding 4: Individual Differences in Volunteering

Recall that the experiment was designed in such a way that the computer software recorded only stopping decisions of the volunteers, at most one in each group and three in each round. 12 Therefore, non-volunteers were not given the opportunity to record their stopping decisions because each round terminated on the volunteer’s move. This implies

12 If none of group members stops the clock in a given round, then “No stopper” is recorded as the decision of the group in this round. 33 that even a few subjects who commit themselves to volunteering at early time periods might have caused deviations from the SSPE behavior. To determine whether deviations from the equilibrium solutions are due to a minority of “hard core” volunteers, I computed the expected number of stopping the clock in a session. Each subject either stops the clock or not on each of 50 adjacent rounds, independently of the previous outcome. Therefore, the total number of stopping decisions is binomial. The probability that a player becomes a volunteer in a round is computed from

σ + σ −σ 3 + σ − σ 3 + σ − σ 3 0 1 1( 0 ) 2 ∏ 1( τ ) ... 30 ∏ 1( τ ) . τ <2 τ <30

The probability for Condition δ=0.3 ( δ=0.6) is 0.31 (0.36). Then, for each condition, I computed the probability distribution of the number of stopping decisions under the

SSPE play. These distributions are approximately normal with expected number of 15.58 stopping decisions in Condition δ=0.3 and 17.80 in Condition δ=0.6. The corresponding standard deviations are 3.28 and 3.39. Then, I computed the central 99 percent intervals around the expected number of stopping decisions: [7, 25] and [9, 27] for Conditions

δ=0.3 and δ=0.6, respectively. These are displayed in Figure 1.6.
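The calculation just described can be written out as follows; this is my own sketch (the helper name is hypothetical), with the SSPE probabilities σt assumed to come from the computation in Section 1.2.2 and p = 0.31 taken from the value reported above for Condition δ=0.3.

```python
import numpy as np
from scipy.stats import binom

def volunteer_probability(sigma, n=3):
    """P(a given player volunteers in a round):
    sum over t of sigma_t * prod_{tau < t} (1 - sigma_tau)^n."""
    sigma = np.asarray(sigma, dtype=float)
    reach = np.concatenate(([1.0], np.cumprod((1 - sigma) ** n)))[:-1]
    return float(np.sum(sigma * reach))

# Over 50 independent rounds the number of stopping decisions is Binomial(50, p).
p, rounds = 0.31, 50
mean, sd = rounds * p, (rounds * p * (1 - p)) ** 0.5
lo, hi = binom.ppf(0.005, rounds, p), binom.ppf(0.995, rounds, p)
print(f"mean={mean:.2f}, sd={sd:.2f}, central 99% interval=[{int(lo)}, {int(hi)}]")
```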

Figure 1.6 illustrates a major finding concerning individual differences in the

decision to be the first to volunteer. Approximately one half of the subjects (24 of 45 in

each of the two conditions) behaved in agreement with the SSPE. Of the remaining 21

subjects in Condition δ=0.3, 12 stopped the clock fewer times than expected, and 9 stopped it more times than expected. 13 The results for Condition δ=0.6 are exactly the

13 In Condition δ=0.3, five subjects never volunteered, and one was the first to volunteer on 49 of the 50 rounds.

same: 24 of the 45 subjects behaved in agreement with the SSPE, 12 stopped the clock significantly less frequently than expected, and 9 stopped the clock significantly more often than expected. In each condition, the observed distribution of number of stopping decisions has a considerably larger variance than predicted, testifying to the considerable heterogeneity of the subjects with respect to the critical decision in the present study, namely, if and at what time to volunteer. The results also show that, in each condition, the subjects who volunteered fewer times than expected stopped the clock, on average, later than those subjects who volunteered more frequently than predicted. 14

1.4.2 Other Findings

The decision whether to be the first to volunteer is most likely shaped by moral considerations and social conventions. If, in fact, this is the case, then under random assignment of subjects to groups in each of the experimental conditions the observed relative frequency distributions of individual number of stopping decisions should be the same for both conditions. In agreement with this prediction, the two-sample K-S test could not reject this null hypothesis ( D=0.133, p>0.1). 15 Figure 1.7 exhibits the two

observed cumulative relative frequency distributions of the individual number of stopping

decisions, which are seen to track each other rather closely.

14 Five of the 12 subjects in Condition δ=0.3 who volunteered fewer times than predicted never volunteered. The mean stopping time for the other 7 subjects in Condition δ=0.3 is 19.12, and the mean for the 9 subjects in this condition who volunteered more often than predicted is 5.82. The difference between these means is significant by the Mann-Whitney test ( p<0.05). The corresponding means for Condition δ=0.6 are 9.34 and 2.14. The difference between these two means is also significant by the Mann-Whitney U test (p<0.05).

15 D is the K-S test statistic.

Does it pay to be the first to volunteer? In both conditions of the experiment, the answer is positive if the volunteer’s dilemma game is played only once. If it is played for multiple rounds, even when the group composition is determined randomly on each round, then the answer is not as clear due to the possibility of strategic play by some of the subjects. For example, a subject may decide to be the first to volunteer on early rounds of the session in order to establish a social norm of early stopping in her cohort.

On later rounds, hiding behind the veil of anonymity, she may decide to free ride on the early stopping decisions of her cohort members. To answer this question, the individual payoff for the session was correlated with the individual number of exit decisions. The linear correlation values were negative and highly significant: -0.701 and -0.487 for

Conditions δ=0.3 and δ=0.6, respectively ( p<0.01 in each case). Figure 1.8 displays the

scatter plot and the linear regression line for each condition. In Condition δ=0.3, any increase by a single decision to volunteer decreased the individual payoff by 3.9 points.

The corresponding value for Condition δ=0.6 (2.6 points) is smaller, reflecting the

difference in the cost of volunteering between these two conditions.

1.5 Conclusion

Chapter 1 examines a class of n-person dynamic volunteer’s dilemma games which are formulated in an extensive form, with n symmetric players, discrete time, finite horizon, and complete information. In this class of dilemmas, only a single volunteer is required for the provision of a public good and the value of the good diminishes over time, i.e., the payoffs to all group members decrease over time. I have derived the symmetric and asymmetric subgame perfect equilibria for the game and tested their implications in the controlled environment of the laboratory.

volunteering (i.e., a higher value of δ) elicited earlier termination of the game. This

suggests that the lower the cost of volunteering, the earlier one or more individuals

emerge for the provision of a public benefit. Second, subjects struggled to achieve one of

the socially optimal asymmetric equilibria but they received a better outcome than what

they would have ended up with under SSPE play. Subjects largely failed to coordinate

their actions on one of the socially optimal asymmetric subgame perfect equilibria in

which only a single subject volunteers immediately at time period t=0. At the same time,

they volunteered, on average, earlier than predicted by the SSPE. These results persisted

in each of the two conditions, but more so in Condition δ=0.3. Third, subjects exhibited

heterogeneous propensities to volunteer. About 50 percent of the subjects in each

condition adhered to SSPE play in terms of the expected frequency of stopping decisions

(see Figure 1.6). The other 50 percent were divided almost equally between “free riders”

(left side of the lower bound of the central 99% interval), who stopped the clock

significantly fewer times than predicted by the SSPE, and “hard core volunteers” (right

side of the upper bound of the central 99% interval), who stopped the clock significantly

more often than predicted. “Hard core volunteers” deserve this name because they

persisted in stopping early despite a sharp decline in their earnings. “Free riders” greatly

benefited, as they always do, from the presence of the “hard core volunteers.” The

systematic deviation from the equilibrium distribution of stopping times is mostly due to a substantial fraction of the “hard core volunteers” who opted to stop the clock in the first 3-5 time periods of the game.

Experimental studies of iterated public good games almost invariably show that the propensity to free ride increases with more experience in playing these games. This holds for most experiments on public good provision that implement the Voluntary

Contribution Mechanism (VCM, see, e.g., reviews of public good experiments by

Camerer, 2003 and Ledyard, 1995) as well as for the continuous-time trust-based dilemmas experiment reported by Murphy et al. (2006a). In contrast, the results of the iterated dynamic volunteer’s dilemma present no discernible effects of learning. At least two explanations for this finding suggest themselves. The first has to do with the mechanism used to elicit responses. The decision method —the one used in the present

study—records the stopping time of the volunteer; non-volunteers are not provided with

the option of recording their intended stopping times because the game terminates on the

volunteer’s action. Consequently, players never learn about the intended stopping times of non-volunteers. The strategy method provides players with information about the propensity to volunteer (or not volunteer) of all the group members. In a second study on continuous-time trust-based dilemmas, Murphy et al. (2006b) reported evidence that when credible signaling is possible the decline in cooperation over iterations of the stage game observed by Murphy et al. (2006a) under the decision method did not occur. A comparison of the decision and strategy methods in the present dynamic volunteer’s dilemma game is called for to assess the impact of credible signaling, or lack of it, in the iterated dynamic volunteer’s dilemma game.

A second possible explanation is that, in contrast to other public good mechanisms (e.g., the VCM), a single player in the dynamic volunteer’s dilemma can

ensure the provision of the good. There is no need for a player intending to volunteer to

depend on the actions of others. This hypothesis may be experimentally tested by

extending the game in the present study to public good games where m (m > 1) volunteers are required for the public good provision, such that the value of the good for the m volunteers and the possibly different value for the n-m non-volunteers are determined by the timing of the last player to act voluntarily.

CHAPTER 2: UNIQUE BID AUCTIONS

2.1 Introduction

This chapter considers a unique bid auction, a new type of auction that has

rapidly been gaining widespread popularity particularly in Great Britain, Sweden,

Germany, Australia, and the US. The new feature of this type of auction, that sharply

differentiates it from previous auctions, is the uniqueness of the winning bid. The

common rule in classical auctions (e.g., first-price sealed-bid auction) is to break ties with

some lottery mechanism. In contrast, in this new type of auction ties are not considered.

Rather, a necessary condition for winning the auction is for the bid to be unique. This

type of auction is called the unique bid auction (UBA).

In a typical UBA an auctioneer wishes to sell a particular good. Bids are integers

in local currency such as US dollars. The auctioneer specifies the maximum amount of

bid, which is usually set much lower than the value of the good, and the maximum

number of bids required to close the auction. The auctioneer does not restrict bidders to a single bid; each bidder may submit multiple bids. For each bid, an entry fee is charged. In the lowest unique bid auction (LUBA), the winner is the bidder submitting the lowest unique bid. In the highest

unique bid auction, the winner is the bidder who submits the highest unique bid. In the absence of a lowest (highest) unique bid, the auction terminates with no winner. If there is a winner, then she is awarded the right to purchase the good at her bid (i.e., at the winning bid). To ensure profit, the auctioneer closes the auction only after the minimum number of bids has been placed.16 The attraction for the bidder is that she may acquire the

good at well below its true value. For example, in a typical HUBA, a car valued at

$20,000 might be offered to bidders at a maximum bid price of $100. If the auctioneer charges a $10 entry fee per bid, then he would need at least 2,000 bids to cover the cost of

the car (excluding costs of conducting the auction and processing the bids). If the auction

closes with a unique highest bid of $80, then the winner would purchase the car for 0.4%

of its value and earn a profit of $20,000-(80+10). As reported by the Boston Globe

(02/04/2006), one of the websites specializing in LUBAs recently sold a laptop for $19, a living room suite for $43, and a Hummer SUV for less than $700. Another UBA site, Auction4acause.com, which specializes in the HUBA, sold an Apple iPhone 8GB (retail price $399) for $6.82, a Home Depot $500 gift card for $8.37, and a MacBook Air (retail price $1799.99) for $10.27.
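Making the arithmetic of the car example above explicit (these are the same hypothetical figures used in the text: a $20,000 car, a $100 maximum bid, and a $10 entry fee per bid):

```latex
\text{winner's net gain} = 20{,}000 - (80 + 10) = 19{,}910, \qquad
\text{auctioneer's break-even bid count} = \frac{20{,}000}{10} = 2{,}000 \text{ bids}.
```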

In this chapter, generic UBAs are devised that differ from UBAs in the real world in several details but that still capture the essential feature of UBAs, namely, the uniqueness of the winning bid. In the generic UBAs, each bidder is asked to choose a single integer (i.e., bid) from a common, pre-specified set of integers (i.e., bids) $B = \{\underline{b}, \underline{b}+1, \ldots, \bar{b}\}$, where $\underline{b}$ and $\bar{b}$ are the minimum and maximum bids,

respectively. The winner is the bidder who chooses the lowest (highest) unique integer in

the generic LUBA (HUBA). If there is no unique bid, then the auction ends with no winner. A major feature of the generic UBAs that sharply differentiates them from typical UBAs is how the payoff of the winner is determined: if there is a winner, her payoff is the integer she picked. All the other bidders receive nothing. To achieve tractability, it is also assumed that the number of bidders n is fixed and commonly known before the auction commences. To simplify the procedure, no entry fee is charged. For more details of the generic UBAs, see Section 2.3.

16 At some UBA sites, several conditions must be satisfied simultaneously to close an auction. For example, at Auction4acause.com, an auction will remain open until either (i) the maximum number of bids allocated for the auction is reached or (ii) the auction reaches a pre-specified maturation day and has received the minimum number of bids required to close. If the minimum number of bids has not been reached, the auction will be extended until the minimum number of bids has been reached.
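A minimal sketch of the winner-determination rule in these generic UBAs (the function name and the sample bid vector below are mine, chosen for illustration, not part of the dissertation's software):

```python
from collections import Counter

def unique_bid_winner(bids, lowest=True):
    """Return (winning_bid, winner_index) for a generic UBA, or None if no bid
    is unique. Each element of `bids` is one bidder's single integer bid."""
    counts = Counter(bids)
    unique = [b for b, c in counts.items() if c == 1]
    if not unique:
        return None                         # auction ends with no winner
    win = min(unique) if lowest else max(unique)
    return win, bids.index(win)             # the winner earns her bid `win`

# Example with n = 7 bidders and B = {1, ..., 7}:
print(unique_bid_winner([1, 2, 2, 3, 6, 6, 7], lowest=True))   # (1, 0): LUBA winner bid 1
print(unique_bid_winner([1, 2, 2, 3, 6, 6, 7], lowest=False))  # (7, 6): HUBA winner bid 7
```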

The most similar studies to the present study are the first-price sealed-bid auctions

conducted by Dufwenberg and Gneezy (2000, 2002) and later by Gneezy (2005). For

example, in the experiment by Gneezy each of two bidders simultaneously selects an

integer from the set B={1, 2, … , b }. The winner choosing the lowest bid is paid a dollar amount times the integer she bids whereas the other player gets 0. The main difference from these earlier studies is that in the UBAs the winning bid must be unique. Therefore, a UBA is not guaranteed to end with a winner. In contrast, if there is a tie in the auctions studied Dufwenberg and Gneezy and by Gneezy, then the earnings are equally split between the bidders. 17

2.2 Previous Literature

In spite of the widespread popularity of the UBA around the world, both theoretical and

Virag (2007), who have offered an analysis under additional assumptions and, in addition, have used empirical data provided to them by a UBA site to test their solution. Their analysis is based on several simplifying assumptions, namely, (i) the maximum bid is set far below the value of the auctioned item; (ii) the focus is only on the probability of winning (or a tie) rather than on expected value; and (iii) the auction is repeated (or the entry fee returned) in case there is no winner. Under these assumptions they provided an exact solution for a special case of the LUBA in which the net payoff, the value of the item minus the maximum bid, is constant no matter what the winning bid is. The constant payoff greatly simplifies their problem. In Section 2.3 I construct a solution to a different and larger class of auctions without making these assumptions.

17 The uniqueness of the winning bid is not a new feature in the auction literature. For example, Rapoport and Amaldoss (2000, 2004) analyze all-pay auctions in which the strategy space is discrete and ties are counted as losses.

Östling et al. (2007) reported a second analysis of the equilibrium solution for the

LUBA.18 They report an exact solution to a related case, one that I do not consider here, where the number of players n is uncertain. Using the theory of Poisson games developed by Myerson (1998, 2000), they assume that n is a random variable that has a commonly known Poisson distribution. Additionally, in an appendix to their paper they provide a solution for the more difficult case where n is fixed and known. However, their numerical

results are restricted to the special case $n = \bar b < 8$. Using a very different approach to compute the mixed-strategy equilibrium solution, Section 2.3 describes an algorithm that is not subject to their restrictions.

Eichberger and Vinogradov (2007) suggested analytical solutions for the lowest

unique bid auction in which the number of potential bidders is fixed and commonly known.19 The major differences from the first two studies are that the outside option (i.e., not entering the auction) is available in the strategy space and, more importantly, each bidder is allowed to submit multiple bids. The former assumption implies that whether to enter the auction depends partly on the value of the entry fee per bid. Therefore, the bidders’ entry decisions are determined endogenously, and the number of active bidders (i.e., bidders who decided to enter the auction) is not necessarily the same as the number of potential bidders. Although the latter assumption makes their model closer to the lowest unique bid auctions in the real world, at the same time it makes it very difficult to derive the equilibrium solution. They have derived the equilibrium solution for a special case and left a characterization of the general solution of the game for future research.

18 More precisely, their game is a variant of the LUBA called the LUPI (lowest unique positive integer) game. In this game, unlike typical LUBAs, the winner does not need to pay her winning bid to the auctioneer.

Common to all of the three studies is an attempt to test their models with field data.20 Critical to the models of Raviv and Virag and of Östling et al., as to the present model, is the assumption that each player submits a single bid. Therefore, the number of bids is the same as the number of bidders. In contrast, unique bid auctions conducted on the Internet do not restrict the number of bids per entrant. Clearly, if a bidder submits multiple bids, the bids will necessarily differ from one another. Consequently, multiple bids by the same player cannot be considered independent, and field studies using

Internet data are therefore inappropriate for testing models postulating single bids. A second, equally serious problem with using field data is the possibility of collusion between bidders. The model of Eichberger and Vinogradov explicitly allows dependency of bids within a bidder while it still requires independence between bidders. However, it is impossible to exclude cases of collusion between bidders from field data.21 Therefore, the assumption of independence between the bidders’ decisions may not be guaranteed in the field data.

19 They call this type of auction the least-unmatched price auction (LUPA).

20 Östling et al. also subjected their model to laboratory testing. Their laboratory experiment has two important issues. First, theoretically speaking, the assumption that the number of players has a Poisson distribution cannot be replicated in the laboratory because its support is not bounded above. Therefore, no matter how much they scale down their game, there is always a positive probability that the number of players exceeds the number of available seats in their laboratory. Second, Östling et al. did not inform their subjects of the process by which the number of players in each round was determined. In contrast, their model assumes a commonly known Poisson distribution for the number of players.

The rest of the chapter is organized as follows. Section 2.3 presents and discusses the equilibrium solutions to LUBA and HUBA. It describes a computational procedure for constructing symmetric mixed-strategy equilibrium solutions to these two auctions.

Section 2.4 describes alternative procedures for implementing the UBA. Section 2.5 describes two experiments designed to study the predictive power of the equilibrium solutions for the model proposed in Section 2.3 and identify deviations from equilibrium, if any. I focus on bid patterns with the same n for both the LUBA and HUBA. I have chosen the experimental method because it can implement the assumptions of the model with precision. My purpose in both experiments is not to mimic unique bid auctions played on the Internet. Rather, the experiments are designed to isolate the effect of uniqueness of the winning bid and study it experimentally within the framework of the model proposed in Section 2.3. Section 2.6 analyzes and discusses results of the two experiments. Section 2.7 concludes the chapter.

2.3 Equilibrium Solutions

21 This is not just a theoretical objection, as collusion between bidders has been reported in the field data examined by Östling et al. Eichberger and Vinogradov also reported that their field data might have included cases in which a single bidder submitted multiple sets of bids using different identities.

I now formally present the LUBA and HUBA. Note that they differ from LUBAs and HUBAs in the real world in several details.

2.3.1 LUBA and HUBA

There are n bidders, n > 2. Each bidder chooses a single bid, which is an element of the common strategy set $B = \{\underline{b}, \underline{b}+1, \ldots, \bar{b}\}$. Bids are made simultaneously and anonymously. In the lowest unique bid auction (LUBA), the bidder making the lowest unique bid is the winner and she is paid the value of her bid, b ($b \in B$). In the highest

unique bid auction (HUBA), the bidder making the highest unique bid is the winner. If

there is no unique bid, then the auction ends with no winner. To simplify the analysis, no

entry fee is charged.22

2.3.2 Asymmetric Pure-strategy Equilibria

Both the LUBA and HUBA have multiple asymmetric pure-strategy equilibria.

With a single bid per bidder, denote by $(b_1, b_2, \ldots, b_n)$ a vector of n bids where the n bids are arranged in ascending order (i.e., $b_1 \le b_2 \le \cdots \le b_n$). Then, for a vector to be an equilibrium, it must have $b_1 = \underline{b}$ in the LUBA and $b_n = \bar{b}$ in the HUBA. There may be

other conditions, which are not described here. Below I illustrate the equilibria with

several examples.

Example. Assume that n = 7 and B = {1, 2, …, 7}. Then the following vectors of bids are in equilibrium in the LUBA (winning bids are marked with an asterisk): (1*, 2, 2, 3, 6, 6, 7), (1, 1, 2*, 3, 3, 4, 6), (1, 1, 1, 2*, 3, 4, 4), (1, 1, 1, 1, 2*, 3, 5), (1, 1, 1, 1, 1, 2*, 3), (1, 1, 2, 2, 3*, 4, 6), (1, 1, 2, 2, 2, 3*, 4), (1, 1, 1, 2, 2, 3*, 4). In the HUBA, the following vectors of bids are in equilibrium: (1, 2, 2, 3, 6, 6, 7*), (1, 1, 5*, 6, 6, 7, 7), (1, 1, 1, 6*, 7, 7, 7), (5*, 6, 6, 7, 7, 7, 7), (4*, 5, 5, 6, 6, 7, 7), (6*, 7, 7, 7, 7, 7, 7), (1, 2, 3, 4, 5, 6, 7*), (2, 2, 5, 5, 6*, 7, 7).

22 If there is an entry fee, then the auctioneer would need to refund the fee or allow bidders to repeat the auction in case of a tie. An auction under those assumptions will be solved as part of a future research agenda.
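The equilibrium property of such bid vectors can be verified directly: no bidder can strictly gain by unilaterally deviating to another bid. A minimal sketch of that check (the helper names and the counterexample vector are my own illustrations):

```python
from collections import Counter

def winner(bids, lowest=True):
    """Index of the unique-bid winner, or None if there is no unique bid."""
    counts = Counter(bids)
    unique = [b for b, c in counts.items() if c == 1]
    if not unique:
        return None
    return bids.index(min(unique) if lowest else max(unique))

def payoff(bids, i, lowest=True):
    """Bidder i earns her bid if she wins and zero otherwise."""
    return bids[i] if winner(bids, lowest) == i else 0

def is_pure_equilibrium(bids, B, lowest=True):
    """True if no bidder can strictly gain by unilaterally changing her bid."""
    for i, b in enumerate(bids):
        current = payoff(bids, i, lowest)
        for alt in B:
            if alt != b and payoff(bids[:i] + [alt] + bids[i+1:], i, lowest) > current:
                return False
    return True

B = list(range(1, 8))                                    # B = {1, ..., 7}, n = 7
print(is_pure_equilibrium([1, 2, 2, 3, 6, 6, 7], B))     # True: first LUBA example above
print(is_pure_equilibrium([2, 2, 3, 3, 4, 4, 5], B))     # False: a loser can deviate to 1 and win
```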

2.3.3 Symmetric Mixed-strategy Equilibrium

Because the bidders are assumed to be identical, it is natural to focus on symmetric mixed-strategy equilibria (SMSE). This section describes a procedure that uses non-stationary Markov chains to numerically compute the SMSE for the LUBA only.

The SMSE for the HUBA is computed in a similar way.

Denote by p a (symmetric) mixed strategy of a bidder. That is, $p = (p_{\underline{b}}, p_{\underline{b}+1}, \ldots, p_{\bar{b}})$, where $p_b$ is the probability that a bidder who uses the mixed strategy p bids $b \in B$. Let one of the n bidders be a designated bidder. The expected payoff of this bidder for each bid b is computed and used to solve for the probabilities $p_{\underline{b}}, p_{\underline{b}+1}, \ldots, p_{\bar{b}}$. Note that each of the n-1 others, as well as the designated bidder, independently chooses bids according to the probabilities $p_{\underline{b}}, p_{\underline{b}+1}, \ldots, p_{\bar{b}}$.

To construct the equilibrium probabilities, a non-stationary Markov chain is used.

Suppose that the designated bidder bids an arbitrary bid b ∈ B . To determine whether the

bid b is the winning bid, the only relevant bids of the n-1 others are the ones equal to or lower than b. The game can be viewed as an auction in which (i) an auctioneer starts at the lowest bid $\underline{b}$ and keeps raising the bid value until $\bar{b}$, (ii) the winner is the unique bidder who is the first to bid (e.g., by raising her hand), (iii) no bidder can observe the other bidders’ timing of bidding, and (iv) once the highest bid $\bar{b}$ is reached, the auction closes and the winner is announced. The game thus unfolds over time, where time equals bid value. Making a bid anywhere lower than b is referred to as bidding before

time b.

As time progresses (i.e., the bid value increases), there are fewer other players who have not bid yet. Thus, the number of the other bidders remaining, from 0 to n-1, forms a stochastic process. Also, if some value below b is bid by exactly one of the others, that bid is unique and the designated bidder loses (no win, or NW). Therefore, the bidding process can be modeled over time

(i.e., bid value) as a non-stationary Markov chain. The state space of the process is $S = \{0, 1, \ldots, n-1, NW\}$. Let $s_b \in S$ be the state at bid b. There are $|S| = n+1$ possible states at any time (i.e., at any bid value).

Denote by $P(\underline{b}-1)$ a $1 \times (n+1)$ initial vector whose elements are the probabilities over the possible states before the game starts. Before the game starts, the probability that the state equals $n-1$ is 1, i.e., $P_{n-1}(\underline{b}-1) = 1$. Therefore,

$$P(\underline{b}-1) = [\,P_0(\underline{b}-1) \;\; P_1(\underline{b}-1) \;\; \ldots \;\; P_{n-1}(\underline{b}-1) \;\; P_{NW}(\underline{b}-1)\,] = [\,0 \;\; 0 \;\; \ldots \;\; 1 \;\; 0\,].$$

For $b \ge \underline{b}$, define an $(n+1) \times (n+1)$ transition matrix $P(b-1, b)$ with elements $P_{x,y}(b-1, b) = \Pr(s_b = y \mid s_{b-1} = x)$, where

$$h_b = \frac{p_b}{p_b + p_{b+1} + \cdots + p_{\bar{b}}} = \frac{p_b}{1 - \sum_{\beta < b} p_\beta}$$

is the probability that a bidder bids b conditional on not having bid below b.23 With the rows and columns ordered $0, 1, \ldots, n-1, NW$, the elements of the transition matrix are

$$P_{x,y}(b-1, b) = \begin{cases} \dbinom{x}{x-y}\, h_b^{\,x-y} (1-h_b)^{\,y} & \text{if } x, y \in \{0, 1, \ldots, n-1\},\ y \le x,\ x-y \ne 1, \\[6pt] x\, h_b (1-h_b)^{\,x-1} & \text{if } x \in \{1, \ldots, n-1\} \text{ and } y = NW, \\[6pt] 1 & \text{if } x = y = NW, \\[6pt] 0 & \text{otherwise.} \end{cases}$$

That is, from a state with x remaining bidders, k of them bid b with binomial probability $\binom{x}{k} h_b^{\,k} (1-h_b)^{\,x-k}$; if exactly one bids b, her bid is unique and the chain moves to the absorbing state NW, and otherwise the chain moves to state $x-k$. For example, the probability of a transition from $s_{b-1} = n-1$ to $s_b = 2$ is

$$P_{n-1,2}(b-1, b) = \binom{n-1}{2}\, h_b^{\,n-3} (1-h_b)^2.$$

Then, the row vector that constitutes the probability distribution over the states at b is obtained by the following matrix multiplication:

$$P(b) = P(\underline{b}-1)\, P(\underline{b}-1, \underline{b})\, P(\underline{b}, \underline{b}+1) \cdots P(b-1, b).$$
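A brief sketch of how this transition matrix and the state-distribution recursion might be coded (the function name and the illustrative values of n and $h_b$ are my own choices):

```python
import numpy as np
from math import comb

def transition_matrix(n, h_b):
    """(n+1) x (n+1) matrix P(b-1, b) over the states {0, 1, ..., n-1, NW}:
    each of the x remaining rival bidders bids the current value with
    probability h_b; exactly one such bid is unique (designated bidder can no
    longer win -> absorbing state NW), while k = 0 or k >= 2 bids move the
    chain to state x - k."""
    NW = n
    P = np.zeros((n + 1, n + 1))
    P[NW, NW] = 1.0                          # NW is absorbing
    for x in range(n):                       # x = 0, ..., n-1 rivals remaining
        for k in range(x + 1):               # k of them bid exactly the current value
            prob = comb(x, k) * h_b**k * (1.0 - h_b)**(x - k)
            if k == 1:
                P[x, NW] += prob             # a unique rival bid at this value
            else:
                P[x, x - k] += prob
    return P

# Illustrative use with n = 5 bidders and an arbitrary h_b = 0.3:
n = 5
state = np.zeros(n + 1)
state[n - 1] = 1.0                           # P(b_min - 1): all n-1 rivals remain
state = state @ transition_matrix(n, 0.3)    # row vector P(b) = P(b-1) P(b-1, b)
```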

Recall that the designated bidder who bids b wins provided that there was no unique bidder who bid before (i.e., lower than) b. This probability can be extracted from the row vector $P(b-1)$ by reading off the probabilities of the states other than NW. For example, $P_u(b-1)$ is the probability that u bidders (of the n-1 others) bid higher than b-1 and there was no unique bidder who bid less than or equal to b-1.

Suppose that the designated bidder bids b and $s_{b-1} = u$. Then the designated bidder becomes the winner only if none of the u remaining bidders bids b. The probability that $s_{b-1} = u$ and the designated bidder wins by bidding b is therefore $(1-h_b)^u P_u(b-1)$. Hence, when each of the other bidders uses the mixed strategy p, the designated bidder’s expected payoff of bidding b is

23 To construct a transition matrix, all possible transitions from one state to another must be considered. However, it is impossible for some transitions to take place. The probability of such a transition is 0.

$$E(b, p) = b \sum_{u=0}^{n-1} (1-h_b)^u P_u(b-1).$$

To compute the SMSE, note that the behavior of bidders who bid above b does not affect the payoff of a bidder who bids at or below b. Thus, the expected payoff of bidding b is a function of the equilibrium probabilities only through $p_{\underline{b}}, p_{\underline{b}+1}, \ldots, p_b$. To determine $p_b$, the values of $p_{\underline{b}}, p_{\underline{b}+1}, \ldots, p_{b-1}$ are fixed and $p_b$ is varied. Since $p_{\underline{b}}, p_{\underline{b}+1}, \ldots, p_{b-1}$ are fixed, the designated bidder’s expected payoff of bidding b is rewritten as

$$E(b, p_b) = b \sum_{u=0}^{n-1} (1-h_b)^u P_u(b-1).$$

Notice that $E(b, p_b)$ is continuous on $[0,\, 1 - \sum_{\beta < b} p_\beta]$ and strictly decreasing in $p_b$ because the probability of a tie (i.e., of losing) increases as $p_b$ increases. This fact will be used to numerically search for $p_b$.

To find the equilibrium probabilities, the following general result is used.

Suppose that E is the equilibrium expected payoff of the game. Then, (a) $E(b, p_b) \le E$ for any bid b, (b) $E(b, p_b) = E$ if $p_b > 0$, and (c) $p_b = 0$ if $E(b, p_b) < E$.24 Since the true

value of E is unknown, the algorithm must start with an estimate of E.25 For a given value

of E, the associated probabilities $p_{\underline{b}}, \ldots, p_{\bar{b}}$ are constructed sequentially through the following algorithm, which starts at $\underline{b}$ and continues through $\bar{b}$.

Step 1 Set a value of E.

24 For proof of this general result, see sections 3.1.5, 3.4.2, and 3.4.3 in Vorob’ev (1977).

25 If the value of E is set too high, the sum of the equilibrium probabilities may be much smaller than 1. On the other hand, if the value of E is set too low, no equilibrium solution may exist.

Step 2 Consider bid b. Given $p_{\underline{b}}, \ldots, p_{b-1}$, compute $E(b, 0)$.

a. If $E(b, 0) \le E$, then keep $p_b = 0$. If $b < \bar{b}$, increase b by 1 unit and repeat Step 2. Otherwise, go to Step 3.

b. If $E(b, 0) > E$, evaluate $E(b,\, 1 - \sum_{\beta < b} p_\beta)$, where $1 - \sum_{\beta < b} p_\beta$ is the maximum feasible value of $p_b$.

i. If $E(b,\, 1 - \sum_{\beta < b} p_\beta) \le E$, then there exists $p_b$ ($0 < p_b \le 1 - \sum_{\beta < b} p_\beta$) such that $E(b, p_b) = E$, since $E(b, p_b)$ is continuous on $[0,\, 1 - \sum_{\beta < b} p_\beta]$ and strictly decreasing in $p_b$. If $b < \bar{b}$, then increase b by 1 unit and repeat Step 2. Otherwise, go to Step 3.

ii. If $E(b,\, 1 - \sum_{\beta < b} p_\beta) > E$, then the game has no solution for the given value of E. Terminate the algorithm, go to Step 1, increase E, and repeat the algorithm.

Step 3 If $1 - \sum_{b=\underline{b}}^{\bar{b}} p_b > \varepsilon$, where $\varepsilon$ specifies how close the sum of the probabilities must be to 1, then go to Step 1, decrease E, and repeat the algorithm. Otherwise, $p_{\underline{b}}, \ldots, p_{\bar{b}}$ are the equilibrium probabilities.
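A compact numerical sketch of this search; the function name, tolerances, and the use of bisection (on E and on $p_b$) are my own implementation choices rather than the dissertation's code. For the n = 50 illustration that follows, it could be called as, e.g., smse_luba(50, 1, 50).

```python
import numpy as np
from math import comb

def smse_luba(n, b_min, b_max, eps=1e-4, tol=1e-12, max_iter=200):
    """Approximate the symmetric mixed-strategy equilibrium of the LUBA with n
    bidders and bids in {b_min, ..., b_max}, following Steps 1-3 above."""

    def expected_payoff(b, h, state):
        # E(b, p_b) = b * sum_u (1 - h_b)^u * P_u(b - 1); state[u] = P_u(b - 1)
        u = np.arange(n)
        return b * np.sum((1.0 - h) ** u * state[:n])

    def propagate(state, h):
        # One step of the non-stationary Markov chain over {0, ..., n-1, NW}.
        new = np.zeros(n + 1)
        new[n] = state[n]                     # NW is absorbing
        for x in range(n):
            for k in range(x + 1):
                prob = comb(x, k) * h**k * (1.0 - h)**(x - k)
                if k == 1:
                    new[n] += state[x] * prob
                else:
                    new[x - k] += state[x] * prob
        return new

    def build(E):
        # Step 2: construct p_b sequentially for a candidate payoff E.
        p = np.zeros(b_max - b_min + 1)
        state = np.zeros(n + 1)
        state[n - 1] = 1.0                    # all n-1 others remain before b_min
        used = 0.0
        for i, b in enumerate(range(b_min, b_max + 1)):
            remaining = 1.0 - used            # maximum feasible p_b
            if expected_payoff(b, 0.0, state) <= E:
                h = 0.0                       # Step 2a: keep p_b = 0
            elif remaining <= 0.0 or expected_payoff(b, 1.0, state) > E:
                return p, False               # Step 2b-ii: E is too low
            else:
                lo, hi = 0.0, 1.0             # Step 2b-i: solve E(b, p_b) = E
                while hi - lo > tol:
                    mid = 0.5 * (lo + hi)
                    lo, hi = (mid, hi) if expected_payoff(b, mid, state) > E else (lo, mid)
                h = 0.5 * (lo + hi)
            p[i] = h * remaining
            used += p[i]
            state = propagate(state, h)
        return p, True

    lo, hi = 0.0, float(b_max)                # the equilibrium payoff lies in (0, b_max]
    for _ in range(max_iter):
        E = 0.5 * (lo + hi)                   # Step 1: candidate E
        p, feasible = build(E)
        if not feasible:
            lo = E                            # increase E
        elif 1.0 - p.sum() > eps:
            hi = E                            # Step 3: decrease E
        else:
            return E, p                       # equilibrium probabilities found
    return E, p
```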

To illustrate the equilibrium solutions to the LUBA and HUBA, consider the following examples of the two auctions. For each auction, there is a group of n=50 bidders, and each of them selects a single integer from a common strategy space B={1, 2,

… , 50}. Figure 2.1 exhibits the SMSE solutions to the two auctions, which are clearly not mirror images of each other. First, in the equilibrium solution to the LUBA (upper panel), each of the bids in B is chosen with a positive probability. In contrast, in the equilibrium solution to the HUBA (lower panel) the bids 1 through 32 are never chosen at all. Second, in the equilibrium solution to the LUBA, the probabilities pb first increase

and then decrease as the bid b increases, whereas in the solution to the HUBA they increase monotonically in b. My computations show that the expected payoffs in the

LUBA and HUBA are 0.107 and 0.835, respectively. These results suggest that in testing the equilibrium solutions experimentally, these two types of UBAs ought to be considered separately.

2.4 Alternative Implementations

It is well known that the same auction may be implemented (“framed”) in alternative ways that, in theory, are strategically equivalent (Krishna, 2002). For example, the first-price auction may be implemented in several ways, as a sealed-bid auction in which bids are placed simultaneously or as a descending-price (Dutch) auction. Bayesian Nash equilibrium theory suggests that these forms are isomorphic. In a similar way, each of the UBAs may be implemented in alternative ways. In what I call here “sequential implementation,” the market operates like a Dutch auction with the auctioneer calling $\bar{b}$ and then lowering the price of the good in discrete steps (the minimum bid increment is normalized to 1) until reaching $\underline{b}$. The major difference from the classical Dutch auction is that the n bids are not revealed until the clock reaches its minimum price $\underline{b}$. Under this implementation, the

only difference between the LUBA and HUBA is whether the lowest unique bid or

highest unique bid, respectively, wins the auction. Turocy et al. (2007) introduced the

“silent” Dutch auction and studied it experimentally.26 One of their major findings is that framing matters: market values in the “silent” Dutch implementation generally fell between those generated by the classical Dutch auction and the ones generated by the first-price sealed-bid auction. Another finding is that the two Dutch auctions, classical and “silent,” exhibited more heterogeneity across cohorts of subjects in the level of prices and in the way prices changed over time in comparison to the sealed-bid implementation.

For another implementation, I next show that an explicit prize V is not necessary in order to formulate and conduct UBAs. For each UBA with a prize V there exists a strategically equivalent auction with no exogenous prize in which the winner is awarded the value of her bid. To show this, consider a HUBA with a common strategy space

$B = \{\underline{b}, \underline{b}+1, \ldots, \bar{b}\}$ and prize V. I refer to this class of auctions as auctions with exogenous prizes. It is easy to see that this auction is strategically equivalent to a LUBA with strategy space $\tilde{B} = \{V - \bar{b}, V - \bar{b} + 1, \ldots, V - \underline{b}\}$ in which, instead of an exogenous prize V, the player choosing the lowest unique bid is awarded the value of her bid. I refer to this class of auctions as LUBAs with endogenous prizes. Similarly, consider a LUBA with exogenous prize V and strategy space $B = \{\underline{b}, \underline{b}+1, \ldots, \bar{b}\}$. It is strategically equivalent to a

HUBA with endogenous prize and strategy space $\tilde{B} = \{V - \bar{b}, V - \bar{b} + 1, \ldots, V - \underline{b}\}$ in which

the player choosing the highest unique bid is awarded the value of her bid. I refer to this

class of auctions as HUBAs with endogenous prizes. Although exogenous LUBAs

(HUBAs) are strategically equivalent to endogenous HUBAs (LUBAs) (with different strategy spaces), it is an empirical question whether they yield the same pattern of bidding behavior. The focus in this chapter is on LUBAs and HUBAs with endogenous prizes.

26 The “silent” Dutch implementation refers to the one in which a clock counts down as in the Dutch implementation, but in which the outcome of the auction is not revealed until the clock reaches the lowest price (Turocy et al., 2007).

Hereafter, all references to LUBAs and HUBAs are these types.
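As a concrete illustration of this equivalence (the numbers are hypothetical), consider an exogenous-prize HUBA with V = 100 and B = {1, ..., 25}, in which the winner purchases the prize at her bid b and thus earns V - b. Relabeling each bid as c = V - b gives

```latex
b \in \{1,\ldots,25\}\ \longmapsto\ c = 100 - b \in \{75,\ldots,99\},\qquad
\text{highest unique } b \iff \text{lowest unique } c,\qquad
V - b = c,
```

so the exogenous-prize HUBA coincides with an endogenous-prize LUBA on $\tilde{B} = \{75, \ldots, 99\}$ whose winner is simply paid her bid.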

2.5 Experimental Design

The purpose of this experiment was to test the predictive power of the SMSE in two UBAs that only differed from each other in the rule determining the winner: lowest unique bid in Condition LUBA and highest unique bid in Condition HUBA. In both conditions, a group of n=10 subjects participated in each auction and a common strategy space was B={1, 2, … , 25}.

Figure 2.2 exhibits the SMSE solutions for the two auctions, the LUBA (upper panel) and HUBA (lower panel). Similarly to Figure 2.1, Figure 2.2 shows that the equilibrium solutions to the LUBA and HUBA are not mirror images. A heuristic explanation for this is as follows. In placing her bid, a bidder in both auctions is driven by two motives, namely, to maximize the probability of choosing a winning bid and to maximize her expected payoff. Both motives operate in the same direction in the HUBA: to win the auction, the bidder wishes to place a high bid. The higher the bid she places, the higher her payoff if she wins the auction. On the other hand, these two motives operate in opposite directions in the LUBA: to win the auction, the bidder wishes to place a low bid. However, the higher the bid she places, the higher her payoff if she wins the auction. The same two forces operate in the auction studied by Gneezy (2005), where in the case of a tie the winning bidder is determined by lottery. In the auctions that he conducted, the equilibrium solution is always in pure strategies.

Subjects

One hundred University of Arizona undergraduate and graduate students

participated in this experiment. They all volunteered to take part in a group decision

making experiment for payoff contingent on performance. Male and female subjects

participated in nearly equal proportions. Subjects were run in groups of 10, five groups in

Condition LUBA and five other groups in Condition HUBA. None of the subjects was

allowed to participate in another session. A session lasted about 75 minutes. Including a

$5.00 show-up bonus, the mean payoff in Conditions LUBA and HUBA was $17.21 and

$17.02, respectively.

Procedure

All the sessions were conducted in the same way. The ten group members were

randomly seated in a large computer laboratory and handed written instructions. No

communication between the subjects was possible. The subjects were instructed that the

purpose of the experiment was to study “a new type of auction that has become quite

popular in the Internet.” Implementing a between-subject design, each session included

60 identical rounds (auctions) that were structured as follows. On each round, the subject

was asked to enter a bid by choosing one of the integers in the common strategy set B.

Bids were entered anonymously. The subjects were instructed that the winner would be

the one entering the lowest (highest) unique bid in Condition LUBA (HUBA). A winner

would earn the value of her bid, whereas non-winners would earn nothing. No

participation fee was charged.

Three screens were presented on each round. The Decision Screen listed the possible bids in B and asked each subject to choose and then enter one of them. The

Results Screen presented all the ten bids for the round, identified the winning bid (if any), and recorded the subject’s payoff for the round. Individual bidders were not identified. At any time, the subject could access a History Screen, which displayed the round number, all her previous bids from round 1 to the present round, and all the values of the previous winning bids. The experiment was self-paced.

At the end of the session, the subjects were paid in cash their cumulative earnings.

In equilibrium, including the $5 show-up bonus, the expected earnings in Conditions

LUBA and HUBA were $16.09 and $16.04, respectively. To equalize mean earnings across the two conditions, the exchange rate was set at $1.00 per 1.5 points in Condition

LUBA and 11 points in Condition HUBA.

2.6 Results

2.6.1 Aggregate/Group Level Results

Bids

Figure 2.2 exhibits side by side the observed aggregate (across sessions) relative frequency distributions of bids and the SMSE probability distributions of bids for rounds 1-30 (upper panel) and 31-60 (lower panel) of Condition LUBA. The figure shows that the SMSE solution describes the aggregate results for Condition LUBA remarkably well.

There are no systematic discrepancies between observed and predicted probabilities across the entire range of bids from 1 through 25 in both the first and last 30 rounds. The only possible exception is bid 25, which was chosen about four times as frequently as expected (compare 1.6 percent in the first 30 rounds and 1.7 percent in the last 30 rounds to the predicted 0.4 percent).

This discrepancy is mostly due to a few subjects who chose the maximum bid a disproportionally large number of times.

Figure 2.3 displays the same distributions for Condition HUBA. In sharp contrast to Condition LUBA, systematic discrepancies between observed and predicted probabilities of bids were found for Condition HUBA. In equilibrium, bids equal to or smaller than 18 should never be placed and bid 19 should be chosen only 0.1 percent of the time. In contrast, bids 1-18 accounted for 5.1 percent of all bids in the first 30 rounds and 1.6 percent in the last 30 rounds, and bid 19 for 4.1 percent in the first 30 rounds and 3.2 percent in the last 30 rounds. Figure 2.3 shows that, on the aggregate, bid values 21-25 were chosen less than predicted and bids 1-20 more than predicted.27

As a formal statistical analysis, the one-sample Kolmogorov-Smirnov (K-S) test was invoked to test the null hypothesis of SMSE play on the aggregate level for the first and last 30 rounds, separately ( df =1500). Under this null hypothesis, both subjects within a group and rounds within a subject are independent. Given the excessively large number of degrees of freedom, it is not surprising that the null hypothesis was rejected on the aggregate level for both conditions (D=0.054 for rounds 1-30 of Condition LUBA,

D=0.046 for rounds 31-60 of Condition LUBA, D=0.114 for rounds 1-30 of Condition

HUBA, D =0.076 for rounds 31-60 of Condition HUBA, p<0.01 for each). 28

27 Bid 25 was chosen more frequently than predicted in the first 30 rounds.

28 D is the K-S test statistic.
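For reference, the one-sample K-S statistic against the discrete SMSE distribution is simply the maximum absolute difference between the empirical and equilibrium CDFs over the ordered bids; a minimal sketch with hypothetical inputs (the function name and placeholder variables are mine):

```python
import numpy as np

def ks_statistic(observed_counts, equilibrium_probs):
    """D = max_b |F_emp(b) - F_eq(b)| over the ordered bids 1, ..., 25."""
    observed_counts = np.asarray(observed_counts, dtype=float)
    emp_cdf = np.cumsum(observed_counts) / observed_counts.sum()
    eq_cdf = np.cumsum(np.asarray(equilibrium_probs, dtype=float))
    return np.max(np.abs(emp_cdf - eq_cdf))

# Usage (placeholders): counts has 25 entries summing to the number of bids in
# the block, smse_probs holds the 25 equilibrium probabilities.
# D = ks_statistic(counts, smse_probs)
```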

Tables 2.1 and 2.2 display the observed relative frequency distributions of bids and equilibrium probabilities on the group (session) and aggregate levels for rounds 1-30 and 31-60 of Conditions LUBA and HUBA, respectively. On the group level, the equilibrium solution accounted well for the relative frequency distributions of bids for both the first and last 30 rounds of Condition LUBA. In contrast, the same pattern of

“stretching” the bids was observed in each of the five groups in Condition HUBA. The

K-S test was used to test the null hypothesis of SMSE play on the group level for the first and last 30 rounds, separately (df =300). In the first 30 rounds, the null hypothesis was not

rejected for three of the five groups (60%) in Condition LUBA and one of the five groups

(20%) in Condition HUBA ( p>0.05 for each condition). In the last 30 rounds, however,

the null hypothesis was not rejected for three of the five groups (60%) in both Conditions

LUBA and HUBA (p > 0.05 for each condition). Table 2.2 shows that although the

observed pattern of “stretching” the bids in the first 30 rounds did not completely

disappear in the last 30 rounds of each of the five groups, the observed distributions of

bids were more skewed to the left (i.e., higher bids) in the last 30 rounds than in the first

30 rounds. This trend is in the direction of the SMSE.

2.6.2 Individual Level Results

In Condition LUBA, the equilibrium solution accounted very well for the bidding

behavior on the aggregate and group levels. Although the systematic deviations from the

equilibrium solution on both the aggregate and group levels were observed in Condition

HUBA, the bidding behavior moved closer to the equilibrium solution in the last 30

rounds than in the first 30 rounds of the HUBA (see Figure 2.3 and Table 2.2). These results suggest two hypotheses. First, a majority of subjects may have independently randomized their bids according to the SMSE. The behavior on the aggregate and group levels could be an artifact of aggregation of the behavior of all, or a majority, of the subjects who played the SMSE. Second, in both conditions, more subjects may have followed the SMSE in the last 30 rounds than in the first 30 rounds. In what follows, however, no evidence supporting these hypotheses will be discovered.

As before, the total of 60 rounds was divided into two blocks of 30 rounds, and the K-S test was invoked to test the null hypothesis of SMSE play on the individual level for each block (df = 30). Under this hypothesis, subjects are independent of one another, as are the 30 iterations of the same auction for each subject. In other words, on each round subjects independently randomize their bids in the strategy set B according to the equilibrium probabilities. Tables 2.3 and 2.4 summarize the K-S test results for individual subjects of Conditions LUBA and HUBA, respectively (“R” stands for “Reject the null hypothesis of SMSE play” and “FR” for “Fail to reject the null hypothesis”). In the first

30 rounds, this null hypothesis could not be rejected for 29 of the 50 subjects (58%) in

Condition LUBA and 23 of the 50 subjects (46%) in Condition HUBA ( p>0.05 for each condition). In the last 30 rounds, in contrast, the null hypothesis could not be rejected for

22 of the 50 subjects (44%) in Condition LUBA and 20 of the 50 subjects (40%) in

Condition HUBA ( p>0.05 for each condition).

Figures 2.4 and 2.5 display the observed bid frequency distributions of the 50 subjects in the first and last 30 rounds of Condition LUBA, respectively. These figures show a wide variety of individual bidding patterns that defy a simple classification. Most

Some are uni-modal (e.g., subject 7 of Session 1 over the 60 rounds, subject 9 of Session

4 in the last 30 rounds). Figures 2.6 and 2.7 display the observed bid frequencies for the 50 subjects in the first and last 30 rounds of Condition HUBA. Similarly, subjects in

Condition HUBA demonstrate diverse bidding patterns. Common to both conditions is that although the bidding behavior of only a minority of the 50 subjects is characterized well by the SMSE, the bids, when combined across subjects, yield the aggregate relative frequency distributions shown in Figures 2.2 and 2.3, which are accounted for well by the SMSE.

Caution should be exercised to determine whether the subjects for whom the null hypothesis was not rejected by the K-S test have in fact followed SMSE play. Recall that

SMSE play calls for subjects to independently randomize their bids on each round according to the equilibrium probabilities. This implies that if subjects repeatedly play the same auction, on each round each subject independently and stochastically decides whether to choose a different bid from her bid in previous round. Thus, subjects who actually follow the SMSE must not only

(A) generate their bid distributions that closely match with the predicted

distribution, which can be tested by the K-S test, but also

(B) switch their bids as frequently as predicted under SMSE play.

For example, the subject who has a bid distribution that resembles the predicted distribution very closely may not play the SMSE because she can still produce such a bid distribution by switching her bids considerably fewer times than predicted under SMSE play over iterations of the same auction. This subject may meet Requirement (A) but fail to satisfy Requirement (B).

To investigate subjects’ switching behavior, and more importantly, to test whether

the subjects for whom the null hypothesis of SMSE play was not rejected by the K-S test

actually satisfied Requirement (B), I computed for each subject separately the number of

switches in bids in Tables 2.5 (Condition LUBA) and 2.6 (Condition HUBA). Denote by

w the number of switches (0 ≤w≤29). A switch occurs if a subject bids b’∈B on round t

and b’’ ∈B on round t+1 ( t=1, 2, … , 29 in the first 30 rounds and t=31, 32, ... , 59 in the last 30 rounds) and b’≠b’’. Then, I computed how many switches (out of 29

opportunities) would be observed in Conditions LUBA and HUBA under SMSE play.

Each bidder either switches or not on each of 29 pairs of adjacent rounds, independently

of the previous winning bid. Therefore, the total number of switches per bidder is

binomial. The probability of not switching between two adjacent rounds is $\sum_{i=1}^{25} p_i^2$. Then, the expected number of switches is $\mu_w = 29\left(1 - \sum_{i=1}^{25} p_i^2\right)$ and the associated standard deviation is $\sigma_w = \sqrt{29\left(1 - \sum_{i=1}^{25} p_i^2\right)\sum_{i=1}^{25} p_i^2}$. The mean number of switches in

bids was computed to be 25.74 for Condition LUBA and 23.33 for Condition HUBA.

The corresponding standard deviations were 1.70 and 2.14. Figure 2.8 shows the

predicted probability distributions of the number of switches in bid for Conditions LUBA and HUBA, respectively. The distribution for Condition LUBA is slightly skewed to the left whereas the distribution for Condition HUBA is almost normally distributed.

Pr(22 ≤ w ≤ 29) for Condition LUBA and Pr(18 ≤ w ≤ 28) for Condition HUBA are

approximately 0.99. In my analysis, I assume that a subject satisfies Requirement (B) if

her number of switches in bid falls in the 99% central interval.
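A short sketch of how the predicted switching distribution, and one possible way of extracting a central interval containing roughly 99% of its mass, could be computed from the equilibrium probabilities (the function names, the greedy trimming rule, and the placeholder `smse_probs` are my own choices; the binomial form follows the description above):

```python
from math import comb

def switch_distribution(probs, pairs=29):
    """Binomial pmf of the number of switches w over `pairs` adjacent rounds,
    where the per-pair probability of not switching is sum_i p_i^2."""
    stay = sum(p * p for p in probs)          # probability of repeating a bid
    q = 1.0 - stay                            # probability of a switch
    return [comb(pairs, w) * q**w * stay**(pairs - w) for w in range(pairs + 1)]

def central_99_interval(pmf):
    """Greedily trim the lighter tail while at least 99% of the mass remains."""
    lo, hi, mass = 0, len(pmf) - 1, sum(pmf)
    while mass - min(pmf[lo], pmf[hi]) >= 0.99:
        if pmf[lo] <= pmf[hi]:
            mass -= pmf[lo]; lo += 1
        else:
            mass -= pmf[hi]; hi -= 1
    return lo, hi

# pmf = switch_distribution(smse_probs)       # smse_probs: equilibrium p_1, ..., p_25
# print(central_99_interval(pmf))             # the text reports Pr(22 <= w <= 29) ~ 0.99 for LUBA
```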

For each condition, each of the 50 subjects was classified into the following four categories based on which requirement(s) she satisfied: subjects who satisfied both

Requirements (A) and (B), subjects who only satisfied Requirement (A), subjects who only satisfied Requirement (B), and subjects who satisfied neither of the two requirements. Tables 2.7 and 2.8 summarize the results of classification for Conditions

LUBA and HUBA, respectively. Recall that the null hypothesis of SMSE play on the individual level was not rejected by the K-S test for 29 of the 50 subjects (58%) for

Condition LUBA and 23 of the 50 subjects (46%) for Condition HUBA in the first 30 rounds. In the last 30 rounds, the corresponding number was 22 (44%) for Condition

LUBA and 20 (40%) for Condition HUBA. These subjects fall in either the category of satisfying Requirement (A) or the category of satisfying both Requirements (A) and (B).

In the first 30 rounds, the number of subjects who satisfied both requirements, i.e., played the SMSE, was 20 of the 50 subjects (40%) in each of the two conditions. In the last 30 rounds, this number declined to 15 (30%) in Condition LUBA and 13 (26%) in Condition

HUBA. Therefore, in each of the two conditions, only a minority of the 50 subjects followed the SMSE both in the first and last 30 rounds. Also, the number of those who played the SMSE decreased over time. These results yield no support for the two hypotheses suggested at the beginning of this section.

Another observation is that in each of the two conditions the number of subjects who satisfied neither of the requirements increased by about 80% in the last 30 rounds

(18 and 24 in Conditions LUBA and HUBA, respectively), compared to the corresponding number in the first 30 rounds (10 and 13 in Conditions LUBA and HUBA, respectively). One possible explanation is as follows. Subjects may have started the session by placing a different bid on almost every round. Then, as they gained more experience with the auction, they may have attempted to learn about other subjects’ bidding patterns by staying on the same bid for short sequences of rounds and exploited the information for future rounds. Therefore, they may have had a stronger inclination to choose the same bid over short sequences of rounds in the last 30 rounds than in the first

30 rounds. By sticking to the same bid over many iterations of the auction, subjects not only keep the number of switches in bid very small (i.e., violation of Requirement

(B)) but also generate their bid distributions that deviate significantly from the predicted distribution (i.e., violation of Requirement (A)).

To illustrate the switching patterns of individual subjects, Figures 2.9 and 2.10 portray the 60 bids of each of the 50 subjects in Conditions LUBA and HUBA, respectively. Each individual graph plots the bids (y-axis) by round (x-axis). In Condition

LUBA, the number of subjects who switched as many times as predicted under equilibrium play was 31 (62%) and 25 (50%) of the 50 subjects in the first and last 30 rounds, respectively. In Condition HUBA, the corresponding number is 34 (68%) in the first 30 rounds and 19 (38%) in the last 30 rounds. In both conditions, subjects showed a strong tendency to switch fewer times in the last 30 rounds than in the first 30 rounds.

Some subjects switched remarkably fewer times than predicted. For example, Subject 7 of

Session 1 in Condition LUBA switched only 6 times in the first 30 rounds and never switched in the last 30 rounds (see Table 2.5). This subject continued choosing bid 3 from round 7 until the end of the session. Her switches in bid occurred in the first 6 pairs of adjacent rounds. Similarly, subject 2 of Session 4 in Condition HUBA switched only 5 times in the first 30 rounds and never switched in the last 30 rounds.

What affected the bidding behavior of individual subjects during the experiment?

As seen before, some subjects stuck to the same bid for a long period of rounds (e.g.,

Subject 7 in Session 1 of Condition LUBA). At the same time, the data show that some subjects tended to choose a higher (lower) bid if the winning bid of the previous round was high (low), which is also documented by Östling et al. (2007). To analyze to what extent the subjects’ bidding behavior in the current round depended on the previous winning bids, I conducted a fixed-effects regression separately for each of the five sessions of each condition. As the independent variables I included lagged values of the winning bid.29
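A sketch of how such a fixed-effects specification with lagged winning bids might be estimated, assuming a long-format data set with hypothetical column names (`subject`, `bid`, `win_lag1`, `win_lag2`); subject fixed effects enter as dummies:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject and round within a
# session, with the subject's bid and the winning bids of the previous one
# and two rounds.
df = pd.DataFrame({
    "subject":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "bid":      [5, 7, 6, 3, 4, 4, 9, 8, 7],
    "win_lag1": [4, 6, 5, 4, 6, 5, 4, 6, 5],
    "win_lag2": [3, 4, 6, 3, 4, 6, 3, 4, 6],
})

# Subject fixed effects are included via the categorical term C(subject).
model = smf.ols("bid ~ win_lag1 + win_lag2 + C(subject)", data=df).fit()
print(model.params)
```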

Table 2.9 reports the regression results. No general trend in the bidding behavior of subjects was observed in either condition. In Condition LUBA, the bidding behavior of subjects in Sessions 2, 3, and 4 showed a significant dependency on the winning bid of the previous round; they tended to submit higher bids when the winning bid in the previous round was high. The twice lagged winning bid had a significant effect on the bidding

behavior of subjects in Session 1. No influence of the past winning bids on subjects’ current bidding behavior was observed in Session 5.

29 The current round number was not included as a control for a time trend because neither a systematic nor a replicable individual bidding pattern was discernible over the 60 rounds (see Figures 2.9 and 2.10).

In Condition HUBA, similarly to Condition LUBA, in some sessions subjects tended to choose a higher (lower) bid when the winning bid was high (low) in the previous round (e.g., Sessions 2, 4, and 5). Subjects’ bidding behavior was also influenced by the twice lagged winning bid in Sessions 4 and 5. There was no significant effect of the past winning bids on the bidding behavior of subjects in Sessions 1 and 3.

2.6.3 Discussion

By analyzing the bid patterns of subjects in the first and last 30 rounds separately

for LUBA and HUBA, it is observed that a majority of the subjects deviated significantly

from SMSE play. Rather, only a minority of the subjects followed SMSE play, and the

number of such subjects became even smaller in the last 30 rounds. The subjects who

deviated from equilibrium play did so by switching their bids between rounds less

frequently than predicted, and this tendency was on average strengthened in the last 30

rounds. Not even a single subject placed the same bid in all 60 rounds; the lowest number

of switches per subject that was recorded is 6 in Condition LUBA (subject 7 of Session 1)

and 5 in Condition HUBA (subject 2 of Session 4). Both stayed on the same bid on the

last 30 rounds. Rather than switching their bid on almost every round, most subjects often

placed the same bid for short sequences of rounds, perhaps in an attempt to discover the

ever changing patterns of bids and then exploit this information by best responding with a

different bid.

On the group level, and even more so across all the groups, the equilibrium

solution accounted for the distribution of bids in the LUBA very well. I observe

heterogeneous patterns of bidding behavior on the individual level coupled with

systematic and replicable patterns of bidding on the group and aggregate levels that seem

to differ very little from equilibrium play. This is no longer the case when the rule for

winning is changed by choosing the highest, rather than lowest, unique bid. When

participating in the HUBA, the subjects deviated from the equilibrium solution by

occasionally bidding below the predicted minimum bid. This tendency of stretching the

bids was somewhat weakened over time, which resulted in a smaller discrepancy between

the bidding behavior on both the aggregate and group levels and the equilibrium solution

in the last 30 rounds.

2.7 Conclusion

Chapter 2 explored a recently introduced format called the unique bid auction. In sharp contrast to traditional auctions, the winning bid must be unique; the winning bid is the lowest (highest) unique bid in the lowest (highest) unique bid auction. This new feature apparently severs the connection between the winning bid and the value of the prize, which may lead people to view unique bid auctions as lotteries. The unique bid auction has been gaining popularity around the world.

I have constructed the equilibrium solutions for the LUBA and HUBA, which, as presented in Section 2.3, have several details not shared by real unique bid auctions.

Rather, the games were framed as unique bid auctions with no participation fee in which the winner is paid her bid. The solutions assume that the number of bidders is commonly known. To achieve tractability, the restriction was imposed that each player can only place a single bid. I have presented a procedure for numerically computing the probability distribution of bids that can be used with both the LUBA and HUBA.

Theoretically, it is not limited by the number of players or the number of strategies.

However, it is restricted in practice mostly by the number of players, as computation time increases exponentially in n.

Two experiments were conducted, namely Conditions LUBA and HUBA, that

differed from one another with respect to how to determine the winning bid. Taken

together, these experiments resulted in three major findings. First, only a minority of the

subjects generated sequences of bids across iterations of the auction that did not deviate

significantly from mixed-strategy equilibrium play. The major reason for deviating from

equilibrium play was the inclination of some subjects to repeat the same bid too

frequently. Second, in most cases the bidding behavior on the group and aggregate level

for the LUBAs did not deviate significantly from equilibrium play. Similar results of

heterogeneous patterns of bidding behavior on the individual level coupled with

systematic and replicable behavior on the aggregate level that adheres to the symmetric

mixed-strategy equilibrium have been reported in previous studies of market entry

behavior (e.g., Rapoport, Seale, & Winter, 2002; Seale & Rapoport, 2000) and arrival

times in single-server queues (Rapoport et al., 2004). Third, subjects’ bidding behavior

on the group or aggregate level for the HUBAs did deviate significantly from equilibrium

play due to a minority of bids that were placed below the values predicted to be chosen

by the equilibrium solution.

These findings suggest three directions in which additional experimental research on unique bid auctions might proceed. The first direction is to test the difference between the LUBA and HUBA more extensively by using different group sizes and different strategy spaces. A second direction is to frame the experimental games as auctions with exogenous prizes or as Dutch auctions with no observability of bids. The results on private value auctions reported by Turocy et al. (2007), who tested and consequently rejected the null hypothesis that alternative framings of strategically equivalent games as first-price sealed-bid auctions and Dutch auctions result in the same bidding behavior, suggest that the particular framing of auctions matters. A third direction is to endogenize the number of bidders by charging a participation fee.

CHAPTER 3: DISCRETE BOTTLENECK GAMES

3.1 Introduction

The seminal paper on urban traffic congestion by Vickrey (1969) assumes that

congestion on a single road takes on the form of multiple cars queueing behind a

bottleneck. Vickrey’s major contribution has been to endogenize the departure time

decisions and to let the evolution of congestion over the rush hour be determined within

the model (Arnott et al., 1998). Vickrey considered a situation, quite typical of morning

rush hour, where a fixed and very large number of identical commuters have to travel

from a single origin (e.g., home) to a single destination (e.g., work) along a single road.

This road has a single bottleneck with a fixed and commonly known capacity. If the

arrival rate at the bottleneck exceeds its capacity, a queue forms. Although all the

commuters wish to arrive at the common destination at the same time, this is not

physically possible because the bottleneck capacity is finite. Consequently, some

commuters must arrive early and incur the costs of waiting whereas others may arrive late

and pay the penalty for doing so. As noted by Arnott et al. (1990, 1998), in determining

her departure time each commuter faces a tradeoff between journey time and schedule

delay (early or late arrival at her destination). She can choose to depart in the tails of the

rush hour when journey time is relatively low and incur the cost of arriving at work early

or late. Alternatively, she can choose to depart at the peak hour when travel time is

relatively high but schedule delay costs are low.

Vickrey’s bottleneck model was independently formulated by Hendrickson and

Kocur (1981), and subsequently extended by Smith (1983), Daganzo (1985), and the influential papers by Arnott et al. (1990, 1993). In all of these formulations, the commuters are treated as a continuum. In making this assumption, these researchers have followed a common practice in transportation science and economics to use continuous models for analyzing phenomena that are essentially discrete. Quite often, but not always, the predictions derived from the continuous model provide good approximations to the phenomena that are discrete in nature. But in some cases (e.g., Swarthout and Walker,

2007), differences between the continuous and discrete versions of the same model may matter. Swarthout and Walker give as an example the simple Cournot model in which the continuous version of the model yields a unique equilibrium whereas the discrete version may have several pure-strategy equilibria. A second, more dramatic example is of a single economy with a public good in which the well-known mechanism proposed by

Groves and Ledyard (1977) is used to determine how much each participant will pay to finance the public good and, consequently, how much of the public good will be provided.

Swarthout and Walker show that in this case the correspondence between the continuous and discrete versions of the same model fails. In the case of continuous strategy spaces, the mechanism has a unique Pareto optimal equilibrium. But when the strategy spaces are discrete, in general the mechanism has multiple pure-strategy equilibria, only a small fraction of which are Pareto optimal. They conclude that one could easily go astray using continuous models to predict outcomes in discrete implementations.

But the issue is not only the goodness of the approximation. Whereas roadway congestion has normally been examined in contexts including thousands of commuters, where the effect of each commuter is negligible, congestion may involve a relatively small number of commuters who cause negative externalities.

Daniel et al., 2007; Schneider and Weimann, 2004; Stein et al., 2007; Ziegelmeyer et al.,

2008). These laboratory experiments only study a relatively small number of participants.

In the present formulation, which continues previous research on the micro-foundations of congestion in traffic networks with endogenous arrivals (Levinson, 2005; Zou and

Levinson, 2006), traffic congestion is modeled as a non-cooperative n-person game with identical commuters and a finite strategy space. Equilibrium obtains when no commuter has an incentive to alter her departure time, given that all the other commuters adhere to equilibrium play. Because schedule delay cannot be the same for all commuters, they must adjust their travel time over the rush hour to satisfy the equilibrium condition.

The motivation for this work is twofold. First, as mentioned earlier, the continuous and discrete versions of the same model yield different results. I wish to compare the two formulations in order to determine how good the continuous approximations are to the associated discrete case. Second, I wish to develop a model to account for the experimental implementation of the bottleneck game in the laboratory.

Whether in an experiment or even in naturally occurring settings (see, e.g., Levinson,

2005), in order to implement a mechanism or test a theory one has to use a discrete strategy space and a finite number of commuters. Moreover, as the number of participants in experiments and in some traffic congestion applications (see above) is typically small, the approximations provided by the continuous model may not be satisfying.

The rest of this chapter is organized as follows. Section 3.2 introduces notation and then describes Vickrey’s continuous model and the deterministic equilibrium solution constructed by Arnott et al. (1990). I do not review previous research that formulated the bottleneck situation as a continuous model. Rather, in Section 3.3 I present and briefly discuss three recent papers by Levinson (2005), Zou and Levinson (2006), and Ziegelmeyer et al. (2008) that focus on the discrete version of the bottleneck model. In Section 3.4, which constitutes the main section of the chapter, a numerical procedure is presented for computing a symmetric mixed-strategy equilibrium solution for the discrete version of the bottleneck model. Using a non-stationary Markov chain approach, I conclude this section with the presentation of an algorithm for computing the equilibrium probabilities. Section 3.5 compares the continuous and discrete versions of the bottleneck model in terms of travel time and travel cost. In Section 3.6, the discrete model is extended to the case where the number of commuters is a random variable whose distribution is commonly known, and to a second case where an alternative transportation mode that is not subject to congestion is available. The equilibrium solutions to these two extensions are computed and illustrated. Section 3.7 concludes with a brief discussion.

3.2 Vickrey’s Continuous Bottleneck Model

In the model of Arnott et al. (1990), a fixed number, n, of identical commuters travel every morning from home (O, the origin) to work (D, the destination). They do so along a single road with a bottleneck. In this model, commuters are treated as a continuum of measure n. All of them wish to arrive at the same destination at time t*. Travel is not congested except at the bottleneck, at which at most s commuters can pass per unit time. If the rate of arrival at the bottleneck exceeds s, then a queue develops behind the bottleneck.

Travel time from O to D is denoted by T_f + T(t), where T_f is the fixed component of travel from O to D. Without loss of generality, assume that T_f = 0, implying that a commuter arrives at the bottleneck as soon as she leaves home, and arrives at work as soon as she leaves the bottleneck. T(t) is the waiting time at the bottleneck, and t is the departure time from home. Let D(t) denote the length of the queue at time t. Then,

$$D(t) = \int_{\hat{t}}^{t} r(u)\,du - s(t - \hat{t}),$$

where r(t) is the departure rate function and t̂ is the most recent time at which there was no queue. The travel time of a commuter departing at time t is computed from

$$T(t) = \frac{D(t)}{s}.$$

In words, the commuter's travel time equals the queue length at the time she joins the queue divided by the service rate of the bottleneck. This equation also implies that she passes through the bottleneck instantaneously once her turn comes.

Following Vickrey (1969), Arnott et al. assume that the cost of the trip, denoted by C, is linear in journey time and schedule delay (travel time cost plus time-early cost plus time-late cost):

$$C(t) = \alpha T(t) + \beta \max\{0,\, t^* - (t + T(t))\} + \gamma \max\{0,\, (t + T(t)) - t^*\},$$

where, as defined earlier, t* is the desired arrival time.^{30} Each commuter independently chooses a departure time, t, to minimize her travel cost.

Equilibrium obtains when no commuter has an incentive to unilaterally alter her departure time. Arnott et al. show that in equilibrium the commuters depart at a piecewise constant rate given by

$$r(t) = \begin{cases} \dfrac{\alpha s}{\alpha - \beta} & \text{for } t \in [t_F, t_O) \\[2mm] \dfrac{\alpha s}{\alpha + \gamma} & \text{for } t \in (t_O, t_L], \end{cases}$$

where t_F and t_L are the times at which the first and the last commuters depart, respectively, and t_O is the time such that t_O + T(t_O) = t*. Solving the following equations simultaneously yields t_F, t_L, and t_O:

^{30} In accordance with empirical results by Small (1982), Arnott et al. assume that γ > α > β. The assumption that γ > α is not required to assure existence of a pure-strategy equilibrium.

$$t_L - t_F = \frac{n}{s},$$
$$\beta(t^* - t_F) = \gamma(t_L - t^*),$$
$$t_O + \frac{\beta}{\alpha - \beta}(t_O - t_F) = t^*.$$

The first equation specifies that the length of congestion is n/s, the second equation states that the travel costs of the first and last commuters are equal, and the last equation follows from the definition of t_O. Then,

$$t_F = t^* - \left(\frac{\gamma}{\beta + \gamma}\right)\left(\frac{n}{s}\right),$$
$$t_L = t^* + \left(\frac{\beta}{\beta + \gamma}\right)\left(\frac{n}{s}\right),$$
$$t_O = t^* - \left(\frac{\beta\gamma}{\alpha(\beta + \gamma)}\right)\left(\frac{n}{s}\right).$$

The travel cost (C) of departing at t_F is β(t* − t_F) because there is no congestion at the bottleneck at that time. Therefore, the travel cost is

$$C = \left(\frac{\beta\gamma}{\beta + \gamma}\right)\left(\frac{n}{s}\right).$$

Since all the commuters have the same travel cost of departing at any time between t_F and t_L, the total travel cost (TC) is given by

$$TC = nC = \left(\frac{\beta\gamma}{\beta + \gamma}\right)\left(\frac{n^2}{s}\right).$$

The total travel time (TTT) is computed as

$$TTT = \int_{t_F}^{t_L} D(t)\,dt = \left(\frac{\beta\gamma}{2\alpha(\beta + \gamma)}\right)\left(\frac{n^2}{s}\right) = \frac{TC}{2\alpha}.$$

Therefore, the total travel time cost (TTC) is given by

$$TTC = \alpha \times TTT = \frac{TC}{2}.$$

Brief comments on the results obtained by Arnott et al. are in order. First, notice that t_F, t_L, C, TC, and TTC are all independent of α. Recall that the first and the last commuters never encounter congestion at the bottleneck. Thus, any change in α does not influence their departure times, namely t_F and t_L, or their associated travel costs. Since all the commuters must have the same travel cost as the first and the last commuters, C and TC must be independent of α. The total travel time is proportional to 1/α, so the total travel time cost remains independent of α. Second, in the model of Arnott et al., the total travel time cost is half of the total travel cost. Third, the equilibrium is deterministic; because the commuters are treated as a continuum and time is continuous, they can choose their departure times in such a way that the travel cost of each commuter is constant over the rush hour.
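As a quick numerical check on the closed-form expressions above, the following minimal Python sketch (my own illustrative code, not part of the original analysis) computes t_F, t_L, t_O, C, TC, TTT, and TTC for an arbitrary parameterization; the values used in the example call mirror the parameters employed later in Section 3.5 (n=10, s=1, α=1, β=0.6, γ=2.4, t*=50).

```python
# Sketch: closed-form quantities of the continuous (Vickrey/Arnott et al.) bottleneck model.
# Parameter values in the example call are illustrative only.

def vickrey_continuous(n, s, alpha, beta, gamma, t_star):
    t_F = t_star - (gamma / (beta + gamma)) * (n / s)                    # first departure time
    t_L = t_star + (beta / (beta + gamma)) * (n / s)                     # last departure time
    t_O = t_star - (beta * gamma / (alpha * (beta + gamma))) * (n / s)   # on-time departure time
    C   = (beta * gamma / (beta + gamma)) * (n / s)                      # travel cost per commuter
    TC  = n * C                                                          # total travel cost
    TTT = TC / (2 * alpha)                                               # total travel time
    TTC = alpha * TTT                                                    # total travel time cost (= TC/2)
    return dict(t_F=t_F, t_L=t_L, t_O=t_O, C=C, TC=TC, TTT=TTT, TTC=TTC)

if __name__ == "__main__":
    print(vickrey_continuous(n=10, s=1, alpha=1.0, beta=0.6, gamma=2.4, t_star=50))
```

For these parameter values the sketch returns C = 4.8 and t_F = 42, which are the continuous-model benchmarks against which the discrete solution is compared in Section 3.5.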

3.3 Review of Previous Literature

Only a few recent studies have attempted an equilibrium analysis of the discrete version of the bottleneck model. Levinson (2005) studied two non-cooperative games: one with two commuters who only have three choices of departure time (early, on-time, late), and the other with three commuters who only have six choices of departure time (very early, early, on-time, late, really late, super late). In a subsequent study, Zou and Levinson (2006) extended the previous model to a non-cooperative n-person game based on several simplifying assumptions. Because of the computational difficulties that they encountered, their analysis rests on the restrictive assumption that the number of commuters, n, is smaller than or equal to seven. Zou and Levinson also limited the number of strategies (departure times) to n+1. Common to both studies are the following assumptions: (i) the service capacity per unit of time is a single commuter, (ii) ties are broken randomly among commuters arriving at the same time, and (iii) the analysis is restricted to pure-strategy Nash equilibria.^{31}

The study of Ziegelmeyer et al. (2008) is the most closely related to the present study. To the best of my knowledge, Ziegelmeyer et al. are the first to construct a symmetric mixed-strategy Nash equilibrium of the discrete version of the bottleneck model. They first characterized pure-strategy Nash equilibria, and then constructed symmetric mixed-strategy equilibrium solutions for their laboratory experiments. Just as in the first two studies by Levinson and by Zou and Levinson, Ziegelmeyer et al. assume in their main study that the service capacity per unit of time is a single commuter. This is a restrictive assumption, which is relaxed in the next section, that considerably reduces computational complexity. A special feature of their model is the handling of ties. Suppose that a commuter arrives at the bottleneck with j other commuters at time t. Then, in order to pass through the bottleneck, each of the (j+1) commuters is assumed to spend (j+1) units of time if the bottleneck is not congested at all at time t, and (j+1+v) units of time if v other commuters are waiting in a queue behind the bottleneck at time t. The present model does not impose any restriction on the service capacity. Also, in contrast to Ziegelmeyer et al., it invokes the more natural assumption that ties are broken randomly with equal probability among the commuters who arrive at the bottleneck simultaneously.

^{31} Nash (1951) proved that every game with a finite number of players and a finite strategy space possesses at least one equilibrium in pure or mixed strategies. A pure-strategy Nash equilibrium may not exist in some finite games (e.g., the Matching Pennies game). See, e.g., Osborne and Rubinstein (1994).

3.4 Discrete Bottleneck Game

3.4.1 Model

Vickrey’s model of departure time (as elaborated by Arnott et al., 1990, 1993) is formulated as follows. There are n identical commuters who travel along a single road connecting a common origin O and a common destination D. Each commuter independently and simultaneously chooses a departure time, t ∈ {1, …, t*, …, t_max}. As before, travel is assumed to be uncongested anywhere except at a single segment of the road called a bottleneck. A first-come first-served (FCFS) queue discipline is applied at the bottleneck. Denote by s (>0) the service capacity per unit of time. If s ≥ 1, then at most s commuters are served per unit time. If s < 1, service capacity is constrained to take values of the form s = 1/d, where d is an integer larger than 1. Then, only a single commuter is served at a time, and it takes each commuter d units of time to pass through the bottleneck.^{32} It is assumed that, for any s, if multiple commuters arrive at the bottleneck simultaneously, then they are served in a random order with equal probability.

^{32} The parameter d can be interpreted as the service time per commuter, i.e., the number of units of time for one commuter to pass through the bottleneck.

The travel cost of a commuter departing at time t consists of three types of cost: travel time cost, early arrival cost, and late arrival cost. The linear cost structure of Arnott et al. is maintained: a commuter's travel cost is linear in travel time, early arrival time, and late arrival time.

Table 3.1 presents two examples, one for s=1/3 (i.e., d=3) and the other for s=3, that illustrate the discrete bottleneck game and the computation of the travel costs. Let n=10, α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60}, and t*=50. For both cases, the departure times, travel time, arrival time, and travel cost are listed in columns 2, 3, 4, and 5, respectively. In the example for s=1/3 (top panel), there is a tie among commuters 2, 3, and 4, who departed home at time 34. This tie was broken randomly (with probability 1/3), so that commuter 2 was the first to be served, commuter 3 was the second, and commuter 4 was the third. Consequently, commuters 3 and 4 had to spend 6 and 9 units of time at the bottleneck, respectively, while commuter 2 only spent 3 units of time. Commuters 1 and 2 never encountered congestion at the bottleneck. On the other hand, commuter 10 had to join a queue formed by commuters 7, 8, and 9 when she arrived.

In the example for s=3 (bottom panel), four commuters departed at time 48, and six others at time 49. The first and second ties were randomly broken with probabilities 1/4 and 1/6, respectively. Among the first four commuters, commuter 4 was randomly selected to wait for an additional time unit and pass through the bottleneck with commuters 5 and 6, who departed at time 49. Commuters 7, 8, and 9 had to wait in the queue for an additional unit of time and then travel through the bottleneck together. Although commuter 10 departed at time period 48, she had to wait in line for 2 units of time before being served.
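To make this bookkeeping concrete, the following Python sketch (my own illustrative code, not the software used to produce Table 3.1) reproduces the kind of computation underlying the bottom panel: commuters are processed first-come first-served, ties at the same departure time are ordered uniformly at random, and each commuter's travel time, arrival time, and cost are then read off. The timing convention is an assumption on my part: up to s queued commuters clear per period, a commuter clearing in period τ arrives at τ+1, so an uncongested commuter has travel time 1, as in the s≥1 formulas of Section 3.4. Only integer capacities s ≥ 1 are handled.

```python
import random

def simulate_bottleneck(departures, s, alpha, beta, gamma, t_star, rng=None):
    """FCFS bottleneck with integer capacity s per unit time and random tie-breaking.

    departures[i] is the departure time of commuter i.
    Returns, for each commuter, a tuple (departure, travel_time, arrival, cost).
    """
    rng = rng or random.Random()
    # FCFS order: sort by departure time, breaking ties uniformly at random.
    order = sorted(range(len(departures)),
                   key=lambda i: (departures[i], rng.random()))
    results = [None] * len(departures)
    next_free, served_this_period = min(departures), 0
    for i in order:
        t = departures[i]
        period = max(next_free, t)          # first period in which she can be served
        if period > next_free:              # bottleneck was idle: restart the per-period counter
            next_free, served_this_period = period, 0
        arrival = period + 1                # clears the bottleneck at the end of that period
        served_this_period += 1
        if served_this_period == s:         # capacity for this period exhausted
            next_free, served_this_period = period + 1, 0
        travel = arrival - t
        cost = (alpha * travel
                + beta * max(0, t_star - arrival)
                + gamma * max(0, arrival - t_star))
        results[i] = (t, travel, arrival, cost)
    return results

# Example in the spirit of the bottom panel of Table 3.1: four departures at 48, six at 49, s=3.
example = [48, 48, 48, 48, 49, 49, 49, 49, 49, 49]
for row in simulate_bottleneck(example, s=3, alpha=1, beta=0.6, gamma=2.4, t_star=50):
    print(row)
```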

3.4.2 Computational Procedure

Depending on its parameter values, the discrete bottleneck game may or may not possess pure-strategy Nash equilibria. If pure-strategy Nash equilibria exist, then they are asymmetric with some commuters departing early and others departing late. 33 Because the n commuters are assumed to be identical, the model focuses on symmetric equilibria.

Dasgupta and Maskin (1986) prove in their Lemma 6 that a finite symmetric game possesses a symmetric mixed-strategy Nash equilibrium (SMSE). Therefore, there exists at least one SMSE in the discrete bottleneck game.

Denote by p a (symmetric) mixed strategy of a commuter. That is, p = (p_1, p_2, …, p_{t_max}), where p_t is the probability that a commuter who uses the mixed strategy p departs from the origin (i.e., arrives at the bottleneck) at time t. Let one of the n commuters be a designated commuter. The expected travel cost for this commuter for each departure time t is computed and used to solve for the equilibrium probabilities p_1, p_2, …, p_{t_max}. Note that each of the n−1 other commuters is assumed to independently use the mixed strategy p.

To construct the equilibrium probabilities, once again I use a non-stationary Markov chain. Using this indirect approach, it is possible to compute the mixed-strategy equilibrium for a considerably larger number of commuters than in the previous studies by Levinson (2005), Zou and Levinson (2006), and Ziegelmeyer et al. (2008). To this end, I need to define a proper state space that describes the stochastic nature of the traffic network. Two cases are considered: s<1 and s≥1.

^{33} In general, it is too complicated to fully characterize the set of pure-strategy Nash equilibria of the discrete bottleneck game. Doing so may require certain restrictions on the set of strategies (i.e., departure times) and parameter values. Ziegelmeyer et al. (2008) identify the set of pure-strategy Nash equilibria under suitable restrictions on parameter values.

Case 1: s<1. As defined earlier, the service capacity takes values of the form s = 1/d, where d is an integer larger than 1. This means that the service time per commuter is d units of time. Note that at any time t, commuters can be in one of four locations in the system, namely, at the origin, in a queue behind the bottleneck (i.e., waiting for service), within the bottleneck (i.e., receiving service), or at the destination. Denote by Ω the set of possible states and by ω_t ∈ Ω a state at time t (more precisely, a state of the system immediately after all movements of the commuters have occurred at time t). Each state is a vector of three elements. The first and the second elements specify the numbers of commuters (out of the n−1 other commuters) at the origin and in a queue behind the bottleneck, respectively. They take on integer values from 0 to n−1. The third element keeps track of the time periods elapsed since the last commuter was served in the bottleneck; it takes on one of the integer values 0, 1, …, d−1. The value 0 indicates that no commuter is currently being served, whereas d−1 indicates that d−1 units of time have elapsed since the last commuter was served. For example, suppose that d=5 and ω_{t−1} = [4 2 4]. If none of the four commuters at the origin departs at time t, then the state at t is ω_t = [4 2 0]. Note that the number of possible states is computed from^{34}

$$|\Omega| = \frac{n(n+1)}{2} + (d-1)\,\frac{(n-1)n}{2}.$$

^{34} If the third element is zero, it implies that none of the n−1 commuters is being served in the bottleneck. Then, the number of possible states is n(n+1)/2. On the other hand, if the third element takes an integer value larger than zero (there are d−1 such cases), then one of the n−1 commuters is being served. In this case, the number of possible states is (d−1)(n−1)n/2. Hence, the total number of possible states is n(n+1)/2 + (d−1)(n−1)n/2.

Suppose that ω_{t−1} = [u v e], i.e., at time t−1, u commuters were at the origin, v commuters were waiting in a queue behind the bottleneck, and e units of time had elapsed since a commuter was served. Suppose that the designated commuter and j commuters (out of u) depart from the origin (i.e., arrive at the bottleneck) at time t. Let k denote the number of commuters (out of j) served before the designated commuter. Then, the designated commuter's travel time of departing at time t, given ω_{t−1} and k, T(t | ω_{t−1}, k), is computed from one of the following eight cases:

$$T(t \mid \omega_{t-1}, k) = \begin{cases}
d & \text{if } u = 0,\ v = 0,\ e = 0 \\
dv + (d-1) & \text{if } u = 0,\ v > 0,\ e = 0 \\
d(v+1) + (d-e-1) & \text{if } u = 0,\ \text{any } v,\ 0 < e < d-1 \\
d(v+1) & \text{if } u = 0,\ \text{any } v,\ e = d-1 \\
d(k+1) + (d-1) & \text{if } u > 0,\ v = 0,\ e = 0 \\
d(v+k) + (d-1) & \text{if } u > 0,\ v > 0,\ e = 0 \\
d(v+k+1) + (d-e-1) & \text{if } u > 0,\ \text{any } v,\ 0 < e < d-1 \\
d(v+k+1) & \text{if } u > 0,\ \text{any } v,\ e = d-1
\end{cases}$$
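A direct transcription of these eight cases into code may help the reader check particular states. The sketch below is merely an illustrative Python rendering of the formula above (the function name and argument layout are my own), not the program actually used in the computations.

```python
def travel_time_case1(d, u, v, e, k):
    """Travel time T(t | omega_{t-1}, k) for the case s = 1/d.

    u, v, e: state at t-1 (commuters at origin, commuters queued, elapsed service time).
    k: number of the simultaneous arrivals served before the designated commuter.
    Transcribes the eight cases of the formula above.
    """
    if u == 0:
        if v == 0 and e == 0:
            return d
        if v > 0 and e == 0:
            return d * v + (d - 1)
        if 0 < e < d - 1:
            return d * (v + 1) + (d - e - 1)
        return d * (v + 1)                    # e == d - 1
    else:
        if v == 0 and e == 0:
            return d * (k + 1) + (d - 1)
        if v > 0 and e == 0:
            return d * (v + k) + (d - 1)
        if 0 < e < d - 1:
            return d * (v + k + 1) + (d - e - 1)
        return d * (v + k + 1)                # e == d - 1
```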

Case 2: s≥1. In this case, at most s commuters can pass through the bottleneck per unit time. A state of the system can be represented by a vector that has only two elements: the first and second elements state the numbers of commuters (out of the n−1 other commuters) at the origin and in a queue behind the bottleneck, respectively. For example, suppose that s=2 and ω_{t−1} = [3 3]. If none of the three commuters at the origin departs at time t, then the state of the system at time t is ω_t = [3 1]. Since each element takes an integer value from 0 to n−1, and the sum of the two elements cannot exceed n−1, the number of possible states is

$$|\Omega| = \frac{n(n+1)}{2}.$$

Suppose that ω_{t−1} = [u v], and that the designated commuter and j commuters (out of u) depart from the origin (i.e., arrive at the bottleneck) at time t. Let k denote the number of commuters (out of j) waiting before the designated commuter. Define by g(a) the function that rounds a number a to the nearest integer greater than or equal to a. Then, the designated commuter's travel time of departing at t, given ω_{t−1} and k, T(t | ω_{t−1}, k), is given by one of the following four cases:

$$T(t \mid \omega_{t-1}, k) = \begin{cases}
1 & \text{if } u = 0,\ v \le s \\[1mm]
g\!\left(\dfrac{v - s + 1}{s}\right) & \text{if } u = 0,\ v > s \\[2mm]
g\!\left(\dfrac{k + 1}{s}\right) & \text{if } u > 0,\ v \le s \\[2mm]
g\!\left(\dfrac{v - s + k + 1}{s}\right) & \text{if } u > 0,\ v > s
\end{cases}$$

For example, suppose that s=2, ω_{t−1} = [4 5], and k=3. If the designated commuter departs at t, then her travel time is g((5 − 2 + 3 + 1)/2) = g(3.5) = 4 units of time.
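The four cases for s≥1 translate just as directly. In the sketch below (again an illustrative transcription of my own, with math.ceil playing the role of g), the worked example above is reproduced as travel_time_case2(s=2, u=4, v=5, k=3) = 4.

```python
import math

def travel_time_case2(s, u, v, k):
    """Travel time T(t | omega_{t-1}, k) for the case s >= 1.

    u, v: state at t-1 (commuters at origin, commuters queued).
    k: number of the simultaneous arrivals served before the designated commuter.
    g(a) = smallest integer greater than or equal to a, implemented as math.ceil.
    """
    if u == 0:
        return 1 if v <= s else math.ceil((v - s + 1) / s)
    if v <= s:
        return math.ceil((k + 1) / s)
    return math.ceil((v - s + k + 1) / s)

# Example from the text: s=2, omega_{t-1} = [4 5], k=3  ->  g((5-2+3+1)/2) = g(3.5) = 4
assert travel_time_case2(s=2, u=4, v=5, k=3) == 4
```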

The number of departures at t follows the binomial distribution with parameters u and h_t, where

$$h_t = \frac{p_t}{p_t + \cdots + p_{t_{max}}} = \frac{p_t}{1 - \sum_{\tau < t} p_\tau}$$

is the probability of departure at t, given that departure did not occur before t. Then, the probability that j other commuters depart at t is computed from

$$f(u, j, h_t) = \binom{u}{j} (h_t)^j (1 - h_t)^{u-j}.$$

When j other commuters depart at t, the designated commuter becomes the (k+1)th commuter among the (j+1) commuters with probability 1/(j+1). Then, the designated commuter's expected travel cost, given ω_{t−1} and j, is

$$C(t \mid \omega_{t-1}, j) = \frac{1}{j+1} \sum_{k=0}^{j} C(t \mid \omega_{t-1}, k),$$

where, as before,

$$C(t \mid \omega_{t-1}, k) = \alpha T(t \mid \omega_{t-1}, k) + \beta \max\{0,\, t^* - (t + T(t \mid \omega_{t-1}, k))\} + \gamma \max\{0,\, (t + T(t \mid \omega_{t-1}, k)) - t^*\}.$$

Denote by C(t, p | ω_{t−1}) the designated commuter's expected travel cost of departing at time t when each of the other n−1 commuters uses the mixed strategy p and the state at t−1 is ω_{t−1}; it is computed from

$$C(t, p \mid \omega_{t-1}) = \sum_{j=0}^{u} f(u, j, h_t)\, C(t \mid \omega_{t-1}, j).$$
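For completeness, here is a sketch of how C(t, p | ω_{t−1}) can be assembled from the pieces above: the binomial arrival probabilities, the uniform tie position, and one of the travel-time functions sketched earlier. All helper names are my own and the travel-time function is passed in as a callable; this is an illustration of the formulas, not the original implementation.

```python
from math import comb

def schedule_cost(alpha, beta, gamma, t_star, t, T):
    """Linear cost of departing at t with travel time T: alpha*T + beta*(early) + gamma*(late)."""
    arrival = t + T
    return alpha * T + beta * max(0, t_star - arrival) + gamma * max(0, arrival - t_star)

def expected_cost_given_state(t, h_t, state, travel_time, alpha, beta, gamma, t_star):
    """C(t, p | omega_{t-1}): expectation over j ~ Binomial(u, h_t) and the uniform tie position k.

    state: the state vector omega_{t-1}; its first element is u, the commuters still at the origin.
    travel_time: callable (state, k) -> T(t | omega_{t-1}, k), e.g. a wrapper around
                 travel_time_case1 or travel_time_case2 from the sketches above.
    """
    u = state[0]
    total = 0.0
    for j in range(u + 1):
        f = comb(u, j) * h_t**j * (1 - h_t)**(u - j)        # f(u, j, h_t)
        cost_j = sum(schedule_cost(alpha, beta, gamma, t_star, t, travel_time(state, k))
                     for k in range(j + 1)) / (j + 1)       # average over tie positions k
        total += f * cost_j
    return total
```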

To determine the designated commuter's expected travel cost of departure at t, the probability distribution over possible states at time t−1 must be derived. Denote by P(0) a 1×|Ω| initial vector whose elements are the probabilities over possible states at time 0. Note that all the n−1 commuters are at the origin at t=0. Therefore, the probability that all the n−1 commuters are at the origin is 1, i.e., P_{[n−1 0 0]}(0) = 1 if s<1, and P_{[n−1 0]}(0) = 1 if s≥1. The other elements of P(0) take the value 0.

For t ≥ 1, define a |Ω|×|Ω| transition matrix P(t−1, t) with elements P_{x,y}(t−1, t) = Pr(ω_t = y | ω_{t−1} = x). To construct a transition matrix, all possible transitions from one state to another must be considered.^{35} Then, the 1×|Ω| row vector that constitutes the probability distribution over states at time t−1 is obtained by the following matrix multiplication:

$$P(t-1) = P(0)\, P(0,1)\, P(1,2) \cdots P(t-2, t-1).$$
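The forward propagation itself is a plain sequence of vector–matrix products. The following NumPy sketch (illustrative only; the per-period transition matrices, which are non-stationary because they depend on h_t, are taken as given) shows the structure of this step.

```python
import numpy as np

def propagate_state_distribution(P0, transition_matrices):
    """Return the list of row vectors P(0), P(1), ..., one per period.

    P0: 1 x |Omega| initial distribution (all mass on the state with n-1 commuters at the origin).
    transition_matrices: list of |Omega| x |Omega| matrices P(t-1, t), one per period.
    """
    dists = [np.asarray(P0, dtype=float)]
    for M in transition_matrices:
        dists.append(dists[-1] @ M)   # P(t) = P(t-1) P(t-1, t)
    return dists
```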

Then, the designated commuter's expected travel cost of departing at t when each of the other n−1 commuters uses the mixed strategy p is computed from

$$C(t, p) = \sum_{\omega_{t-1} \in \Omega} P_{\omega_{t-1}}(t-1)\, C(t, p \mid \omega_{t-1}).$$

To compute the SMSE, note that the FCFS queue discipline is used, and thereby future arrivals cannot affect the costs of those commuters who have already arrived at the bottleneck. Thus, the expected travel cost of departure at time t is a function of the mixed strategy only through the probabilities p_1, p_2, …, p_t. To determine p_t, the probabilities p_1, p_2, …, p_{t−1} are held fixed and p_t is varied. Since p_1, p_2, …, p_{t−1} are fixed, the designated commuter's expected travel cost of departing at time t when each of the other n−1 commuters chooses p is rewritten as

$$C(t, p_t) = \sum_{\omega_{t-1} \in \Omega} P_{\omega_{t-1}}(t-1)\, C(t, p_t \mid \omega_{t-1}).$$

Notice that for all t, C(t, p_t) is continuous on [0, 1 − Σ_{τ<t} p_τ]. Then, the following theorem holds.

Theorem 3.1  Given p_1, …, p_{t−1}, if p_t > p̃_t, then C(t, p_t) > C(t, p̃_t).

^{35} It is impossible for some transitions to take place. For example, consider the case when s<1. Then, state [3 0 0] cannot be reached from state [2 1 0]. Thus, the probability of such a transition is 0.

Proof of Theorem 3.1  Since p_1, …, p_{t−1} are fixed, all the components of P(t−1), i.e., the probabilities over possible states at t−1, are determined. Thus, to prove that C(t, p_t) > C(t, p̃_t), it suffices to show that, for any ω_{t−1},

$$C(t, p_t \mid \omega_{t-1}) = \sum_{j=0}^{u} f(u, j, h_t)\, C(t \mid \omega_{t-1}, j) \;\ge\; \sum_{j=0}^{u} f(u, j, \tilde{h}_t)\, C(t \mid \omega_{t-1}, j) = C(t, \tilde{p}_t \mid \omega_{t-1}),$$

with strict inequality for some ω_{t−1}. Recall that ω_{t−1} = [u v e] if s<1 and ω_{t−1} = [u v] if s≥1. All possible states at t−1 are divided into two exclusive cases with respect to the value of u: u=0 and u>0.

Case 1: u=0. Since C(t, p_t | ω_{t−1}) = C(t | ω_{t−1}, 0) and C(t, p̃_t | ω_{t−1}) = C(t | ω_{t−1}, 0), we have C(t, p_t | ω_{t−1}) = C(t, p̃_t | ω_{t−1}).

Case 2: u>0. Rearrange C(t, p_t | ω_{t−1}):

$$\begin{aligned}
C(t, p_t \mid \omega_{t-1}) &= \sum_{j=0}^{u} f(u, j, h_t)\, C(t \mid \omega_{t-1}, j) \\
&= f(u, 0, h_t)\{C(t \mid \omega_{t-1}, 0) - C(t \mid \omega_{t-1}, 1) + C(t \mid \omega_{t-1}, 1) - \cdots - C(t \mid \omega_{t-1}, u) + C(t \mid \omega_{t-1}, u)\} \\
&\quad + f(u, 1, h_t)\{C(t \mid \omega_{t-1}, 1) - C(t \mid \omega_{t-1}, 2) + C(t \mid \omega_{t-1}, 2) - \cdots - C(t \mid \omega_{t-1}, u) + C(t \mid \omega_{t-1}, u)\} \\
&\quad + \cdots \\
&\quad + f(u, u-1, h_t)\{C(t \mid \omega_{t-1}, u-1) - C(t \mid \omega_{t-1}, u) + C(t \mid \omega_{t-1}, u)\} \\
&\quad + f(u, u, h_t)\, C(t \mid \omega_{t-1}, u) \\
&= C(t \mid \omega_{t-1}, u) - \sum_{j=0}^{u-1}\big\{C(t \mid \omega_{t-1}, j+1) - C(t \mid \omega_{t-1}, j)\big\}\sum_{k=0}^{j} f(u, k, h_t).
\end{aligned}$$

For 0 ≤ j ≤ u−1, (a) C(t | ω_{t−1}, j+1) > C(t | ω_{t−1}, j), and (b) Σ_{k=0}^{j} f(u, k, h_t) is strictly decreasing in h_t, and hence in p_t. Therefore, if p_t > p̃_t, then C(t, p_t | ω_{t−1}) > C(t, p̃_t | ω_{t−1}).

Combining the results of the two exclusive cases yields C(t, p_t) > C(t, p̃_t). ∎

The intuition behind this theorem is that the bottleneck will stochastically become more congested as p_t increases, and thereby C(t, p_t) is strictly increasing in p_t. This fact will be used to search for the values of p_t.

Theorem 3.2  Suppose that C is the equilibrium expected travel cost of the discrete bottleneck game. Then, there exists a unique symmetric mixed-strategy Nash equilibrium for the game.

Proof of Theorem 3.2  Suppose that there are two symmetric mixed-strategy Nash equilibria, p = (p_1, …, p_{t_max}) and p̃ = (p̃_1, …, p̃_{t_max}), each of which yields the same equilibrium expected travel cost, C.

Consider t=1. Recall that all the n−1 commuters are at the origin at t=0, i.e., P_{[n−1 0 0]}(0) = 1 if s<1, and P_{[n−1 0]}(0) = 1 if s≥1. Since all the components of P(0), i.e., the probabilities over possible states at t=0, are the same for the two symmetric mixed-strategy Nash equilibria, C(1, p_1) = C(1, p̃_1) = C implies that for any ω_0,

$$\sum_{j=0}^{n-1} f(n-1, j, h_1)\, C(1 \mid \omega_0, j) = \sum_{j=0}^{n-1} f(n-1, j, \tilde{h}_1)\, C(1 \mid \omega_0, j).$$

Therefore, by the strict monotonicity established in the proof of Theorem 3.1, h_1 = h̃_1. Since h_1 = p_1 and h̃_1 = p̃_1, p_1 = p̃_1.

Consider t=2. Given that p_1 = p̃_1, all components of P(1) are the same for the two symmetric mixed-strategy Nash equilibria. Then, C(2, p_2) = C(2, p̃_2) = C implies that for any ω_1,

$$\sum_{j=0}^{n-1} f(n-1, j, h_2)\, C(2 \mid \omega_1, j) = \sum_{j=0}^{n-1} f(n-1, j, \tilde{h}_2)\, C(2 \mid \omega_1, j).$$

Therefore, h_2 = h̃_2. Since h_2 = p_2/(1 − p_1) and h̃_2 = p̃_2/(1 − p̃_1), p_2 = p̃_2.

Suppose that p_τ = p̃_τ for all τ < t (2 < t ≤ t_max). Then, all components of P(t−1) are the same for the two symmetric mixed-strategy Nash equilibria. Then, C(t, p_t) = C(t, p̃_t) = C implies that for any ω_{t−1},

$$\sum_{j=0}^{n-1} f(n-1, j, h_t)\, C(t \mid \omega_{t-1}, j) = \sum_{j=0}^{n-1} f(n-1, j, \tilde{h}_t)\, C(t \mid \omega_{t-1}, j).$$

Therefore, h_t = h̃_t. Since h_t = p_t/(1 − Σ_{τ<t} p_τ) and h̃_t = p̃_t/(1 − Σ_{τ<t} p̃_τ), p_t = p̃_t.

Thus, p_t = p̃_t whenever p_τ = p̃_τ for all τ < t. By mathematical induction, p_t = p̃_t for all t ∈ {1, …, t_max}. ∎

Although Theorem 3.2 guarantees the uniqueness of the SMSE for a specific value of C, it does not rule out the possibility of multiple symmetric mixed-strategy Nash equilibria for different values of C.^{36} Based on extensive numerical results, which are not reported here, showing that the sum of probabilities Σ_{t=1}^{t_max} p_t is strictly increasing in the value of C, I conjecture that the SMSE is unique.^{37}

^{36} Finite symmetric games may possess multiple symmetric mixed-strategy Nash equilibria. See example (iv) in Baye et al. (1994).

^{37} The numerical results show that (i) if the value of C is set too low, the sum of the associated probabilities is much smaller than 1, (ii) if the value of C is set too high, then no equilibrium solution exists, and (iii) as the value of C increases, the sum of the associated probabilities strictly increases. This implies that there exists an equilibrium expected travel cost such that the associated probabilities sum up to 1. These probabilities are the SMSE.

To find the equilibrium probabilities, the following general result is used. Suppose that C is an equilibrium expected travel cost of the game. Then, (a) C(t, p_t) ≥ C for any time t, (b) C(t, p_t) = C if p_t > 0, and (c) p_t = 0 if C(t, p_t) > C.^{38} Since the value of C is unknown, the algorithm must start with an estimate of C.^{39} For a given value of C, the associated probabilities p_1, …, p_{t_max} are constructed sequentially through the following algorithm, which starts at t=1 and continues through t = t_max.

^{38} For a proof of this result, see sections 3.1.5, 3.4.2, and 3.4.3 in Vorob’ev (1977).

^{39} The smallest possible value of C is αd if s<1 and α if s≥1.

Step 1: Set a value of C.

Step 2: Consider time period t. Given p_1, …, p_{t−1}, compute C(t, 0).

a. If C ≤ C(t, 0), then keep p_t = 0. If t < t_max, increase t by 1 unit and repeat Step 2. Otherwise, go to Step 3.

b. If C > C(t, 0), evaluate C(t, 1 − Σ_{τ<t} p_τ), where 1 − Σ_{τ<t} p_τ is the maximum feasible value of p_t.

   i. If C ≤ C(t, 1 − Σ_{τ<t} p_τ), then there exists p_t (0 < p_t ≤ 1 − Σ_{τ<t} p_τ) such that C(t, p_t) = C, since C(t, p_t) is continuous on [0, 1 − Σ_{τ<t} p_τ] and strictly increasing in p_t by Theorem 3.1. If t < t_max, then increase t by 1 unit and repeat Step 2. Otherwise, go to Step 3.

   ii. If C > C(t, 1 − Σ_{τ<t} p_τ), then the game has no solution for the given value of C. Terminate the algorithm, go to Step 1, decrease C, and repeat the algorithm.

Step 3: If 1 − Σ_{t=1}^{t_max} p_t > ε, where ε specifies how close the sum of the probabilities must be to 1, then go to Step 1, increase C, and repeat the algorithm. Otherwise, p_1, …, p_{t_max} are the equilibrium probabilities.
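These three steps can be organized as two nested numerical searches: an inner root-finding step in p_t for each period (justified by the continuity and strict monotonicity of C(t, p_t)) and an outer search over the candidate cost C until the probabilities sum to one. The sketch below shows this structure only; it is my own illustrative organization of the algorithm, expected_cost(t, p_t, p_prev) stands for the machinery of Section 3.4.2 (state distribution, transition matrices, expected cost) and is assumed rather than implemented, and the caller must supply a bracket [C_lo, C_hi] containing the equilibrium cost.

```python
def solve_probabilities(C, t_max, expected_cost, tol=1e-10):
    """Step 2: given a candidate equilibrium cost C, build p_1, ..., p_{t_max} sequentially.

    expected_cost(t, p_t, p_prev) must return C(t, p_t) given the fixed earlier probabilities.
    Returns the list of probabilities, or None if no solution exists for this C (case ii).
    """
    p = []
    for t in range(1, t_max + 1):
        remaining = 1.0 - sum(p)                     # maximum feasible value of p_t
        if C <= expected_cost(t, 0.0, p):            # case (a): keep p_t = 0
            p.append(0.0)
            continue
        if C > expected_cost(t, remaining, p):       # case (ii): no solution for this C
            return None
        lo, hi = 0.0, remaining                      # case (i): bisect, using Theorem 3.1
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if expected_cost(t, mid, p) < C:
                lo = mid
            else:
                hi = mid
        p.append(0.5 * (lo + hi))
    return p

def find_smse(C_lo, C_hi, t_max, expected_cost, eps=1e-6, max_iter=200):
    """Steps 1 and 3: adjust C until the probabilities sum to within eps of one."""
    for _ in range(max_iter):
        C = 0.5 * (C_lo + C_hi)
        p = solve_probabilities(C, t_max, expected_cost)
        if p is None:                    # C set too high: no solution exists
            C_hi = C
        elif 1.0 - sum(p) > eps:         # C set too low: probabilities sum to less than one
            C_lo = C
        else:
            return C, p
    raise RuntimeError("search over C did not converge; widen the bracket [C_lo, C_hi]")
```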

3.4.3 Comparison with Ziegelmeyer et al.

To verify that the proposed algorithm works properly, consider two examples studied by Ziegelmeyer et al. (2008). As mentioned earlier, a special feature that differentiates their model from the model in the present study is how ties are handled. I now show that the procedure successfully constructs the same equilibrium solutions to the two examples in Ziegelmeyer et al. upon a slight modification of the part of the algorithm that computes the travel time. The service capacity per unit time, s, is assumed to be one.

Suppose that ω_{t−1} = [u v], and that the designated commuter and j commuters (out of u) depart from the origin (i.e., arrive at the bottleneck) at time t. Then, the designated commuter's travel time of departing at t, given ω_{t−1} and j, T(t | ω_{t−1}, j), is given by one of the following four cases:

$$T(t \mid \omega_{t-1}, j) = \begin{cases}
1 & \text{if } u = 0,\ v = 0 \\
v & \text{if } u = 0,\ v > 0 \\
j + 1 & \text{if } u > 0,\ v = 0 \\
v + j & \text{if } u > 0,\ v > 0
\end{cases}$$

The designated commuter's expected travel cost of departing at time t, given ω_{t−1}, is given by

$$C(t, p \mid \omega_{t-1}) = \sum_{j=0}^{u} f(u, j, h_t)\, C(t \mid \omega_{t-1}, j),$$

where

$$C(t \mid \omega_{t-1}, j) = \alpha T(t \mid \omega_{t-1}, j) + \beta \max\{0,\, t^* - (t + T(t \mid \omega_{t-1}, j))\} + \gamma \max\{0,\, (t + T(t \mid \omega_{t-1}, j)) - t^*\}.$$

With the exception of this change, the algorithm is exactly the same as described in Section 3.4.2.
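In code, the modification amounts to swapping the travel-time function. The minimal transcription below (my own naming, written for s=1 as in the two examples) is the only piece that differs from the sketches given in Section 3.4.2.

```python
def travel_time_zkmd(u, v, j):
    """Travel time T(t | omega_{t-1}, j) under the tie handling of Ziegelmeyer et al. (s = 1).

    u, v: state at t-1 (commuters at origin, commuters queued).
    j: number of other commuters departing at the same time; no tie-breaking is applied.
    """
    if u == 0:
        return 1 if v == 0 else v
    return j + 1 if v == 0 else v + j
```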

Table 3.2 presents the SMSE solutions for the cases β=0.25 (top panel) and β=0.5 (bottom panel), where n=4, s=1, α=1, γ=2, t∈{t*−8, …, t*, …, t*+8}, and t* is the desired arrival time. For both β=0.25 and β=0.5, the side-by-side comparison of the solutions generated by my method (O&R* in Table 3.2) and by Ziegelmeyer et al. (ZKMD in Table 3.2) shows no discrepancy between the SMSE and the expected travel costs (C). This comparison supports the accuracy of the algorithmic procedure. Based on these computations, when ties are not broken, the expected travel time (T) is 1.774 for β=0.25 and 2.129 for β=0.5.

How does the presence of a tie-breaking rule change the departure pattern, the expected travel cost, and the expected travel time? To answer these questions, for each β I constructed the SMSE (O&R in Table 3.2), together with the associated expected travel cost and expected travel time, by the algorithm described in Section 3.4.2. There are two major differences between the solutions due to Ziegelmeyer et al. and O&R. First, for both β=0.25 and β=0.5, the expected travel cost computed by Ziegelmeyer et al. exceeds the value computed by the O&R algorithm. This is due to the fact that no tie-breaking rule is implemented in their study. Second, the distribution of the equilibrium probabilities reported in their study is considerably flatter than that of my solution. For each β, the two solutions have almost the same supports, and peak departures (stochastically) occur at the same time, namely, at t*−3. However, for both β=0.25 and β=0.5 the equilibrium probability assigned to t*−3 is considerably higher in my solution than in Ziegelmeyer et al.^{40}

3.5 Comparison with Vickrey’s Continuous Model

This section compares the continuous (Vickrey) and discrete (O&R) bottleneck models in an environment with a finite number of departure times with respect to several indices of travel cost and travel time. For this comparison, assume that α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60}, and t*=50. In Section 3.5.1, I only change the service capacity, keeping the number of commuters fixed at n=10. In Section 3.5.2, I increase the number of commuters, keeping the service capacity constant at s=1. I do not study the effect of partitioning the strategy space into a small number of time intervals, as do Levinson (2005) and Zou and Levinson (2006). Rather, in both cases the number of strategies (i.e., departure times) is relatively large.

^{40} Ziegelmeyer et al. also reported a second, large-scale experiment with n=16 (rather than n=4 as before), s=4, α=1, β=0.5, γ=2, and t∈{t*−8, …, t*, …, t*+8}, where, as before, t* is the desired arrival time. However, they were unable to compute the symmetric mixed-strategy equilibrium “due to computational problems” that are not further specified.

3.5.1 Changing the Service Capacity

Table 3.3 reports the results of the comparison between the two models for seven bottleneck capacities s ∈ {1/4, 1/3, 1/2, 1, 2, 3, 4}. The two models are compared to each other in terms of the travel cost (C), total travel cost across the n commuters (TC), total travel time (TTT), total travel time cost (TTC), time at which the first commuter departs home (t_F), and time at which the last commuter departs home (t_L). For each capacity value s, Vickrey's model is shown to underestimate travel cost, total travel cost, total travel time, and total travel time cost in comparison with the O&R model. The ratio of the value of the continuous model divided by the value of the discrete model is displayed in the third row of each panel (as a percentage ratio). The percentage ratios are relatively low for large s because travel time cost in Vickrey's model becomes negligible when the service capacity s becomes large, whereas it remains a major part of travel cost in the discrete model. The percentage ratios increase as the service capacity becomes smaller. But even at s=1/4 they are still considerably below 100 percent.

Vickrey's continuous model also underestimates the ratio of total travel time cost to total travel cost. As shown earlier in Section 3.2, the total travel time cost (TTC) in Vickrey's model is 50% of the total travel cost (TC). This is not the case in the O&R model. In my specific example for n=10, the total travel time cost is 72%, 72%, 65%, 59%, 57%, 57%, and 57% of the total travel cost for s = 4, 3, 2, 1, 1/2, 1/3, and 1/4, respectively.

Figures 3.1a, 3.1b, and 3.1c depict the cumulative relative frequency distributions of departure time of the two models for the cases s=4, s=1, and s=1/4, respectively. For all three cases (and for other cases not reported here), the cumulative relative frequency distribution of departure times under Vickrey's model lies to the right of that under the O&R model. This indicates that Vickrey's continuous model predicts commuters to depart later than they should depart under the O&R discrete model (see also columns 6 and 7 in Table 3.3).

3.5.2 Changing the Number of Commuters

Next, fix s=1 and systematically increase the number of commuters. The three cost parameters α, β, and γ, and the desired arrival time t* are the same as in Section 3.5.1. Table 3.4 summarizes the numerical results for n ∈ {5, 10, 15, 20, 30, 40, 50}. Once again, Vickrey's continuous model underestimates the values of the four indices C, TC, TTT, and TTC. It also predicts commuters to depart later than they should under the O&R model and to arrive later at their common destination. Comparison of the percentage ratios shows that the accuracy of the approximation provided by the deterministic equilibrium solution slowly increases in n, with these ratios exceeding 95 percent for the indices C and TC and 91 percent for the indices TTT and TTC when n=50 (bottom line of Table 3.4).

It is noteworthy that, by constructing a function through the percentage ratios in Table 3.4, it is possible to extrapolate the value of n for which the percentage ratio for each of the indices C and TTT reaches 99%. A cubic spline extrapolation method yields n=119.73 and n=142.44 for the indices C and TTT, respectively.

The effect of a change in the value of n is exhibited most clearly in Figure 3.2. This figure displays the cumulative relative frequency distributions of departure times for the Vickrey and O&R models for n=30 (Figure 3.2a) and n=50 (Figure 3.2b). Except for the bias to depart later (which may exceed 7 percent of the population at t_F), Vickrey's continuous model approximates the O&R discrete model rather accurately.

3.6 Extensions

This section describes two extensions of the discrete bottleneck model. Both are proposed in order to narrow the gap between theory and practice in traffic networks subject to congestion. Importantly, both are based on variants of the same algorithm described in Section 3.4.2. The first extension maintains the assumption that the number of commuters choosing the congestible road is exogenously determined but replaces the assumption of a fixed n by a random n with a commonly known distribution.^{41} By allowing the choice of an alternative transportation mode that is not subject to congestion (e.g., train), the second extension allows the number of commuters choosing the congestible road to be endogenously determined.

3.6.1 Random Number of Commuters

Throughout the current study, the bottleneck model of O&R has been studied under the assumption that the number of commuters, n, is fixed and commonly known. However, under more general circumstances the exact value of n may not be known with precision. I propose to capture this uncertainty with the assumption that n is a random variable whose distribution is commonly known. As in the original model of O&R, the number of commuters who choose departure times on the congestible road is exogenously determined.

41 The literature on transportation has recognized the practical importance of uncertainty. For example, Arnott et al. (1991, 1999) study stochastic environments with respect to road capacity and demand.


When considering games with an uncertain number of commuters, caution should be exercised in distinguishing between the probability distribution of the number of commuters perceived by an outside observer and the probability distribution perceived by a commuter who participates in the game.^{42} To illustrate this distinction, consider a discrete bottleneck game in which the number of commuters is either 8 or 12 with equal probability. An observer looking at the game from the outside would conclude that the expected number of commuters is 10. A commuter who actually participates in the game would conclude that she is 1.5 times as likely to interact with 11 as with 7 other commuters. She then updates the conditional probability of interacting with 7 other commuters to 2/5 and the conditional probability of interacting with 11 other commuters to 3/5. From her perspective, the expected number of other commuters she plays with is 7 × 2/5 + 11 × 3/5 = 9.4. Therefore, including herself, she expects 10.4 (rather than 10) commuters to be in the game.

A major advantage of the present computational approach is that the modifications of the algorithm that take care of this distinction are minimal; one only needs to modify the initial vector and the transition matrix. Recall that the elements of the initial vector specify probabilities over possible states at time 0. When the number of commuters, n, is fixed and known, the value of 1 is assigned to the state in which all the n−1 commuters are at the origin. All the other elements in the initial vector take the value of 0. Suppose now that n is either n_L with probability Pr(n = n_L) or n_H with probability Pr(n = n_H), where n_L < n_H and Pr(n = n_L) + Pr(n = n_H) = 1, and that this distribution is common knowledge. Denote by Ω̂ the set of possible states and by ω̂_t ∈ Ω̂ a state at time t. Then, the number of possible states |Ω̂| is computed from

$$|\hat{\Omega}| = \begin{cases} \dfrac{n_H(n_H+1)}{2} + (d-1)\,\dfrac{(n_H-1)n_H}{2} & \text{if } s < 1 \\[2mm] \dfrac{n_H(n_H+1)}{2} & \text{if } s \ge 1. \end{cases}$$

^{42} For the importance of this distinction, see Cooper (1981) and Myerson (1998).

Denote by P̂(0) a 1×|Ω̂| initial vector whose elements are probabilities over possible states at time 0. To construct this vector, one must compute the probabilities of the following two possible states: (i) all the n_L−1 other commuters are at the origin, and (ii) all the n_H−1 other commuters are at the origin. Denote by Pr(n = n_L | In) the conditional probability of n = n_L, given that a commuter is one of the participants in the game. Then,

$$\Pr(n = n_L \mid In) = \frac{\Pr(n = n_L \cap In)}{\Pr(In)} = \frac{\Pr(n = n_L \cap In)}{\Pr(n = n_L \cap In) + \Pr(n = n_H \cap In)} = \frac{\Pr(n = n_L)\Pr(In \mid n = n_L)}{\Pr(n = n_L)\Pr(In \mid n = n_L) + \Pr(n = n_H)\Pr(In \mid n = n_H)}.$$

Assuming that each commuter is equally likely to participate in the game, a commuter is (n_H/n_L) times as likely to be in the game if n = n_H as if n = n_L. In other words, Pr(In | n = n_H) = (n_H/n_L) Pr(In | n = n_L). Then,

$$\Pr(n = n_L \mid In) = \frac{\Pr(n = n_L)\Pr(In \mid n = n_L)}{\Pr(n = n_L)\Pr(In \mid n = n_L) + (n_H/n_L)\Pr(n = n_H)\Pr(In \mid n = n_L)} = \frac{\Pr(n = n_L)}{\Pr(n = n_L) + (n_H/n_L)\Pr(n = n_H)}.$$

Similarly, the conditional probability of n = n_H, given that a commuter is one of the participants in the game, Pr(n = n_H | In), is computed from

$$\Pr(n = n_H \mid In) = \frac{\Pr(n = n_H)}{\Pr(n = n_H) + (n_L/n_H)\Pr(n = n_L)}.$$

In the initial vector P̂(0), Pr(n = n_L | In) is assigned to the state in which all the n_L−1 commuters are at the origin, while Pr(n = n_H | In) is assigned to the state in which all the n_H−1 commuters are at the origin. A |Ω̂|×|Ω̂| transition matrix whose elements are probabilities of transition from one state to another must also be defined. Then, the same algorithm described in Section 3.4.2 applies with this new initial vector and transition matrix.
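The adjustment from the observer's distribution to the participant's (size-biased) distribution is a one-line computation. The sketch below (illustrative Python of my own) reproduces the numerical example given earlier in this subsection and anticipates the case reported in Table 3.5.

```python
def participant_probabilities(n_L, n_H, pr_L, pr_H):
    """Conditional (participant's) probabilities Pr(n = n_L | In) and Pr(n = n_H | In)."""
    pr_L_in = pr_L / (pr_L + (n_H / n_L) * pr_H)
    pr_H_in = pr_H / (pr_H + (n_L / n_H) * pr_L)
    return pr_L_in, pr_H_in

# Example above: n in {8, 12} with equal probability -> (2/5, 3/5),
# so a participant expects 7 * 2/5 + 11 * 3/5 = 9.4 other commuters.
print(participant_probabilities(8, 12, 0.5, 0.5))

# Case used in Table 3.5: Pr(n=8) = 0.6, Pr(n=12) = 0.4 -> (0.5, 0.5),
# so the expected number of other commuters is 7 * 0.5 + 11 * 0.5 = 9.
print(participant_probabilities(8, 12, 0.6, 0.4))
```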

Table 3.5 exhibits the SMSE for the case n=10 and for the case where n is either 8 with probability 0.6 or 12 with probability 0.4. Each commuter in both cases expects the same number of other commuters: any commuter in the former case knows that there are 9 other commuters, whereas for any commuter who participates in the latter case the expected number of other commuters is 7 × Pr(n = 8 | In) + 11 × Pr(n = 12 | In) = 9. For both cases, s=1, α=1, β=0.6, γ=2.4, t∈{1, …, 60}, and t*=50. Comparison of the two cumulative probability distributions of departure time in Table 3.5 shows that although each commuter in these two cases shares the same expectation about the number of other commuters, the uncertainty about the number of commuters induces earlier departure times and results in a higher travel cost.

3.6.2 Augmented Strategy Space

The previous sections have considered the case where all the n commuters choose their time of departure on the same congestible road. Consequently, in the original model of O&R and in its extension in Section 3.6.1, commuters are not given the opportunity to choose an alternative mode of transportation (such as a commuter train) that is not subject to congestion. The present section relaxes this assumption by incorporating the option of an alternative transportation mode into the strategy space. To do so, suppose that each commuter has to choose a departure time from the augmented strategy space {1, …, t_max} ∪ {alt}, in which alt denotes the decision to use an alternative transportation mode.^{43} Let p_alt and C_alt denote the associated probability and cost, respectively. Commuters deciding to travel on the congestible road choose their departure times without knowledge of the group size. Therefore, in contrast to the model in Section 3.4, the number of commuters who choose to travel on the congestible road is endogenously determined.

The algorithm that computes the SMSE with the augmented strategy space is similar to the one developed in Section 3.4.2. Denote by C^(i) the expected travel cost when i commuters (1 ≤ i ≤ n) use the congestible road.^{44} First, assuming that all the n commuters use the road, i.e., assuming that p_alt = 0, derive the SMSE p_1, …, p_{t_max} and compute the associated expected travel cost, C^(n). Then, one of the following three cases takes place.

43 The discussion here assumes that Calt is a fixed value. Calt can also be assumed to be a function of the number of commuters who choose an alternative transportation mode.

^{44} If only one commuter uses the road, this commuter can achieve the smallest total travel cost. Thus, C^(1) = αd if s<1 and C^(1) = α if s≥1.

a. If C^(n) < C_alt, then none of the n commuters is willing to use the alternative transportation mode. Keep p_alt = 0. Then, the probabilities p_alt, p_1, …, p_{t_max} are the SMSE and the equilibrium expected travel cost is C^(n).

b. If C^(1) ≤ C_alt ≤ C^(n), then using the congestible road is no longer a dominant strategy. Thus, each of the n commuters stochastically chooses the alternative transportation mode, i.e., p_alt > 0. Given C_alt, derive the associated probabilities p_1, …, p_{t_max} and then calculate p_alt = 1 − Σ_{t=1}^{t_max} p_t. The probabilities p_alt, p_1, …, p_{t_max} constitute the unique SMSE with the equilibrium cost C_alt.

c. If C_alt < C^(1), then it is a dominant strategy to use the alternative transportation mode. Therefore, the probabilities p_alt = 1 and p_1 = p_2 = … = p_{t_max} = 0 constitute the unique (degenerate) SMSE with the equilibrium cost C_alt.
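The case distinction above reduces to a simple comparison once C^(1) and C^(n) are available. The sketch below (illustrative Python of my own; the solver for a given equilibrium cost is passed in as solve_given_cost and assumed to implement the algorithm of Section 3.4.2) shows the control flow only.

```python
def smse_with_outside_option(C_1, C_n, C_alt, solve_given_cost):
    """Control flow for the augmented strategy space.

    C_1, C_n: equilibrium expected travel costs with 1 and with n road users, respectively.
    solve_given_cost(C): returns p_1, ..., p_{t_max} for a given equilibrium cost C.
    Returns (p_alt, road_probabilities, equilibrium_cost).
    """
    if C_n < C_alt:                        # case a: nobody opts out
        return 0.0, solve_given_cost(C_n), C_n
    if C_alt < C_1:                        # case c: everybody opts out
        return 1.0, None, C_alt
    p = solve_given_cost(C_alt)            # case b: commuters mix between road and alternative
    return 1.0 - sum(p), p, C_alt
```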

Table 3.6 presents the SMSE for three cases of different costs of using the alternative transportation mode. As before, s=1, α=1, β=0.6, γ=2.4, t∈{1, …, 60}, and t*=50. When C_alt = 7, it is too expensive for any commuter to use the alternative transportation mode. Thus, all the commuters use the congestible road (i.e., p_alt = 0), which results in a travel cost of 5.858 (see the case s=1 in Table 3.3). As the value of C_alt decreases, each commuter increases her probability of choosing the alternative transportation mode (i.e., p_alt = 0.235 and 0.681 for C_alt = 5 and 3, respectively). Table 3.6 further shows that as C_alt decreases, commuters deciding to travel on the congestible road choose later departure times.

A close inspection of Tables 3.5 and 3.6, as well as Figures 3.1b, 3.1c, 3.2a, and 3.2b, shows oscillatory patterns in the equilibrium probabilities. These are not unique to the present model (see, e.g., Rapoport et al., 2004, and Seale et al., 2005, for similar oscillatory patterns in single-server queues with finite populations and endogenous arrival times).

3.7 Conclusion

It is common practice in economics, transportation science, and related disciplines to use continuous models in analyzing behavior that is essentially discrete. The strategy space, the number of agents, or both are often assumed to be continuous in order to gain analytical tractability, when in fact they are discrete. When a continuum of agents is assumed, one may derive closed-form solutions that, in turn, allow for the study of comparative statics. When this assumption is dropped, and the congestion model assumes a finite number of commuters, departure times can be computed exactly and approximations by the continuous model are no longer required. However, there is a trade-off: numerical and sometimes brute-force computations are substituted for the elegance and simplicity of the closed-form solutions. The numerical computations in this chapter indicate the presence of systematic errors, which for small n are substantial, when models assuming a continuum of commuters are used to account for departure times in traffic networks in which the number of participants is relatively small.^{45} They further suggest that as the population size grows, the difference between the continuous and discrete models of traffic congestion diminishes and can safely be ignored.

^{45} See, e.g., the road pricing experiments reported by Schneider and Weimann (2004) that were designed to test the continuous bottleneck congestion model of Arnott et al. (1990, 1993).

No explanation has been offered for why the equilibrium solution presented in this chapter yields appreciably higher expected travel costs (C) than the deterministic equilibrium of Arnott et al. My model differs from theirs by having discrete time periods and, more importantly, a finite number of commuters. It yields a mixed-strategy equilibrium in which a commuter may not wish to deviate if she knows the strategies of the other n−1 commuters but may benefit from a change in departure time if she knows their actual choices. This is not the case when the equilibrium is deterministic. Under the stochastic equilibrium, two or more commuters may fail to coordinate their departure times, whereas under the deterministic equilibrium they may not. This suggests that lack of coordination in departure time decisions is one reason for the higher travel costs. The present results are consistent with the results of Rapoport et al. (in press), who reported higher travel costs under mixed-strategy than pure-strategy equilibria in a study of route choice (rather than departure time) in traffic networks. A second reason for the higher travel costs is that, under the assumption of a finite number of commuters, each commuter must spend a finite length of time passing through the bottleneck. This would increase her travel time and thereby her expected travel cost. In contrast, under Vickrey's formulation she passes through the bottleneck instantaneously once all the commuters who preceded her in the queue have cleared.

The SMSE solution is the natural choice under the assumption of identical commuters. But, clearly, commuters are in general not identical. They differ from one another, among other dimensions, in official work hours (Wilson, 1988), work flexibility (Emmerink and van Beek, 1995), unit schedule delay costs (Small, 1982), and cost of waiting in the queue. One would expect heterogeneity of the commuters to be conducive to the existence of asymmetric pure-strategy equilibria, due to self-selection of departure times by different segments of the population (e.g., Daniel, 2001; Xin and Levinson, 2007). Therefore, it is important to extend the theoretical analysis of endogenous departure times to encompass the case of heterogeneous commuters.

There are several directions that could be pursued in the future. The first direction is to experimentally test the discrete bottleneck model with a large number of commuters. In this chapter, a numerical algorithm has been constructed that overcomes the computational problems which restricted previous research to models with a small number of commuters. This direction will complement the contributions of Ziegelmeyer et al. (2008) by experimentally investigating the impact of a larger population size on the commuters' ability to tacitly coordinate in a decentralized environment. The second direction is to extend the discrete model by incorporating uncertainty about the service capacity of the bottleneck. In the current study, I have assumed that the service capacity is fixed and commonly known. However, this assumption is restrictive because service facilities that result in a bottleneck are prone to congestion due to external random factors such as bad weather and accidents. This direction is as important as the first extension of Section 3.6. The third direction is to extend the present model by introducing “business hours” during which the bottleneck is open. The model in this chapter assumes that the service facilities causing a bottleneck are always open. Commuters may alter their departure patterns in response to changes in business hours.


APPENDIX A: FIGURES

LIST OF FIGURES

FIGURE 1.1 Equilibrium probability distributions of termination time (upper panel) and their cumulative probability distributions (lower panel) when n=3 and T=30

FIGURE 1.2 Observed cumulative relative frequency distributions of termination time for Conditions δ=0.3 and δ=0.6

FIGURE 1.3 Mean termination time across the three groups over 50 rounds (left column: Condition δ=0.3, right column: Condition δ=0.6)

FIGURE 1.4 Predicted and observed relative frequency distributions of termination time (at t=2 or later) in Condition δ=0.3

FIGURE 1.5 Predicted and observed relative frequency distributions of termination time (at t=2 or later) in Condition δ=0.6

FIGURE 1.6 Frequency distributions of individual number of stopping decisions (upper panel: Condition δ=0.3, lower panel: Condition δ=0.6)

FIGURE 1.7 Cumulative relative frequency distributions of individual number of stopping decisions

FIGURE 1.8 Individual payoff as a function of individual number of stopping decisions (upper panel: Condition δ=0.3, lower panel: Condition δ=0.6)

FIGURE 2.1 Symmetric mixed-strategy Nash equilibrium solutions for the distributions of bids with n=50 and B={1,...,50} (upper panel: LUBA; lower panel: HUBA)

FIGURE 2.2 Predicted probabilities and observed relative frequency distributions of bids on the aggregate level for Condition LUBA

FIGURE 2.3 Predicted probabilities and observed relative frequency distributions of bids on the aggregate level for Condition HUBA

FIGURE 2.4 Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 1 to 30 of Condition LUBA

FIGURE 2.5 Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 31 to 60 of Condition LUBA

FIGURE 2.6 Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 1 to 30 of Condition HUBA

FIGURE 2.7 Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 31 to 60 of Condition HUBA

FIGURE 2.8 Predicted probability distributions of the number of switching bids for Conditions LUBA (upper panel) and HUBA (lower panel)

FIGURE 2.9 Bids (y-axis) by round (x-axis) of each of the fifty subjects of Condition LUBA

FIGURE 2.10 Bids (y-axis) by round (x-axis) of each of the fifty subjects of Condition HUBA

FIGURE 3.1 Cumulative relative frequency distributions of departure times for the Vickrey and O&R bottleneck models for (3.1a) s=4, (3.1b) s=1, and (3.1c) s=1/4

FIGURE 3.2 Cumulative relative frequency distributions of departure times for the Vickrey and O&R models for (3.2a) n=30 and (3.2b) n=50

FIGURE 1.1: Equilibrium probability distributions of termination time (upper panel) and their cumulative probability distributions (lower panel) when n=3 and T=30.

FIGURE 1.2: Observed cumulative relative frequency distributions of termination time for Conditions δ=0.3 and δ=0.6.

FIGURE 1.3: Mean termination time across the three groups over 50 rounds (left column: Condition δ=0.3, right column: Condition δ=0.6).

FIGURE 1.4: Predicted and observed relative frequency distributions of termination time (at t=2 or later) in Condition δ=0.3 (panels: across sessions and Sessions 1 through 5).

FIGURE 1.5: Predicted and observed relative frequency distributions of termination time (at t=2 or later) in Condition δ=0.6

[Six panels: Across sessions and Sessions 1-5; predicted and observed Relative Frequency against Time Period (2-30, NS).]

FIGURE 1.6: Frequency distributions of individual number of stopping decisions (upper panel: Condition δ=0.3, lower panel: Condition δ=0.6)

[Two panels: frequency of the individual number of stopping decisions (0-50); the upper panel (Condition δ=0.3) notes a central 99% interval of [7, 25], the lower panel (Condition δ=0.6) an interval of [9, 27].]

FIGURE 1.7: Cumulative relative frequency distributions of individual number of stopping decisions

[Single panel: Cumulative Relative Frequency against the individual number of stopping decisions (0-50), with separate curves for Condition δ=0.3 and Condition δ=0.6.]

FIGURE 1.8: Individual payoff as a function of individual number of stopping decisions (upper panel: Condition δ=0.3, lower panel: Condition δ=0.6)

[Two scatter panels: individual payoff (in points) against the individual number of stopping decisions (0-50); upper panel Condition δ=0.3, lower panel Condition δ=0.6.]

FIGURE 2.1: Symmetric mixed-strategy Nash equilibrium solutions for the distributions of bids with n=50 and B={1,...,50} (upper panel: LUBA; lower panel: HUBA)

[Two panels: equilibrium bid probabilities for LUBA (upper) and HUBA (lower) with n=50 and B={1, 2, ..., 50}; Probability against Bid.]

FIGURE 2.2: Predicted probabilities and observed relative frequency distributions of bids on the aggregate level for Condition LUBA

[Two panels: Rounds 1 to 30 and Rounds 31 to 60; predicted and observed Relative Frequency against Bid (1-25).]

FIGURE 2.3: Predicted probabilities and observed relative frequency distributions of bids on the aggregate level for Condition HUBA

[Two panels: Rounds 1 to 30 and Rounds 31 to 60; predicted and observed Relative Frequency against Bid (1-25).]

FIGURE 2.4: Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 1 to 30 of Condition LUBA

[Per-subject panels for Session 1 Subjects 1-10, Session 2 Subjects 1-10, and Session 3 Subjects 1-8; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.4 (Continued): Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 1 to 30 of Condition LUBA

[Per-subject panels for Session 3 Subjects 9-10, Session 4 Subjects 1-10, and Session 5 Subjects 1-10; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.5: Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 31 to 60 of Condition LUBA

[Per-subject panels for Session 1 Subjects 1-10, Session 2 Subjects 1-10, and Session 3 Subjects 1-8; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.5 (Continued): Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 31 to 60 of Condition LUBA

[Per-subject panels for Session 3 Subjects 9-10, Session 4 Subjects 1-10, and Session 5 Subjects 1-10; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.6: Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 1 to 30 of Condition HUBA

[Per-subject panels for Session 1 Subjects 1-10, Session 2 Subjects 1-10, and Session 3 Subjects 1-8; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.6 (Continued): Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 1 to 30 of Condition HUBA

[Per-subject panels for Session 3 Subjects 9-10, Session 4 Subjects 1-10, and Session 5 Subjects 1-10; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.7: Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 31 to 60 of Condition HUBA

[Per-subject panels for Session 1 Subjects 1-10, Session 2 Subjects 1-10, and Session 3 Subjects 1-8; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.7 (Continued): Observed relative frequency (y-axis) of bids (x-axis) of each of the fifty subjects in rounds 31 to 60 of Condition HUBA

[Per-subject panels for Session 3 Subjects 9-10, Session 4 Subjects 1-10, and Session 5 Subjects 1-10; y-axis: relative frequency (0-1), x-axis: bid (1-25).]

FIGURE 2.8: Predicted probability distributions of the number of switching bids for Conditions LUBA (upper panel) and HUBA (lower panel)

[Two panels: LUBA (upper) and HUBA (lower); Probability against the number of switches (0-29).]

FIGURE 2.9: Bids (y-axis) by round (x-axis) of each of the fifty subjects of Condition LUBA

[Per-subject panels for Session 1 Subjects 1-10, Session 2 Subjects 1-10, and Session 3 Subjects 1-8; y-axis: bid (0-25), x-axis: round (1-60).]

FIGURE 2.9 (Continued): Bids (y-axis) by round (x-axis) of each of the fifty subjects of Condition LUBA

[Per-subject panels for Session 3 Subjects 9-10, Session 4 Subjects 1-10, and Session 5 Subjects 1-10; y-axis: bid (0-25), x-axis: round (1-60).]

FIGURE 2.10: Bids (y-axis) by round (x-axis) of each of the fifty subjects of Condition HUBA

[Per-subject panels for Session 1 Subjects 1-10, Session 2 Subjects 1-10, and Session 3 Subjects 1-8; y-axis: bid (0-25), x-axis: round (1-60).]

FIGURE 2.10 (Continued): Bids (y-axis) by round (x-axis) of each of the fifty subjects of Condition HUBA

[Per-subject panels for Session 3 Subjects 9-10, Session 4 Subjects 1-10, and Session 5 Subjects 1-10; y-axis: bid (0-25), x-axis: round (1-60).]

FIGURE 3.1: Cumulative Relative Frequency Distributions of Departure Times for Vickrey’s and O&R’s bottleneck models for (3.1a) s=4, (3.1b) s=1, and (3.1c) s=1/4.

[Two panels: s=4 and s=1; Cumulative Relative Frequency against time (1-60) with t* = 50; Vickrey’s model drawn as a solid line.]

FIGURE 3.1 (Continued): Cumulative Relative Frequency Distributions of Departure Times for Vickrey’s and O&R’s bottleneck models for (3.1a) s=4, (3.1b) s=1, and (3.1c) s=1/4.

[Single panel: s=1/4; Cumulative Relative Frequency against time (1-60) with t* = 50; Vickrey’s model drawn as a solid line.]

FIGURE 3.2: Cumulative Relative Frequency Distributions of Departure Times for Vickrey’s and O&R’s models for (3.2a) n=30 and (3.2b) n=50

[Two panels: n=30 and n=50; Cumulative Relative Frequency against time (1-60) with t* = 50; Vickrey’s model drawn as a solid line.]

APPENDIX B: TABLES

LIST OF TABLES

TABLE 1.1 Payoff matrix of the subgame starting at time period t = T ...... 134

TABLE 1.2 Payoff tables for the experiment by condition...... 135

TABLE 1.3 Observed frequency distributions of termination time by session for Conditions δ=0.3 and δ=0.6...... 136

TABLE 2.1 Predicted probabilities and observed relative frequencies of bids on the group and aggregate levels for Condition LUBA...... 137

TABLE 2.2 Predicted probabilities and observed relative frequencies of bids on the group and aggregate levels for Condition HUBA...... 138

TABLE 2.3 Results of the Kolmogorov-Smirnov test for Condition LUBA (“R”: Reject the null hypothesis of SMSE play; “FR”: Fail to reject the null hypothesis) ...... 139

TABLE 2.4 Results of the Kolmogorov-Smirnov test for Condition HUBA (“R”: Reject the null hypothesis of SMSE play; “FR”: Fail to reject the null hypothesis) ...... 140

TABLE 2.5 Number of switching bids for Condition LUBA (at most 29 opportunities of switching for each of the first and last 30 rounds) ...... 141

TABLE 2.6 Number of switching bids for Condition HUBA (at most 29 opportunities of switching for each of the first and last 30 rounds) ...... 142

TABLE 2.7 Four categories of subjects for Condition LUBA...... 143

TABLE 2.8 Four categories of subjects for Condition HUBA...... 144

TABLE 2.9 Results from a fixed effects regression for each of the five sessions by condition ...... 145

TABLE 3.1 Examples of a discrete bottleneck game with parameters n=10, α=1, β=0.6, γ=2.4, t ∈ {1, 2, …, 60} and t* =50: s=1/3 (top panel) and s=3 (bottom panel)...... 146


TABLE 3.2 Symmetric mixed-strategy equilibrium solutions for the cases n=4, s=1, α=1, β ∈ {0.25, 0.5}, γ=2, and t ∈ {t* -8, …, t* , …, t* +8} (NP = Not Provided)...... 147

TABLE 3.3 Comparison of Vickrey’s and O&R’s bottleneck models with parameters n=10, α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60}, t* =50, and s∈{1/4, 1/3, 1/2, 1, 2, 3, 4} ...... 148

TABLE 3.4 Comparison of Vickrey’s and O&R’s bottleneck models with parameters s=1, α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60}, t* =50, and n∈{5, 10, 15, 20, 30, 40, 50}...... 149

TABLE 3.5 Symmetric mixed-strategy equilibrium solutions for the case where n=10 (columns 2 and 3) and the case where n is either 8 with probability 0.6 or 12 with probability 0.4 (columns 4 and 5) with parameters s=1, α=1, β=0.6, γ=2.4, t∈{1, …, 60}, and t* =50 ...... 150

TABLE 3.6 Symmetric mixed-strategy equilibrium solutions for three different costs of choosing an alternative transportation mode not subject to congestion with n=10, s=1, α=1, β=0.6, γ=2.4, t∈{1, …, 60}, and t* =50 ...... 151


TABLE 1.1: Payoff matrix of the subgame starting at time period t = T

                      Number of the other players who choose to volunteer
                         0        1       …       n-1

Volunteer               L_T      L_T      …       L_T
Don’t volunteer          ε       H_T      …       H_T


TABLE 1.2: Payoff tables for the experiment by condition

Time      Volunteer   Volunteer   Non-           Time      Volunteer   Volunteer   Non-
Period    (δ=0.3)     (δ=0.6)     volunteer      Period    (δ=0.3)     (δ=0.6)     volunteer
0         6.99        12.97       20.95          16        2.20        3.39        4.99
1         6.41        11.83       19.05          17        2.08        3.16        4.60
2         5.90        10.80       17.33          18        1.98        2.95        4.26
3         5.43        9.86        15.77          19        1.88        2.77        3.94
4         5.01        9.01        14.36          20        1.80        2.59        3.66
5         4.62        8.25        13.08          21        1.72        2.44        3.40
6         4.28        7.56        11.93          22        1.65        2.30        3.17
7         3.96        6.93        10.88          23        1.59        2.17        2.96
8         3.68        6.36        9.94           24        1.53        2.06        2.76
9         3.42        5.85        9.08           25        1.48        1.96        2.59
10        3.19        5.38        8.31           26        1.43        1.86        2.44
11        2.98        4.96        7.61           27        1.39        1.78        2.29
12        2.79        4.58        6.97           28        1.35        1.70        2.17
13        2.62        4.24        6.40           29        1.32        1.63        2.05
14        2.46        3.93        5.88           30        1.28        1.57        1.95
15        2.32        3.65        5.41           NV*       1           1           1

* “NV” means that the game ends with no volunteer.
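The two volunteer-payoff columns of Table 1.2 are consistent with a simple relationship to the non-volunteer payoff at every time period: volunteer payoff = 1 + δ × (non-volunteer payoff − 1). This is an observation about the table's numbers rather than a formula quoted from the text; a minimal spot-check in Python:

```python
# Spot-check of Table 1.2, Condition delta = 0.6:
# volunteer payoff = 1 + delta * (non-volunteer payoff - 1)
delta = 0.6
rows = [(20.95, 12.97),   # time period 0: (non-volunteer, volunteer)
        (19.05, 11.83),   # time period 1
        (1.95, 1.57)]     # time period 30
for nonvolunteer, volunteer in rows:
    implied = 1 + delta * (nonvolunteer - 1)
    print(volunteer, round(implied, 2))   # the two printed values agree
```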

TABLE 1.3: Observed frequency distributions of termination time by session for Conditions δ=0.3 and δ=0.6

Condition δ=0.3                          Termination Time
                   0     1    2 to 4  5 to 7  8 to 10  11 to 13  14 to 16  17 to 30  NS*   Total
Session 1          22    14     40      16       16        4         5        22      11    150
Session 2          14     6     44      25        6       16        10        16      13    150
Session 3          20    12     38      24       19        6        11        19       1    150
Session 4          23    15     35      24       13        5        11        21       3    150
Session 5          34    19     30       6        7       20         8        15      11    150
Across sessions   113    66    187      95       61       51        45        93      39    750

Condition δ=0.6                          Termination Time
                   0     1    2 to 4  5 to 7  8 to 10  11 to 13  14 to 16  17 to 30  NS*   Total
Session 1          64    29     27      15        5        1         1         7       1    150
Session 2          28    21     52      14       16        6         3         6       4    150
Session 3          32    23     68      16        4        0         1         6       0    150
Session 4          62    29     26      23        2        2         2         4       0    150
Session 5          47    21     42      27        4        0         4         5       0    150
Across sessions   233   123    215      95       31        9        11        28       5    750

* “NS” means that the game ends with no stopper.


TABLE 2.1: Predicted probabilities and observed relative frequencies of bids on the group and aggregate levels for Condition LUBA

          Session 1          Session 2          Session 3          Session 4          Session 5          Aggregate
Bid       1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  Predicted
1          0.177    0.183     0.093    0.120     0.120    0.110     0.157    0.107     0.157    0.173     0.141    0.139     0.133
2          0.127    0.100     0.097    0.157     0.083    0.070     0.123    0.107     0.133    0.113     0.113    0.109     0.156
3          0.180    0.197     0.153    0.157     0.123    0.183     0.107    0.120     0.143    0.157     0.141    0.163     0.157
4          0.140    0.130     0.140    0.127     0.123    0.103     0.153    0.133     0.160    0.177     0.143    0.134     0.143
5          0.063    0.097     0.150    0.150     0.107    0.110     0.153    0.127     0.100    0.097     0.115    0.116     0.118
6          0.087    0.067     0.133    0.107     0.120    0.130     0.087    0.083     0.107    0.083     0.107    0.094     0.084
7          0.077    0.073     0.077    0.060     0.083    0.047     0.050    0.093     0.047    0.050     0.067    0.065     0.050
8          0.037    0.040     0.057    0.037     0.037    0.040     0.020    0.047     0.037    0.020     0.037    0.037     0.026
9          0.030    0.013     0.013    0.013     0.043    0.030     0.027    0.060     0.033    0.020     0.029    0.027     0.016
10         0.013    0.017     0.030    0.010     0.030    0.017     0.013    0.010     0.023    0.003     0.022    0.011     0.013
11 to 25   0.070    0.083     0.057    0.063     0.130    0.160     0.110    0.113     0.060    0.107     0.085    0.105     0.103

TABLE 2.2: Predicted probabilities and observed relative frequencies of bids on the group and aggregate levels for Condition HUBA

          Session 1          Session 2          Session 3          Session 4          Session 5          Aggregate
Bid       1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  Predicted
1 to 15    0.030    0.007     0.003    0.000     0.023    0.007     0.010    0.007     0.007    0.000     0.015    0.004     0.000
16         0.007    0.000     0.000    0.000     0.010    0.003     0.003    0.000     0.020    0.000     0.008    0.001     0.000
17         0.010    0.010     0.010    0.000     0.020    0.010     0.010    0.000     0.003    0.000     0.011    0.004     0.000
18         0.010    0.007     0.043    0.027     0.017    0.000     0.007    0.003     0.013    0.000     0.018    0.007     0.000
19         0.043    0.037     0.040    0.077     0.057    0.020     0.023    0.010     0.043    0.017     0.041    0.032     0.001
20         0.063    0.083     0.067    0.057     0.073    0.073     0.060    0.050     0.073    0.103     0.067    0.073     0.045
21         0.093    0.157     0.107    0.077     0.087    0.073     0.077    0.117     0.083    0.097     0.089    0.104     0.111
22         0.140    0.140     0.187    0.163     0.107    0.147     0.123    0.177     0.177    0.173     0.147    0.160     0.168
23         0.187    0.163     0.133    0.167     0.193    0.183     0.207    0.210     0.147    0.173     0.173    0.179     0.205
24         0.173    0.200     0.163    0.193     0.190    0.240     0.200    0.193     0.187    0.223     0.183    0.210     0.228
25         0.243    0.197     0.247    0.240     0.223    0.243     0.280    0.233     0.247    0.213     0.248    0.225     0.244


TABLE 2.3: Results of the Kolmogorov-Smirnov test for Condition LUBA (“R”: Reject the null hypothesis of SMSE play; “FR”: Fail to reject the null hypothesis)

          Session 1          Session 2          Session 3          Session 4          Session 5
Subject   1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60
1         R        R         R        R         FR       R         FR       FR        FR       R
2         FR       FR        FR       FR        R        R         R        R         FR       R
3         FR       R         R        R         R        R         R        R         R        R
4         FR       FR        FR       FR        R        R         FR       FR        FR       FR
5         FR       R         FR       FR        FR       FR        FR       FR        FR       R
6         FR       R         FR       FR        FR       FR        R        FR        FR       FR
7         R        R         R        R         R        R         FR       R         FR       R
8         R        FR        R        FR        R        FR        FR       FR        FR       FR
9         FR       FR        R        R         R        R         R        R         R        R
10        FR       FR        FR       R         R        R         FR       FR        FR       R

TABLE 2.4: Results of the Kolmogorov-Smirnov test for Condition HUBA (“R”: Reject the null hypothesis of SMSE play; “FR”: Fail to reject the null hypothesis)

          Session 1          Session 2          Session 3          Session 4          Session 5
Subject   1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60
1         R        R         R        R         R        FR        FR       FR        R        R
2         FR       R         R        R         R        FR        R        R         R        R
3         FR       FR        R        R         FR       FR        FR       R         R        R
4         FR       FR        FR       R         R        FR        FR       R         FR       R
5         FR       R         FR       R         FR       FR        R        R         FR       FR
6         FR       FR        FR       R         R        R         FR       R         FR       FR
7         R        R         R        R         R        R         FR       FR        R        R
8         R        R         R        FR        R        R         R        FR        FR       FR
9         R        FR        R        R         FR       R         R        FR        FR       FR
10        R        R         R        R         FR       FR        R        R         FR       FR
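Tables 2.3 and 2.4 report, for each subject and block of 30 rounds, whether the Kolmogorov-Smirnov test rejects the null hypothesis that bids were drawn from the symmetric mixed-strategy equilibrium (SMSE) distribution. A minimal sketch of the underlying test statistic follows; the function and variable names are placeholders, and the critical values used in the dissertation for this discrete setting may be obtained differently.

```python
import numpy as np

def ks_statistic(bids, smse_probs):
    """Max absolute gap between the empirical bid CDF and the SMSE CDF.

    bids       : iterable of integer bids in {1, ..., K}
    smse_probs : length-K array of equilibrium probabilities (sums to 1)
    """
    bids = np.asarray(bids)
    grid = np.arange(1, len(smse_probs) + 1)
    empirical_cdf = np.array([(bids <= b).mean() for b in grid])
    equilibrium_cdf = np.cumsum(smse_probs)
    return np.abs(empirical_cdf - equilibrium_cdf).max()

# Hypothetical example: 30 bids compared with a uniform benchmark on {1, ..., 25}
example_bids = [3, 5, 2, 7, 3, 4, 1, 6, 2, 3] * 3
print(ks_statistic(example_bids, np.full(25, 1 / 25)))
```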

TABLE 2.5: Number of switching bids for Condition LUBA (at most 29 opportunities of switching for each of the first and last 30 rounds)

          Session 1          Session 2          Session 3          Session 4          Session 5
Subject   1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60
1         21       16        23       25        25       15        24       27        24       27
2         25       29        25       26        14       10        7        18        7        18
3         15       11        19       14        27       7         23       26        23       26
4         22       21        19       24        23       16        26       28        26       28
5         27       21        25       25        26       25        20       17        20       17
6         27       20        24       24        29       28        9        21        9        21
7         6        0         22       28        25       26        27       23        27       23
8         27       25        19       21        19       15        23       22        23       22
9         21       23        19       22        21       27        23       8         23       8
10        28       26        23       18        23       18        18       13        18       13


TABLE 2.6: Number of switching bids for Condition HUBA (at most 29 opportunities of switching for each of the first and last 30 rounds)

          Session 1          Session 2          Session 3          Session 4          Session 5
Subject   1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60  1 to 30  31 to 60
1         17       16        17       11        22       24        26       19        19       16
2         20       15        28       29        20       12        5        0         10       5
3         15       16        11       9         20       16        22       8         14       0
4         23       25        19       16        21       19        23       24        20       19
5         13       17        23       16        21       14        23       27        25       23
6         23       19        26       14        18       11        22       20        27       21
7         11       2         8        14        16       16        20       19        7        0
8         21       6         20       20        21       20        19       15        20       18
9         23       19        15       17        9        10        21       12        25       26
10        25       27        15       3         23       18        16       15        20       12


TABLE 2.7: Four categories of subjects for Condition LUBA

Rounds 1 to 30
              Session 1   Session 2   Session 3   Session 4   Session 5   Total
Neither       2           3           3           2           0           10
(A) only      2           1           0           2           4           9
(B) only      1           2           4           2           2           11
(A) & (B)     5           4           3           4           4           20

Rounds 31 to 60
              Session 1   Session 2   Session 3   Session 4   Session 5   Total
Neither       5           2           5           2           4           18
(A) only      1           1           1           3           1           7
(B) only      0           3           2           2           3           10
(A) & (B)     4           4           2           3           2           15


TABLE 2.8: Four categories of subjects for Condition HUBA

Rounds 1 to 30
              Session 1   Session 2   Session 3   Session 4   Session 5   Total
Neither       2           5           1           2           3           13
(A) only      2           0           1           0           0           3
(B) only      3           2           5           3           1           14
(A) & (B)     3           3           3           5           6           20

Rounds 31 to 60
              Session 1   Session 2   Session 3   Session 4   Session 5   Total
Neither       5           9           3           3           4           24
(A) only      1           0           3           2           1           7
(B) only      1           0           1           3           1           6
(A) & (B)     3           1           3           2           4           13


TABLE 2.9: Results from a fixed effects regression for each of the five sessions by condition

Dependent variable: individual bid at t (standard errors in parentheses)

Condition LUBA
Independent variable    Session 1    Session 2    Session 3    Session 4    Session 5
Winning bid at t-1       0.0069       0.2083**     0.1252**     0.1480**     0.1662
                        (0.0673)     (0.0787)     (0.0480)     (0.0476)     (0.1145)
Winning bid at t-2       0.2029**     0.1373       0.0455       0.0591       0.1320
                        (0.0689)     (0.0780)     (0.0480)     (0.0478)     (0.1164)
Winning bid at t-3      -0.0072      -0.1285       0.0470       0.0156      -0.0132
                        (0.0693)     (0.0786)     (0.0482)     (0.0474)     (0.1155)
R2                       0.015428     0.023460     0.015638     0.018310     0.005571

Condition HUBA
Independent variable    Session 1    Session 2    Session 3    Session 4    Session 5
Winning bid at t-1       0.0159       0.0381*      0.0108       0.0546**     0.0532**
                        (0.0187)     (0.0164)     (0.0162)     (0.0128)     (0.0170)
Winning bid at t-2       0.0117      -0.0090       0.0194       0.0273*      0.0384*
                        (0.0189)     (0.0164)     (0.0162)     (0.0128)     (0.0170)
Winning bid at t-3       0.0088      -0.0156       0.0171       0.0215       0.0021
                        (0.0187)     (0.0164)     (0.0162)     (0.0131)     (0.0170)
R2                       0.002047     0.011780     0.005170     0.047791     0.025474

Number of observations = 570 (df = 557) per session.
*: significant at the 5% level; **: significant at the 1% level.
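Table 2.9's specification regresses a subject's bid in round t on the winning bids of the previous three rounds with subject fixed effects (570 observations per session, consistent with 10 subjects × 57 usable rounds once three lags are required). A rough sketch of such a regression is shown below; the data-frame column names are assumptions, and details such as the treatment of rounds with no winning bid may differ from the dissertation's implementation.

```python
import pandas as pd
import statsmodels.formula.api as smf

def lagged_winning_bid_regression(df: pd.DataFrame):
    """Subject fixed-effects OLS of bid_t on the winning bids at t-1, t-2, t-3.

    Assumed columns: subject, bid, win_lag1, win_lag2, win_lag3.
    """
    model = smf.ols("bid ~ win_lag1 + win_lag2 + win_lag3 + C(subject)", data=df)
    return model.fit()

# Usage with a hypothetical one-session data frame:
# result = lagged_winning_bid_regression(session_df)
# print(result.summary())
```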

TABLE 3.1: Examples of a discrete bottleneck game with parameters n=10, α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60} and t* =50: s=1/3 (top panel) and s=3 (bottom panel)

s=1/3 (top panel)
Player number   Departure time   Travel time   Arrival time   Travel cost
1               30               3             33             13.2
2               34               3             37             10.8
3               34               6             40             12
4               34               9             43             13.2
5               40               6             46             8.4
6               42               7             49             7.6
7               44               8             52             12.8
8               45               10            55             22
9               47               11            58             30.2
10              48               13            61             39.4

s=3 (bottom panel)
Player number   Departure time   Travel time   Arrival time   Travel cost
1               48               1             49             1.6
2               48               1             49             1.6
3               48               1             49             1.6
4               48               2             50             2
5               49               1             50             1
6               49               1             50             1
7               49               2             51             4.4
8               49               2             51             4.4
9               49               2             51             4.4
10              49               3             52             7.8
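The travel-cost column of Table 3.1 is consistent with the standard bottleneck cost function: travel time valued at α per period, plus a schedule-delay penalty of β per period of early arrival or γ per period of late arrival relative to t*. The sketch below reproduces the table's cost entries from the listed departure and travel times; it takes travel time as given rather than deriving it from the queue, and it is an illustrative reading of the table, not code from the dissertation.

```python
def travel_cost(departure, travel_time, alpha=1.0, beta=0.6, gamma=2.4, t_star=50):
    """Cost = alpha*travel_time + beta*(periods early) or gamma*(periods late)."""
    arrival = departure + travel_time
    early = max(0, t_star - arrival)
    late = max(0, arrival - t_star)
    return alpha * travel_time + beta * early + gamma * late

# Checks against Table 3.1:
print(travel_cost(30, 3))   # 13.2  (player 1, top panel)
print(travel_cost(49, 3))   # 7.8   (player 10, bottom panel)
```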


TABLE 3.2: Symmetric mixed-strategy equilibrium solutions for the cases n=4, s=1, α=1, β∈{0.25, 0.5}, γ=2, and t∈{t* -8, …, t* , …, t* +8} (NP = Not Provided)

               t*-8   t*-7   t*-6    t*-5    t*-4    t*-3    t*-2    t*-1    t*     C       T
β=0.25  O&R*   0      0      0.038   0.148   0.239   0.288   0.200   0.086   0      2.336   1.774
        ZKMD   0      0      0.038   0.148   0.239   0.288   0.200   0.086   0      2.34    NP
        O&R    0      0      0       0.076   0.287   0.347   0.212   0.079   0      2.085   1.583

β=0.5   O&R*   0      0      0       0       0.262   0.414   0.219   0.105   0      2.893   2.129
        ZKMD   0      0      0       0       0.262   0.414   0.219   0.105   0      2.89    NP
        O&R    0      0      0       0       0.172   0.602   0.054   0.172   0      2.629   1.786


TABLE 3.3: Comparison of Vickrey’s and O&R’s bottleneck models with parameters n=10, α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60}, t* =50, and s∈{1/4, 1/3, 1/2, 1, 2, 3, 4}

Model             C        TC        TTT       TTC       tF       tL

s=4
Vickrey           1.2      12        6         6         48       50.5
O&R               2.201    22.011    15.884    15.884    47       48
Vickrey/O&R (%)   54.5     54.5      37.8      37.8

s=3
Vickrey           1.6      16        8         8         47.333   50.667
O&R               2.802    28.018    20.230    20.230    46       47
Vickrey/O&R (%)   57.1     57.1      39.5      39.5

s=2
Vickrey           2.4      24        12        12        46       51
O&R               3.431    34.313    22.151    22.151    45       50
Vickrey/O&R (%)   69.9     69.9      54.2      54.2

s=1
Vickrey           4.8      48        24        24        42       52
O&R               5.858    58.579    34.402    34.402    41       50
Vickrey/O&R (%)   81.9     81.9      69.8      69.8

s=1/2 (d=2)
Vickrey           9.6      96        48        48        34       54
O&R               11.403   114.028   65.566    65.566    33       51
Vickrey/O&R (%)   84.2     84.2      73.2      73.2

s=1/3 (d=3)
Vickrey           14.4     144       72        72        26       56
O&R               17.081   170.808   98.160    98.160    24       51
Vickrey/O&R (%)   84.3     84.3      73.3      73.3

s=1/4 (d=4)
Vickrey           19.2     192       96        96        18       58
O&R               22.773   227.727   130.894   130.894   15       52
Vickrey/O&R (%)   84.3     84.3      73.3      73.3


TABLE 3.4: Comparison of Vickrey’s and O&R’s bottleneck models with parameters s=1, α=1, β=0.6, γ=2.4, t∈{1, 2, …, 60}, t* =50, and n∈{5, 10, 15, 20, 30, 40, 50}

Model             C        TC         TTT       TTC       tF       tL

n=5
Vickrey           2.4      12         6         6         46       51
O&R               3.470    17.352     11.272    11.272    45       48
Vickrey/O&R (%)   69.2     69.2       53.2      53.2

n=10
Vickrey           4.8      48         24        24        42       52
O&R               5.858    58.579     34.402    34.402    41       50
Vickrey/O&R (%)   81.9     81.9       69.8      69.8

n=15
Vickrey           7.2      108        54        54        38       53
O&R               8.282    124.233    70.092    70.092    37       51
Vickrey/O&R (%)   86.9     86.9       77.0      77.0

n=20
Vickrey           9.6      192        96        96        34       54
O&R               10.666   213.320    117.053   117.053   33       52
Vickrey/O&R (%)   90.0     90.0       82.0      82.0

n=30
Vickrey           14.4     432        216       216       26       56
O&R               15.470   464.106    247.788   247.788   25       54
Vickrey/O&R (%)   93.1     93.1       87.2      87.2

n=40
Vickrey           19.2     768        384       384       18       58
O&R               20.273   810.901    426.554   426.554   17       56
Vickrey/O&R (%)   94.7     94.7       90.0      90.0

n=50
Vickrey           24       1200       600       600       10       60
O&R               25.074   1253.704   653.337   653.337   9        58
Vickrey/O&R (%)   95.7     95.7       91.8      91.8


TABLE 3.5: Symmetric mixed-strategy equilibrium solutions for the case where n=10 (columns 2 and 3) and the case where n is either 8 with probability 0.6 or 12 with probability 0.4 (columns 4 and 5) with parameters s=1, α=1, β=0.6, γ=2.4, t∈{1, …, 60}, and t* =50.

            n = 10                          Pr(n = 8) = 0.6 and Pr(n = 12) = 0.4
Time        Probability   Cumulative        Probability   Cumulative
1 to 39     0             0                 0             0
40          0             0                 0             0
41          0.032         0.032             0.239         0.239
42          0.358         0.390             0.285         0.524
43          0.175         0.564             0.142         0.666
44          0.187         0.751             0.097         0.763
45          0.028         0.779             0.061         0.824
46          0.100         0.879             0.061         0.885
47          0.004         0.884             0.044         0.929
48          0.075         0.959             0.048         0.977
49          0             0.959             0.023         1
50          0.041         1                 0             1
51 to 60    0             1                 0             1
C           5.858                           6.231


TABLE 3.6: Symmetric mixed-strategy equilibrium solutions for three different costs of choosing an alternative transportation mode not subject to congestion with n=10, s=1, α=1, β=0.6, γ=2.4, t∈{1, …, 60}, and t* =50.

Time        C_alt = 7    C_alt = 5    C_alt = 3
1 to 40     0            0            0
41          0.032        0            0
42          0.357        0            0
43          0.174        0.222        0
44          0.187        0.267        0
45          0.028        0.068        0
46          0.100        0.107        0.107
47          0.004        0.022        0.159
48          0.075        0.070        0
49          0            0.008        0.053
50          0.041        0            0
51 to 60    0            0            0
p_alt       0            0.235        0.681
C           5.858        5            3


APPENDIX C: INSTRUCTIONS

Instructions for Chapter 1, Condition δ=0.3

Group Decisions in Real Time: Subject Instructions

Introduction

Welcome to the “Group Decisions in Real Time” experiment. During this experiment you will be asked to make a large number of decisions and so will the other participants. Your decisions, as well as the decisions of the other participants, will determine your monetary payoff according to the rules that will be explained shortly. The money that you earn during the experiment has been provided by a grant agency. It will be paid to you in cash at the end of the session.

Please read the instructions carefully. If you have any questions, please feel free to raise your hand. One of the experimenters will come to assist you.

From now on communication between the participants is strictly prohibited. If the participants communicate with one another in any shape or form, the experiment will be terminated.

Description of the Task

A total of 9 persons participate in this experiment (i.e., you and 8 other participants). During the experiment, all of these persons will participate in a series of 50 identical rounds. At the beginning of each round, the computer will randomly divide the 9 players into 3 groups of 3 members each. The composition of your group will change randomly from round to round (i.e., the people you play with in one round will not necessarily be the same people you played with in the previous round). Throughout all 50 rounds, your identity will not be disclosed to the other group members and their identities will not be disclosed to you.

Decisions. Each round is played over time, which is measured by a clock. Rather than dealing with fractions of a second, the time in each round is divided into 30 steps, each lasting about 1.5 seconds. In each round, you, as well as the other two members of your group, will be asked to decide whether and at what time to stop the clock. This is the only decision you’ll be asked to make. The mechanics of doing so will be explained later. The player who is the first to stop the clock will be designated as the Stopper, and the others will be designated as Non-stoppers. In case multiple group members stop the clock at the same period, they will all be designated as Stoppers. No player is compelled to stop the clock; it is entirely possible that 30 steps elapse without any player stopping the clock.

Payoffs. Payoffs are associated with the time the clock is stopped. As the clock progresses through the round, the payoffs of the Stopper and of the Non-stoppers will decrease, but not at the same rate. The basic idea here is to capture the fact that stopping the clock is costly, and that this cost increases with time. Therefore, Stoppers will always earn less than Non-stoppers. If no player stops the clock, or if the clock is stopped relatively late in the round, all group members will earn less than if the clock is stopped early in the round. The payoffs will stop decaying once the clock is stopped.

Examples

To illustrate how payoffs depend on the time of stopping the clock, please consult Table 1 at the end of the instructions. This table shows the relationship between the time at which the clock is stopped and the payoffs (measured in points) of the Stopper and Non-stoppers. See also Figure 1, which exhibits this relationship graphically. Please consider the following numerical examples:

Example 1: Thirty steps have elapsed with no player stopping the clock. In this case, each of the group members earns 1 point.

Example 2: One of the group members was the first to stop the clock at step 5. Then, the Stopper earns 4.62 points whereas each of the Non-stoppers earns 13.08 points.

Example 3: Two members in your group stopped the clock at step 10. Then, each of these two Stoppers earns 3.19 points whereas the other group member earns 8.31 points.

Example 4: All group members simultaneously stopped the clock at step 20. Then, each of them earns 1.80 points.

Description of the Decision Screen

Figure 1 displays a copy of the computer screen on your PC that will be presented to you on each round. A box right above a red circle is a clock, which runs from 0 to 30 steps. A diagram at the middle of the screen plots the payoffs of the Stopper (black curve) and each of the Non-stoppers (blue curve) against the elapsed steps. As the clock progresses, you will observe red dots moving along the two payoff curves indicating the current payoffs of the Stopper and Non-stoppers. Along the right side of the payoff diagram, two boxes display the payoffs of the Stopper and Non-stoppers. These payoffs will be updated on each step as the clock progresses to a new step. A purple horizontal bar perpendicular to the x-axis (elapsed steps) at step 30 indicates that step 30 is the last step at which you can choose to stop the clock. Finally, a box on the top-left corner displays the number of the current round, your total score (= total points), and the payment you will receive at the end of the session.

Stopping the Clock. We’ll now explain how to stop the clock. Once all the participants have indicated that they are ready to start the current round, the computer screen shown in Figure 1 will appear on your PC. Until the moment the clock starts, the computer will keep the mouse pointer inside the red circle. Once the clock starts, you can stop the clock at any step you want. All you need to do is simply move the mouse pointer outside of the red circle. We use this procedure to eliminate any noise due to clicking. If someone else in your group is the first to stop the clock, all the other group members will immediately be notified. No other player will be in a position to stop the clock, as the computer will be automatically immobilized. In that case, please wait until all 30 steps elapse.

Interpreting the Results

At the end of each round, after 30 steps have elapsed, a Results screen will appear. Figure 2 at the end of the instructions shows an example of this screen. The Results screen shows whether you stopped the clock, at which step you stopped the clock if you did so, at which step the first participant stopped the clock, and your payoff for the current round. In the example shown in Figure 2,

• You did not stop the clock.
• One of the two other group members was the first to stop the clock at step 15.
• Your payoff for this round was 5.41 points.

End of Experiment

After completing all 50 rounds, a summary screen will display the total points you have accumulated and the corresponding earnings in dollars (points will be converted to money at the rate 20 points = $1.00). During the experiment, the cumulative number of points and dollar payoff that you have earned will be displayed in the box on the top-left corner of your computer display. Please remain at your desk until asked to come forward and receive payment for the experiment.

Please place the instructions on the table in front of you to indicate that you have completed reading them. The experiment will begin shortly. Initially, you will play unpaid practice rounds that familiarize you with how to stop the clock by moving the mouse. You may repeat the practice rounds as many times as you wish until you feel comfortable with the program. Once you are finished with the practice rounds, click on the “I’m ready to start playing” button. Once everyone has clicked on this button, the 50 paid rounds will follow.

Please remember that no communication is allowed during the experiment. If you encounter any difficulties, please raise your hand and someone will assist you.

Table 1: Payoff Table

(Payoffs in points)
Step    Stopper   Non-stopper      Step    Stopper   Non-stopper
0       6.99      20.95            16      2.20      4.99
1       6.41      19.05            17      2.08      4.60
2       5.90      17.33            18      1.98      4.26
3       5.43      15.77            19      1.88      3.94
4       5.01      14.36            20      1.80      3.66
5       4.62      13.08            21      1.72      3.40
6       4.28      11.93            22      1.65      3.17
7       3.96      10.88            23      1.59      2.96
8       3.68      9.94             24      1.53      2.76
9       3.42      9.08             25      1.48      2.59
10      3.19      8.31             26      1.43      2.44
11      2.98      7.61             27      1.39      2.29
12      2.79      6.97             28      1.35      2.17
13      2.62      6.40             29      1.32      2.05
14      2.46      5.88             30      1.28      1.95
15      2.32      5.41             NS**    1         1

** “NS” implies that a round ends with no stopper.


Figure 1: Decision Screen

Figure 2: Results Screen


Instructions for Chapter 2, Condition LUBA

Lowest Unique Bid Auction: Instructions

Introduction

Welcome to the “ Lowest Unique Bid Auction ” experiment. The purpose of this experiment is to study a variant of a new type of auction that has become quite popular on the Internet. During the experiment, you’ll be asked to make a large number of decisions and so will the other participants. Your decisions, as well as the decisions of the other participants, will determine your earnings according to the rules of the auction that will be explained below. The money that you’ll earn will be paid to you in cash at the end of the session.

Please read the instructions carefully. If you have any questions, please raise your hand and one of the experimenters will come to assist you.

From now on communication between the participants is forbidden. If the participants communicate in any shape or form, the experiment will terminate.

Description of the Experiment

A total of 10 participants (yourself included) will take part in this experiment. During the experiment, all the ten players will participate in a series of 60 identical rounds. Although you’ll be informed at the end of each round of the decisions made by the other participants, their identities will not be disclosed to you. Nor will you be able to associate any decision with any participant. Similarly, your identity will not be revealed to the other participants.

Rules of the Auction

On each round, you will participate in an auction. To do so, you will be asked to enter a bid, which is an integer between 1 and 25 . You may enter any bid within this range. Your bid, as well as the other participants’ bids, will be entered independently and anonymously.

The winner of the auction will be the player who enters the lowest bid, provided that this bid is unique (no other player enters the same bid). If two or more players enter the same bid, then their bids will be discarded. If there is no lowest unique bid, then there will be no winner in the round. The winner, if there is one, will earn the value of his/her bid (in points). Every other group player will earn nothing.


Here is a brief summary of the rules of the auction. Each player enters a bid. The winner is the player who enters the lowest bid, provided that it is unique. If there is a winner, then he/she earns the value of his/her bid, whereas every other player wins nothing.

Examples

To illustrate the auction mechanism, please refer to the examples below. In the examples, 10 bids are presented in an ascending order (the same way they will be presented to you during the experiment), one bid per player.

Example 1

Bids 2 2 6 6 6 8 16 21 24 24

Outcome: The winning bid is 8, which yields 8 points to its bidder.

Example 2

Bids 10 11 11 11 17 17 18 20 20 25

Outcome: The winning bid is 10, which yields 10 points to its bidder.

Example 3

Bids 5 5 5 9 9 12 12 12 12 18

Outcome: The winning bid is 18, which yields 18 points to its bidder.

Example 4

Bids 1 1 4 4 4 9 9 21 21 21

Outcome: As there is no unique bid, there is no winner in this auction.
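
The winner-determination rule illustrated by Examples 1–4 can be expressed in a few lines of code. The Python sketch below is only an illustration of the rule, not the program used in the sessions; it reproduces the outcomes of all four examples.

    # Illustrative sketch of the lowest-unique-bid rule (not the session software).
    from collections import Counter

    def lowest_unique_bid(bids):
        """Return the winning (lowest unique) bid, or None if no bid is unique."""
        counts = Counter(bids)
        unique = [b for b in bids if counts[b] == 1]
        return min(unique) if unique else None

    print(lowest_unique_bid([2, 2, 6, 6, 6, 8, 16, 21, 24, 24]))        # 8    (Example 1)
    print(lowest_unique_bid([10, 11, 11, 11, 17, 17, 18, 20, 20, 25]))  # 10   (Example 2)
    print(lowest_unique_bid([5, 5, 5, 9, 9, 12, 12, 12, 12, 18]))       # 18   (Example 3)
    print(lowest_unique_bid([1, 1, 4, 4, 4, 9, 9, 21, 21, 21]))         # None (Example 4: no winner)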

Description of the Screens

Decision screen. Please enter your bid for the round by using the mouse cursor to press the numbered keys on the screen. At any point you can clear your bid by pressing the “C” (clear) button and enter a new one. The range of admissible bids appears on this screen. The computer will not accept any bid lower than 1 or higher than 25.

Once you are satisfied with your bid, please press the Confirm button.

On the right-hand side of the screen, there is a button labeled History. Please press this button whenever you wish to receive information about the auctions that took place in the previous rounds (see below).

At the upper right-hand part of the screen there is a box showing the current round number, the total number of points you have earned, and your cumulative payoff (in dollars) for the session.

History screen. You obtain access to this screen by pressing the “History” button on the Decision screen. The History screen presents a table of all the auctions that were held in the previous rounds. Each auction is presented in a separate column, as follows:
Row 1: The round number.
Row 2: Your bid in that round.
Row 3: The winning bid.
Other rows: All the bids entered in that auction.
By pressing the left or right arrows, you may view auctions held in earlier or later rounds, respectively. Pressing the “Back” button will take you back to the Decision screen.

The History screen is for your convenience only. It keeps track of the entire history of the session in case you wish to consult it.

Results screen. This screen lists your bid for the round, all the bids entered in that round, the winning bid, whether you won the auction, and your payoff for the round. After examining the results of the round, please press the Next button to continue to the next round.

Payoff

At the end of the experiment, you’ll be paid $1.00 in cash for every 1.5 points you earn. You’ll be asked to sign a consent form and a receipt for your payment.
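
The conversion from points to dollars is a single division at the stated rate of 1.5 points per $1.00. The Python sketch below illustrates it; rounding to the nearest cent is an assumption of the sketch, not a description of the actual payment procedure.

    # Illustrative sketch only (not the payment software).
    def dollars(total_points, points_per_dollar=1.5):
        """Convert session points to dollars at 1.5 points = $1.00."""
        return round(total_points / points_per_dollar, 2)

    # Example: winning two auctions with bids of 14 and 16 yields 30 points,
    # which converts to $20.00.
    print(dollars(30))   # 20.0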

Please remember once again that no communication is allowed during the experiment. If you encounter any difficulties during the experiment, please raise your hand and one of the experimenters will come to assist you.

The experiment will begin shortly, once all the participants have finished reading the instructions.
