Optimal Emissions of Greenhouse Gasses Under Stochastic Catastrophe Risk

Oliver Browne Supervisor: Dr Stephen Poletti

This dissertation is presented in part fulfilment of the requirements for the Degree of BA(hons) of The University of Auckland.

6/27/2011

Abstract

A central issue in climate change economics is estimating the optimal trajectory of greenhouse gas emissions. The standard tool for estimating these trajectories is the Integrated Assessment Model (IAM). Most IAMs are non-stochastic and parameterised by scientific best estimates of what are profoundly uncertain parameters. Because they fail to model this uncertainty, these IAMs may understate the extent to which we should abate future greenhouse gas emissions. One risk which is not captured by existing IAMs is the possibility of a low probability, high cost catastrophe, in which there is a dramatic change in the earth’s climate leading to large and rapid economic damages. To model this risk I construct a modified version of Nordhaus’ DICE model in which there are multiple states of the world, with and without climate catastrophes, in each period of time. The probability of a catastrophe occurring is governed by the degree to which global average temperatures have risen above pre-industrial levels. This model with stochastic tipping points leads to a more conservative emissions trajectory than a model with no risk of such a tipping point. This suggests one way in which the non-stochastic IAMs currently used for policy analysis may overstate the socially optimal level of greenhouse gas emissions.


Table of Contents

Abstract
1. Introduction
2. Further Background
2.1. Abrupt Climate Change
2.2. Uncertainty, Irreversibility and Option Values
2.3. Analytic Economic Models of Environmental Catastrophe
2.4. Uncertainty in Integrated Assessment Modelling
2.5. Fat-Tailed Uncertainty and the Dismal Theorem
2.6. Implementing Emissions Abatements
3. Model
3.1. Ramsey Growth
3.2. Global Warming Module
3.3. Greenhouse Gas Abatements
3.4. Stochastic Catastrophe Risk
3.5. Parameterisation, Coding and Implementation
4. Results
4.1. Optimal Policy with and without Catastrophe Risk
4.2. Sensitivity of Results to Catastrophe Parameters
4.3. Optimal Policy Recovery after a Catastrophe
4.4. Comparison with Nordhaus’ DICE-2007 Model
5. Conclusions
Appendices
I. Code: Model File
II. Code: Data File
III. Table of Parameters
IV. Fitting Damage Function Parameters
References


1. Introduction

There is increasing scientific consensus that human-induced climate change is causing an appreciable warming of our planet (IPCC, 2007a). In spite of this, there is still much uncertainty and ambiguity about the extent to which this is occurring, the impacts of this warming on the environment, and the cost of these impacts on human activities and welfare. Beyond this ambiguity, researchers are also beginning to recognise the risks of climate catastrophes or irreversible climate tipping points which, although unlikely, could cause much greater damage than average forecasts currently predict. Given these risks, it is important that policies are in place to ensure that society manages its emissions of greenhouse gasses in a manner that achieves the best social outcomes.

Lord Nicholas Stern (2006), in his review of the subject, describes climate change as the “biggest market failure the world has ever seen”. He says this because at its core climate change is an economic problem. More specifically, climate change is what economists call an externality problem. An externality arises when a person’s action (such as the emission of greenhouse gasses) imposes a cost on others (here, damage from climate change that reduces future welfare) which the person taking the action does not fully bear. Because of this, emitters do not have the incentive to behave in a socially optimal manner.

Economics has a simple solution to this problem: emitters should pay the full social cost of their emissions. This can be implemented by imposing a tax on carbon emissions or, (almost) equivalently and seemingly more likely, a tradable cap on emissions. This raises the question: what is the social cost of greenhouse emissions? Or, equivalently, what should the optimal trajectory of greenhouse emissions over time be?

In order to design a sensible climate policy it is necessary to know the extent to which society should reduce its greenhouse emissions to prevent climate change. Typically, when scientists and environmentalists discuss global warming policy, they advocate fixed targets for climate change such as 350 ppm CO2e (McKibben, 2011) or 2°C above pre-industrial levels (Meinshausen et al., 2009). They justify these targets by arguing that above these limits the costs of climate change to human activities, as well as the uncertainty about the eventual outcome, are too large.

To economists, this approach is not good enough because it does not consider the costs to society of reducing greenhouse gas emissions. Money invested in technology or capital to reduce greenhouse gasses cannot be spent on consumption or other types of technological or capital investment which could increase our future welfare. A better policy would be one which trades off the costs to society of a changing climate against the costs to society of undertaking expensive abatements.


This cost-benefit principle has given rise to the class of economic models called ‘Integrated Assessment Models’ (IAMs). These models typically combine a neo-classical economic growth model with a simplified global climate model and a choice to spend output ‘abating’ greenhouse gas emissions to mitigate global warming. In these models, a benevolent central planner chooses a fraction of gross world product in each period to invest in capital and greenhouse gas abatement in order to maximise the present value of expected future utility.

IAMs are the standard economic tool used to find the optimal trajectory of greenhouse gas emissions. They have gained widespread use in the academic literature (Nordhaus 2008, Tol 2003, Hope 2002) as well as in governmental (Stern 2006, Mendelsohn 2004) and intergovernmental reviews (IPCC 2007c).

The main weakness of IAMs is that their results are highly sensitive to a number of crucial assumptions. These assumptions include social preferences, such as discount rates and risk aversion parameters, which are difficult to know or to aggregate across societies with vastly different values. Other assumptions depend on the future path of technological progress, such as productivity growth and the future cost of emissions abatements. Finally, some assumptions concern the behaviour of the climate, for which much of the science is imperfect or still developing, and the relationship between climate and economy, where many of the impacts are also unknown (Ackerman et al., 2009).

Although there is profound uncertainty about the best parameters for these models, few IAMs adequately take uncertainty into account. Often authors do not optimise their models under uncertainty; instead they incorporate uncertainty outside of the optimisation, leading to qualitatively different results. When they do incorporate uncertainty, it is often in a static rather than a dynamic manner. As such, they do not allow optimal policies to adapt when information is learnt about the uncertain parameters. Properly incorporating uncertainty into such models makes them more complex and difficult to solve. Because of this, model builders need to be selective and incorporate only the most pertinent aspects of uncertainty into their models.

I develop a model which considers the possibility of a rapid and large-scale change in the climate if a so-called ‘climate catastrophe’ occurs. This is something that most existing IAMs fail to address (Keller et al. 2004 and Baranzini et al. 2003 are notable exceptions). The historic record shows that the earth’s climate is prone to rapid, flickering changes, repeatedly oscillating into and out of ‘ice ages’ (Alley et al., 2003). Current scientific models have a poor understanding of the mechanisms by which this occurs. Because of this, current estimates of the probability of such a catastrophe occurring are limited to those derived from surveys of the subjective opinions of climate experts.


This dissertation attempts to replicate and extend Nordhaus’ DICE model, one of the most cited IAMs in the literature. In the extension, there is a small risk of a one-off catastrophe occurring in each period. The impact of a catastrophe is a permanent 25% reduction in Gross World Product (GWP). The probability of a catastrophe occurring in any period is an increasing function of the atmospheric stock of greenhouse gasses.

The results show that when catastrophic risk is incorporated into the model, the optimal trajectory of greenhouse gas emissions is more stringent than when the risk is excluded. Incorporating catastrophe risk leads to an optimal policy with 45% lower total greenhouse gas emissions. Greenhouse gas emissions peak, and are completely halted, 35 years earlier. Under the assumptions in my model, failing to incorporate the risk of catastrophes into policy increases the risk of such a catastrophe occurring by 65%. This suggests that the non-stochastic IAMs currently widely used for policy purposes overstate the socially optimal trajectory of greenhouse gas emissions.

The remainder of this dissertation proceeds as follows. Section 2 presents an overview of the science of catastrophic climate change, surveys the economic tools available to model and estimate the optimal emission reductions under catastrophic risk, and briefly discusses policies to implement such emissions reductions. Section 3 details the model and Section 4 presents the results. Section 5 concludes with the implications of my results for current global warming policy.


2. Further Background

This section gives background in key areas to provide context for the model which I build.

In this section, I first look at the scientific evidence on the risks of rapid catastrophic climate change. Second, I discuss the role that uncertainty plays in environmental economics and then review the literature on analytic economic models which deal with the risk of environmental catastrophe. Additional background is given on IAMs, their results, and how they have attempted to deal with uncertainty and catastrophe. I discuss Weitzman’s Dismal Theorem, which argues that the uncertainty around climate change is so large that IAMs should focus more on unlikely catastrophes than on the most likely outcome. Finally, there is a brief discussion of the practical implementation of the emissions reductions prescribed by IAMs.

2.1. Abrupt Climate Change

Paleoclimatic studies show the earth’s climate has historically been highly unstable and prone to rapid shifts. These shifts are best exemplified by the historic oscillations of the earth’s climate into and out of ice ages, which take place over an extended timescale of hundreds or even thousands of years. More recently, researchers have found evidence of historic patterns of frequent climate flickering, where the climate can change rapidly over the course of only a few years or decades (Hall and Behl, 2006). The temperature changes during such fluctuations are estimated to be between a third and a half as large as the change between the last ice age and today (up to 7°C in mid-latitude regions).

This has led many researchers to consider the prospect of ‘abrupt climate change’. The US National Research Council defines abrupt climate change as occurring “when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the system itself and much faster than the cause” (Alley et al., 2003).

To explain these climate flickers scientists have used a variety of old and new explanations. An old explanation is the shutdown of the North Atlantic Thermohaline circulation (Hall and Behl, 2006). This is a deep-sea conveyor belt which moves warm water from the tropics up to the higher latitudes and is responsible for warming Europe by several degrees Celsius. A rapid shutdown of this circulation would induce rapid cooling and glaciation throughout Europe and North America. Through a variety of feedback mechanisms, such as greater albedo from increased ice cover reflecting away more radiation, this would lead to widespread cooling around the globe.

A more novel explanation for this process is the so-called “Clathrate Gun Hypothesis” (Kennett et al., 2003). This theory postulates that a small warming of the oceans at an intermediate depth (400-1000 m) could trigger a rapid release of methane hydrates stored in the ocean floors. Methane is a powerful greenhouse gas and its release on such a large scale would trigger runaway global warming.

To consider the risks of such climate instability, it is important to know, first, exactly how the mechanisms detailed above would occur; second, the likelihood of, and possible thresholds for, triggering these mechanisms; and third, the impact that the ensuing events would have on our climate and economy. There are currently insufficient data and insufficient knowledge to accurately model the underlying processes. As a result, current state-of-the-art scientific models can adequately quantify neither the likelihood of these mechanisms occurring nor the consequences of the resulting nonlinear change in climate (Kriegler et al., 2009).

Estimates of the likelihood and impacts of such catastrophic events are limited to surveys of the subjective opinions of experts on these subjects. Lenton et al. (2008) discuss nine key tipping elements in the earth’s climate. They survey a panel of experts to give the threats a relative ranking of their sensitivity and uncertainty. The survey finds five of the nine elements to be most pertinent. Of these five, the melting of the Greenland Ice Sheet is evaluated to be the most sensitive and least uncertain. The experts assess the collapse of the Thermohaline circulation to be the least sensitive but in the middle in terms of uncertainty. The impacts of global warming on the West Antarctic Ice Sheet, the Amazon Rainforest and the El Niño Southern Oscillation were ranked the most uncertain.

Kriegler et al. (2009) similarly use the subjective estimates of a panel of experts to estimate quantitative probabilities of five different tipping points (including most of those mentioned above). The experts also predicted the interactions between the tipping points (for example, how much more likely rapid dieback of the Amazon would be if El Niño patterns were to suddenly intensify). Kriegler et al. used this to calculate cumulative probabilities of these catastrophes occurring.

The data from both surveys show a very large range of opinions and little agreement within the panels. Because these methods aggregate the subjective opinions of experts rather than peer-reviewed models, the estimates may be subject to biases or other systematic errors. For example, an expert who spends much of his time thinking about a certain scenario in the absence of objective data may come to believe that this scenario is more likely to happen than it really is.

Kriegler et al. estimate that the joint probability of at least one of their tipping points being reached is at least 16 percent on a moderate emissions trajectory and 56 percent if we stay on a high emissions trajectory. These estimates are used as the basis for the catastrophe probability function constructed later in this dissertation.

A final issue to consider for the purpose of this dissertation is whether abrupt climate change is a tipping point phenomenon, where there is a non-random but unknown threshold at which a catastrophe will occur, or a stochastic phenomenon, where a catastrophe is possible at any time but becomes more likely the more we perturb the climate.

If we consider a relatively certain tipping element, such as the melting of the Greenland Ice Sheet, then it would make sense to view this as a tipping point phenomenon. Although the mechanics of ice sheet melt are not well understood, the size and physics of the ice sheet suggest that there will be some critical threshold after which a catastrophe becomes inevitable.

However, if we look at the scientific literature on ‘climate flickering’, there appears to be no set threshold, in any observable variable, at which the climate flickers (Hall and Behl, 2006). Changes in the historic climate appear to be unpredictable and chaotic in nature. It is for this reason that I have chosen to model climate catastrophes as a random process rather than as one with an unknown threshold.

2.2. Uncertainty, Irreversibility and Option Values

As we have seen, the science of global warming is full of uncertainty, not only about the incremental costs of global warming but also about the risks of random flickers in climate. In addition, two more uncertainties should be considered (Heal and Kristrom, 2003). First, there is economic uncertainty: the relationship between the environmental impacts of climate change and human welfare. As an example, there is much economic uncertainty about the impact of sea level rise on patterns of migration, production and human welfare in coastal regions. Second, there is policy uncertainty, that is, uncertainty about the relationship between any given policy instrument and its outcomes. For example, what level of carbon tax would be required to reduce emissions by 5 or 10 or 20 percent?

Uncertainty has several implications for IAMs. First, the distributions of many of the parameters associated with climate change are asymmetric: they have long, heavy tails in the sense that there is a small probability of something highly costly happening. Because of this, the expected cost of a policy may differ greatly from the most likely cost. In the case of global warming, there is a large tail of unlikely but highly damaging outcomes. This implies higher expected costs of global warming and so a more conservative emissions policy. This is important when analysing IAMs, which are often parameterised by best estimates rather than expected values.
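A toy example (illustrative numbers of my own, not from the literature) makes the gap concrete. Suppose damage $D$ equals 1 with probability 0.99 and 100 with probability 0.01. Then

$$\mathbb{E}[D] = 0.99 \times 1 + 0.01 \times 100 = 1.99, \qquad \operatorname{mode}(D) = 1,$$

so the expected cost is nearly twice the most likely cost, and a model parameterised by the best estimate understates expected damages by almost half.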

Second, it is usually assumed that society is risk averse, and as such is willing to pay a risk premium to avoid uncertain outcomes even if the expected costs are the same. This is important because the uncertainty about the impacts of climate change grows as the carbon stock increases, so we should be willing to pay more to avoid climate change than expected damages alone would imply.


Third, uncertainty can combine with irreversibility (or temporal rigidity) to give rise to option values. It takes a long time for carbon to cycle through the atmosphere. Under Nordhaus’ specifications, a perturbation in atmospheric carbon takes over 500 years to die away (Joos et al., 1999). As such, the impact of emissions is irreversible in the short term (unless we can develop technologies to sequester atmospheric CO2).

To consider the impact of irreversibility, assume there is a stock pollutant with a non-random threshold above which a catastrophe will occur. If a catastrophe is accidentally triggered and the stock is reversible, then the stock can be reduced back below the threshold and society will only have to pay the costs of the catastrophe for a short period. However, if the catastrophe is irreversible, then it will impose ongoing costs indefinitely into the future.

If there is no uncertainty, then irreversibility is not important because an optimal decision can be made a priori. But in the presence of uncertainty a more conservative approach is optimal than under perfect information. This result suggests a kind of ‘precautionary principle’: exposure should be reduced to risks that may later be regretted. The difference between the cost of the optimal policy with and without uncertainty is called the real option value.

One of the benefits of incorporating randomness into dynamic models like IAMs is that they can account for this option value, since they find the optimal policy with respect to all possible future costs.

There is also a second option value associated with climate change, which IAMs are weaker at modelling. This arises because sunk investments in technology and capital to abate emissions must be made today, but the true impact of climate change is uncertain. If over time more is learnt about the true impacts of climate change, then there is an option value associated with postponing investment in abatements until there is certainty about their necessity (later we see this option value captured in Guillerminet and Tol (2008)’s model).

2.3. Analytic Economic Models of Environmental Catastrophe

There is a large literature of analytic economic models which use optimal control theory to characterise policies for controlling pollution flows. Heal and Kristrom (2007) survey this literature. Some of these models consider the impacts of catastrophic risk on optimal pollution policies. I will review two such models, Clarke and Reed (1994) and Tsur and Zemel (1994), which illustrate the difference between viewing environmental catastrophe risk as a tipping point phenomenon and viewing it as a stochastic process.


In both models, the total stock of an emitted pollutant causes damage in two ways. First, there is a known continuous damage: an increased stock of pollutants causes increased environmental damage. Second, both models include an environmental catastrophe which, if realised, will permanently reduce economic welfare. In Clarke and Reed there is a small probability that a catastrophe will occur at any time, and this probability increases as the pollution stock increases. In Tsur and Zemel the catastrophe is non-stochastic: there is an unknown threshold which, if ever exceeded, triggers the catastrophe.

In Clarke and Reed the optimal policy contains a unique equilibrium stock of pollutants (in both models there is natural sequestration of pollutants, so an equilibrium stock does not imply zero emissions). This equilibrium stock will be lower in the presence of catastrophic risk if the marginal hazard rate of increasing the pollution stock and the consequences of a catastrophe are together sufficiently large.

On the other hand, in Tsur and Zemel the equilibrium level of pollution stock is an interval rather than a unique value. The upper bound of this interval is the pollution stock which is optimal in the absence of catastrophic risk. If pollution stock starts below this interval, the pollution stock will increase until it reaches the bottom of this interval. If pollution starts inside the interval, then the optimal policy will leave pollution stock unchanged. If the pollution stock starts above the interval, pollution will decrease until it reaches the top of this interval.

The intuition behind this is that normally it would be optimal to increase the pollution stock if the marginal benefits outweigh the expected marginal costs (both in terms of marginal climate damage and marginal risk of catastrophe). However if we have previously experienced a stock at a particular level above this point and no catastrophe has occurred, then we have learnt that the catastrophe threshold must lie above this point. As a result, if pollution stock is below its highest historical level, the marginal costs of increasing the stock are reduced (since there is no risk of catastrophe). If pollution stock is above its highest historical level, the marginal costs of increasing the stock are increased (as all of the probability has been pushed into the tail of the distribution above this point). Because of this, different policies may be optimal depending on the pollution history.

However, if the risk is truly stochastic in nature (as in Clarke and Reed), then the fact that a catastrophe did not occur when a certain level of pollution was experienced in the past does not mean that a catastrophe will not happen if we do the same thing again.

This example shows that if the climate is non-stochastic but uncertain then there is potential to learn about the risks involved and better manage them. In this example the learning comes from historical experience of the climate, but it could also come from other sources such as scientific research (as we will see in Weitzman (2009)). On the other hand, if the climate is truly stochastic then the ability to learn is more limited.

Unfortunately, the usefulness of analytic models is limited by the simplifying assumptions necessary to keep them tractable. These models do not fully account for the opportunity costs of abatement in terms of capital accumulation and future growth. To model these dynamics, such models were combined with macroeconomic growth models in what is called ‘Integrated Assessment’. However, because these models are much more complex, they cannot be solved analytically.

2.4. Uncertainty in Integrated Assessment Modelling

Computational IAMs allow a greater degree of detail and realism to be incorporated into economic models. Multi-equation models of the climate and the economy can be used to make the simulations more similar to the real world. However, a more complex model means more assumptions, and as we add assumptions the model often becomes more opaque. In a complex model, it is often unclear which elements of the model are important, what the results depend on, and how the various assumptions in the model interact.

Broadly speaking, there are two types of assumptions in IAMs. First, there are structural assumptions: the equations which govern the model. I discuss these in further detail when I outline my model in Section 3. Second, there is the choice of the parameters in these equations. The choice of parameters in IAMs is a contentious issue. IAMs are highly sensitive to the parameters chosen, and many of the parameters used in such a simulation are profoundly uncertain.

Many of the geophysical parameters can only be imperfectly measured. They may also pertain to mechanisms which are not scientifically well understood. Other parameters, such as levels of risk aversion and social discount rates, refer to social values which differ around the world; currently, there is no agreed way to aggregate these preferences across societies. Some parameters, such as productivity growth, make assumptions about variables hundreds of years into the future which cannot be meaningfully estimated. Lastly, many of the parameters may themselves be endogenous to the choice of climate policy. For example, changes over time in the cost of abatement or the emissions intensity of output depend strongly on the incentives to innovate in these areas.

It is important that parameters are chosen as accurately as possible in an objective and transparent manner. Often, the policy recommendation that arises from these models can be completely changed with a small tweak of the parameters.


For example, compare the outcomes of the IAMs presented by Nordhaus (2008) and Stern (2006). The difference between the two authors’ policy recommendations is huge. Stern’s optimal policy advocates abating 60% of emissions by 2050, whereas Nordhaus’ policy abates only 25%. Both authors use similar IAMs. This large difference arises because of differences in two key parameters, the discount rate and the coefficient of risk aversion. I discuss these choices in more detail in Section 3.1.

Nordhaus (2008) and many IAMs (but notably not Stern (2006)) come to the key conclusion that the optimal policy involves a so-called ‘policy ramp’. That is to say, the optimal policy involves modest cuts in greenhouse gas emissions in the short term, followed by more drastic ones in the medium and long term (Nordhaus, 2008). The intuition behind this is that abating emissions today has a larger opportunity cost, since it costs us a larger proportion of our output and prevents us from investing in capital which will help us grow in the future; in the future, when society is richer, paying for emissions abatements will be relatively less onerous. This ‘policy ramp’ is in contrast to the policies advocated by many environmentalists and scientists, who call for sharp cuts in emissions today without considering the opportunity cost of abating greenhouse gasses.

IAMs currently used by economists are becoming increasingly complicated. I base my model on one of the simplest IAMs, Nordhaus’ DICE-2007 model. Other IAMs often include extra details such as regional disaggregation (RICE-2010 model, Nordhaus 2010), details of industrial structure (Kemfert 2002), or detailed modelling of each individual cost posed by climate change (FUND model, Tol 2003). These models may be useful for studying or clarifying individual issues such as international negotiations, industry policies or the distribution of the impacts of climate change. However, it is important that big-picture policy models such as DICE be parsimonious, so that it is clear which parts of the model are driving the conclusions and policy outcomes.

Usually, the analysis of uncertainty in IAMs is limited to simulating different scenarios and using sensitivity analysis. Sensitivity analysis involves testing the model with different parameter values and observing how sensitive the optimal solution is to these changes. Some authors, such as Stern (2006) and Hope (2002), try to account for uncertainty using Monte Carlo analysis. They run their optimisations independently very many times, each time with different parameters, and generate a continuous distribution of outcomes. Although this technique accounts for some of the impact of uncertainty, it cannot account for all of it. For example, since each optimisation runs independently with different fixed parameters, it does not account for the impact of risk aversion.

Assessing the impact of risk aversion involves optimising the expectation over many scenarios. This is the approach that Tol’s FUND model takes. It generates a smaller number of realisations and then finds an emissions trajectory which maximises the weighted average utility over all of the samples.


This approach accounts for risk aversion. However, its weakness is that it is not dynamic. It cannot account for the ability of agents to learn over time, inferring from past climate damages which realisations of the parameters are more likely. If agents can learn over time, then they can adjust their behaviour accordingly, improving future welfare.

Another problem with these models is that they only consider uncertainty in terms of a continuous distribution of outcomes; there is no discrete probability of a climate catastrophe occurring. Some models do incorporate explicit catastrophe risks. Keller et al. (2004) incorporate a threshold for overturning the Thermohaline circulation in an older version of Nordhaus’ DICE model. Guillerminet and Tol (2008) produce a decision tree model, in which a decision is made every period on whether to undertake a drastic regime of greenhouse gas abatements. The structure of this model effectively captures the learning and dynamic decision-making process associated with a catastrophic tipping point. Both models reach the similar conclusion that the existence of catastrophic risk significantly brings forward the timing of the optimal policy (in Keller et al. it also increases policy intensity). They find this is especially true when the parameters controlling the likelihood and impact of the catastrophe are large or the agent is particularly risk averse. Guillerminet and Tol also find that once a catastrophe becomes inevitable, the incentive to reduce emissions is significantly reduced.

Both of the previous models include non-stochastic but uncertain tipping points rather than truly stochastic catastrophes. Baranzini et al. (2003), on the other hand, model stochastic catastrophes. They produce a real option model to estimate the benefits of undertaking global warming policy, modelling catastrophes as a Poisson process. One weakness of their approach is that they do not link the probability of catastrophe to the degree of global warming; they simply assume that if no action is taken there will be random catastrophes, and that once action is undertaken there will not. Baranzini et al. estimate that if catastrophes (with a $100 billion annual expected cost) are modelled, then there is a 72% probability that the expected benefits of undertaking emissions mitigation will exceed the expected costs. If such catastrophes are excluded, there is only a 16% chance that the expected benefits will exceed the expected costs.

2.5. Fat-Tailed Uncertainty and the Dismal Theorem

Weitzman (2009) argues that the uncertainty in the tails of the probability distributions which parameterise IAMs can be so large that it dominates all other aspects of the analysis. He demonstrates this with what he calls ‘The Dismal Theorem’, which I briefly summarise:

He begins with a two-period model. Between these two periods there are climate-related damages which impact welfare. The damage has a known distribution but unknown parameters. Starting from non-informative prior beliefs, we can learn about these parameters by conducting scientific studies and updating our beliefs in a Bayesian manner. The rate of learning is slow, and so the meta-distribution of the damages is fat tailed (this is the distribution of the damages after the posterior distribution of the unknown parameters is incorporated). The expected disutility of the uncertain damage becomes arbitrarily large, because utility tends to negative infinity as consumption approaches zero (for a constant relative risk aversion utility function).
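The mechanics can be sketched as follows (a stylised rendering of the argument, not Weitzman’s exact statement). With CRRA utility and risk aversion $\alpha > 1$,

$$u(c) = \frac{c^{1-\alpha}}{1-\alpha} \to -\infty \quad \text{as } c \to 0 .$$

If the meta-distribution leaves the posterior density $f(c)$ of future consumption behaving like $c^{k}$ near zero with $k \leq \alpha - 2$, then

$$\mathbb{E}[u(c)] = \int_{0}^{\infty} u(c)\,f(c)\,dc = -\infty ,$$

because the integrand behaves like $c^{1-\alpha+k}$ with $1-\alpha+k \leq -1$ near zero.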

The implication is that under these conditions there is an unbounded willingness to pay to avoid the risk of these climatic damages.

Weitzman’s model combines the two types of uncertainty I have been discussing. There is a stochastic element in the sense that the climate damage between the two periods is randomly distributed. There is also a non-stochastic but unknown element, the parameters which govern the distribution of the climate damages. What Weitzman shows is that we can never learn away the non-stochastic parameter fast enough to keep the variance of the stochastic damage finite.

Nordhaus (2009) criticises Weitzman’s paper because it leans heavily on the asymptotic behaviour of constant relative risk aversion utility functions as consumption goes to zero. Nordhaus argues that this is more a convenient assumption than a realistic description of society’s attitude to risk. Further, he argues that the theorem can only be applied in situations where there is actually a risk that consumption will be near zero. He suggests catastrophic climate change may not be one of these cases because even under the most catastrophic scenario there is no risk of extinction.

Weitzman argues that the implication of the Dismal Theorem is that there should be a burden of proof on those who make policy prescriptions using IAMs to show that this tail behaviour is not relevant to their policy conclusions. This is one of my motivations for building an IAM which incorporates the risk of catastrophes.

Anthoff and Tol (2011) discuss the implications of the Dismal Theorem for their integrated assessment model, FUND. In FUND there are 150 parameters which are randomly generated during Monte Carlo analysis. Randomly generating all these parameters, they cannot statistically reject the hypothesis that the distributions of damages in FUND are fat tailed. In their model this can lead to situations where the effective discount rate for climate change is negative and the problem is unbounded. Anthoff and Tol argue that the Dismal Theorem implies that another welfare criterion is needed. They re-solve the FUND model with the objective of minimising the thickness of the tail of the distribution of consumption (a ‘minimax’ policy). The optimal policy they find when minimising tail thickness is of a similar order of magnitude to the one found when maximising consumption. As such, they argue, the Dismal Theorem does not destroy the applicability of IAMs to climate change.


2.6. Implementing Emissions Abatements

Most IAMs assume a benevolent central planner who can perfectly control the world’s emissions and does so to maximise global welfare in a utilitarian manner. In practice, of course, this is not how the world works. This is demonstrated by global policy makers, who failed to come to a binding agreement to limit the world’s greenhouse gasses in Copenhagen and look unlikely to do so in the near future.

In this regard, it is important to see the limitations of integrated assessment. IAMs should be seen as a baseline against which global policies can be compared. Evidence from regionalised IAMs (Tol 2002, Nordhaus 2010) suggests that climate change will impact different countries in different ways. Those who are most at risk are often not those who contribute the most to global emissions. This has led Tol (2002) to comment that climate change is “essentially a problem of distributional justice”. IAMs find the most efficient trajectory of emissions. They cannot find policies which are feasible for all parties at international negotiations, nor can they tell us which policies are philosophically just or unjust. Yet today these are the largest issues standing in the way of a global agreement on greenhouse emissions.

A different policy problem which IAMs cannot settle is deciding which policies are best for reducing greenhouse gas emissions within nation states. Most of the economic debate in this area centres on Weitzman’s (1974) theory of prices versus quantities. In the absence of uncertainty, it is equivalent to regulate pollution by setting either a price on emissions or a cap on their quantity. However, when policy makers are uncertain about firms’ costs of abatement, price regulation is preferred to quantity regulation if the slope of the marginal cost of abatement is greater than the slope of the marginal benefit. In the case of climate change the marginal benefits of abatement are relatively flat, because greenhouse gas is a stock pollutant and a marginal emission makes very little difference to the total stock. This suggests that prices (i.e. a carbon tax) are a better policy than quantities (a tradable rights scheme). This result has been confirmed using simulations of a modified DICE model (Pizer, 2003).

In practice this argument is relatively academic. In many countries (including New Zealand), cap and trade schemes have been implemented after a carbon tax proved politically untenable. Subsequent research has shown that the advantage of prices over quantities can be minimised by using a hybrid tradable rights scheme with a cap on prices (Weitzman, 1978), or by using a scheme in which emission rights are bankable over time (Fell et al., 2009).

In other countries, such as the United States, neither a carbon tax nor a tradable rights scheme is politically viable. As such, regulation and subsidies are the only tools remaining to policymakers. Performance standards and research and development subsidies are generally spurned by economists as inefficient ways to deal with such problems, because they assume that policy makers have perfect information about the costs of inventors and firms. Stern (2006), however, argues that such policies are vital in transitioning towards a low carbon economy because they can target specific areas to drive innovation. Stern argues this is important because of the path dependence of technological change, an issue that economic models have trouble quantifying.


3. Model

This section gives the details of the computational IAM. In this IAM, there is in every period a small probability of a catastrophe occurring, which causes a large reduction in output (GWP) in every period after the catastrophe occurs. The probability of the catastrophe occurring is an increasing function of greenhouse gas stocks. Once a catastrophe has occurred there is no further chance of another catastrophe occurring.

I will present the model in four steps, as this is the simplest way to show how the model works. It is also the order in which the model was developed, starting simple and adding complexity.

First, I start with the basic Ramsey economic growth model on which the IAM is based. Second, I add a climate sector in which production of output causes greenhouse gas emissions; these emissions raise global temperature, which causes economic damage, which in turn reduces future output. Third, I add an abatement industry in which policy makers choose to spend a fraction of output on abating emissions, reducing future global warming.

The first three parts of my model are based on the description of the DICE-2007 model given in A Question of Balance (Nordhaus, 2008), which I have followed closely (except in a few places where noted).

Finally, the novel part of the model is a random tipping point risk. There are many states of the world in each period with and without climate catastrophes. The transition probabilities between these states are governed by the concentration of atmospheric greenhouse gasses.

3.1. Ramsey Growth

We start with a basic Ramsey growth model. There is a single global decision maker who must choose the saving rate $s(t)$ in each period into the future to maximise the present value of a representative individual’s future utility. The saving rate $s(t)$ is the fraction of income that is invested in future capital rather than consumed in each period. Thus investment is $I(t) = s(t)Q(t)$.

The utility $u(c(t))$ of a representative member of society depends on individual consumption $c(t) = C(t)/L(t)$ in that period and is given by a Constant Relative Risk Aversion (CRRA) von Neumann-Morgenstern utility function

$$u(c(t)) = \frac{c(t)^{1-\alpha}}{1-\alpha} + 1 .$$

Output $Q(t)$ is given by a Cobb-Douglas production function which depends on total factor productivity (TFP) $A(t)$, the world population $L(t)$, and capital $K(t)$. Changes in capital between periods are governed by the capital accumulation equation $K(t) = I(t) + (1-\delta)K(t-1)$, where $\delta$ is an exogenous depreciation rate.

The full problem can be written as follows:

$$\max_{s(t)} \sum_{t} \frac{u(c(t))}{(1+\rho)^{t}}$$

$$\text{s.t.} \quad Q(t) = A(t)K(t)^{\gamma}L(t)^{1-\gamma} = C(t) + I(t)$$

$$K(t) = I(t) + (1-\delta)K(t-1)$$
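To make these dynamics concrete, here is a minimal Python sketch (illustrative only; the actual model is the AMPL program in Appendix I) that simulates the growth block for a fixed savings rate, with round placeholder parameter values rather than the calibrated ones:

```python
# Illustrative simulation of the Ramsey growth block with a fixed savings
# rate. Parameter values are round placeholders, not calibrated values.
GAMMA = 0.3    # capital share in the Cobb-Douglas production function
DELTA = 0.1    # per-period capital depreciation rate
RHO = 0.015    # social rate of time preference
ALPHA = 2.0    # coefficient of relative risk aversion

def utility(c):
    """CRRA utility u(c) = c^(1 - alpha) / (1 - alpha) + 1."""
    return c ** (1 - ALPHA) / (1 - ALPHA) + 1

def simulate(s, A, L, K0, periods):
    """Present value of utility for a constant savings rate s.

    A: total factor productivity, L: population (held constant here for
    simplicity), K0: initial capital stock.
    """
    K, pv_utility = K0, 0.0
    for t in range(periods):
        Q = A * K ** GAMMA * L ** (1 - GAMMA)   # gross output
        I = s * Q                               # investment
        C = Q - I                               # consumption
        pv_utility += utility(C / L) / (1 + RHO) ** t
        K = I + (1 - DELTA) * K                 # capital accumulation
    return pv_utility

print(simulate(s=0.22, A=3.0, L=6.437, K0=100.0, periods=60))
```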

The growth rates of population and productivity are exogenous. Consistent with Nordhaus, I assume that population grows asymptotically from the current population of 6.437 billion people to its maximum of 8.7 billion after 2700.

However, unlike Nordhaus, I assume that TFP grows at a constant exogenous rate of 1% per year. Nordhaus assumes that TFP growth declines over time. I did not adopt his assumption because I did not understand the reasoning behind it.

The social rate of time preference $\rho$ measures how much weight is put on periods in the future relative to periods today. The coefficient of risk aversion $\alpha$ measures the marginal willingness to trade off consumption between different states of the world and across time. These are two of the most ambiguous and sensitive parameters in the model. I follow Nordhaus (2007), who argues that these parameters should be revealed by market interest rates. Nordhaus uses the steady-state “Ramsey equation” $r^{*} = \rho + \alpha g^{*}$ to choose the parameters $\rho$ and $\alpha$ which give the best fit to the relationship between market interest rates $r^{*}$ and economic growth rates $g^{*}$. I use Nordhaus’ fitted values $\rho = 1.5\%$ and $\alpha = 2$.

Stern (2006), on the other hand, argues from a philosophical standpoint that it is immoral to value the welfare of future generations less than our own (which would imply zero time preference). He justifies a positive time preference of $\rho = 0.1\%$ only because of the risk of human extinction. The difference between these two discount rates implies that Nordhaus would value 100 dollars’ worth of damage in 100 years’ time at 23 dollars in present value, whereas Stern would value it at 90 dollars. Stern is willing to pay a lot more today to prevent damage in the future.
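These present values follow directly from discounting at each author’s rate:

$$\frac{100}{(1.015)^{100}} \approx 22.6 \approx 23, \qquad \frac{100}{(1.001)^{100}} \approx 90.5 \approx 90 .$$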

Stern chooses the coefficient of risk aversion $\alpha = 1$, corresponding to log utility $u(c(t)) = \log(c(t))$. He justifies this only by saying “it is essentially a value judgment” (Stern, 2006). The effect that the two risk aversion coefficients have on the model can be seen later in the dissertation in Figure 14. This assumption also implies that Stern is willing to pay more today than Nordhaus to prevent future damages. These two assumptions drive most of the differences in conclusions and policy recommendations between Stern (2006) and Nordhaus (2008).

3.2. Global Warming Module

The basic Ramsey model is extended by adding a climate sector. For every unit of output $Q(t)$ produced, some quantity of greenhouse gas emissions $E(t)$ is released into the atmosphere, where $E(t) = \sigma Q(t) + B$. The parameter $\sigma$ controls the emissions intensity of output and the parameter $B$ is the amount of external emissions which are independent of output.

In Nordhaus’ DICE-2007 model, the emissions intensity of output falls over time. I choose to keep the emissions intensity of output constant over time. Although this assumption is surely false, the degree to which the emissions intensity of output will fall is highly uncertain, and indeed probably endogenous to the question of optimal climate policy itself. I made this assumption because, by steeply reducing the emissions intensity of output to zero over the next hundred years, as Nordhaus’ model does, he assumes away the central problem of the model: how best to manage climate change in the distant future.

To model the carbon cycle, I use a simplified version of the general circulation models used by many climate scientists. This simplified model has three carbon stores: the atmosphere, the biosphere, and the oceanic store. The atmosphere is the store we are most concerned with. When greenhouse gas is emitted, it enters the atmospheric store. The atmospheric store is also what generates the ‘greenhouse effect’ which drives global warming.

The other two stores act as sinks, slowly drawing carbon away from the atmospheric store until all three stores are in equilibrium. The biosphere represents all of the carbon stored near the earth’s surface, either in plant matter, soil, or the top layer of the ocean. The oceanic store represents carbon trapped deep in the ocean, where the carbon cycle moves very slowly.

The greenhouse gas stocks in these stores are governed by the equations below; this is also shown in Figure 1. Writing $M_{A}$, $M_{B}$ and $M_{O}$ for the atmospheric, biospheric and oceanic stocks, the parameters $\phi_{i,j}$ control the flows between the carbon stores. Nordhaus calibrated these flow rates to results from the MAGICC 2007 Climate and Circulation Model, and I have adopted his calibration (Wigley, 2008 and Nordhaus, 2007).

$$M_{A}(t) = E(t) + \phi_{A,A}M_{A}(t-1) + \phi_{B,A}M_{B}(t-1)$$

$$M_{B}(t) = \phi_{A,B}M_{A}(t-1) + \phi_{B,B}M_{B}(t-1) + \phi_{O,B}M_{O}(t-1)$$

$$M_{O}(t) = \phi_{B,O}M_{B}(t-1) + \phi_{O,O}M_{O}(t-1)$$


Figure 1: How the carbon cycle is modelled
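As a sanity check, here is a minimal Python sketch of one step of this three-reservoir update. The flow coefficients below are placeholders chosen only so that each store’s outflows sum to one (mass conservation); they are not Nordhaus’ MAGICC-calibrated values.

```python
# One step of the three-reservoir carbon cycle. The phi coefficients are
# placeholders (each store's outflows sum to 1), not the calibrated values
# used in the dissertation.
PHI_AA, PHI_AB = 0.90, 0.10                 # atmosphere -> {atm, bio}
PHI_BA, PHI_BB, PHI_BO = 0.05, 0.90, 0.05   # biosphere -> {atm, bio, ocean}
PHI_OB, PHI_OO = 0.001, 0.999               # ocean -> {bio, ocean}

def carbon_step(m_a, m_b, m_o, emissions):
    """Advance the atmospheric, biospheric and oceanic carbon stocks one period."""
    new_a = emissions + PHI_AA * m_a + PHI_BA * m_b
    new_b = PHI_AB * m_a + PHI_BB * m_b + PHI_OB * m_o
    new_o = PHI_BO * m_b + PHI_OO * m_o
    return new_a, new_b, new_o

stocks = (800.0, 1500.0, 10000.0)  # illustrative stocks
for year in range(5):
    stocks = carbon_step(*stocks, emissions=10.0)
print(stocks)
```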

The atmospheric greenhouse gas stock absorbs thermal radiation released from the earth’s surface and reradiates it back down towards the earth. As a result, thermal radiation spends longer in the earth’s atmosphere before it escapes and transfers more of its energy to the earth than it otherwise would. Because of this, the earth’s temperature is warmer than it would otherwise be in absence of these gasses. This is why increasing the atmospheric stock of greenhouse gasses will lead to an increase in global average temperature.

To model this ‘greenhouse effect’, we say that the radiative forcing $F(t)$ of the earth’s atmosphere is an increasing function of the stock of atmospheric greenhouse gases $M_{A}(t)$. This radiative forcing then flows on to increase the global average atmospheric temperature $T(t)$:

$$F(t) = \eta \log_{2}\!\left(\frac{M_{A}(t)}{M_{PI}}\right) + F_{X}$$

$$T(t) = \tau_{1}T(t-1) + \tau_{2}F(t)$$

where $M_{PI}$ is the pre-industrial atmospheric stock and $F_{X}$ is exogenous forcing from other sources.

These relationships were also calibrated by Nordhaus from the MAGICC 2007 Climate and Circulation Model (Wigley, 2008 and Nordhaus, 2007). However, I have made some simplifications: in DICE there is an additional term in the temperature equation modelling the heat flow between the atmosphere and the ocean, which I omit.

An equilibrium temperature is reached when the stock of emissions reaches a steady state $M_{A}(t) = M_{A}(t-1)$. When this occurs, $F(t) = F(t-1)$ and so $T(t) = T(t-1)$. The new equilibrium temperature will be

$$T^{*} = \frac{\tau_{2}}{1-\tau_{1}}F^{*} .$$

It is then argued that the change in temperature will lead to reduced output. This could be because of some combination of sea level rise, reduced agricultural output, increased frequency and costs of disease or natural disasters, or one of many other things. For a full review of the economic impacts see the IPCC’s Fourth Assessment Report, “Impacts, Adaptation and Vulnerability” (IPCC, 2007b).


The damage function is a factor by which the production function is multiplied to adjust output for the damages of climate change. The damage function is a decreasing function of temperature. I use a function of the form

$$\Omega(t) = \frac{1}{1 + \kappa T(t)^{\varepsilon}} .$$

The choice of the damage function is a controversial aspect of integrated assessment modelling. I adopt Nordhaus’ convention and consider a quadratic damage function ($\varepsilon = 2$). Weitzman (2008) argues that $\varepsilon = 2$ does not have enough curvature to accurately describe the huge impact on human welfare that climate change would have for very large temperature increases (greater than 4°C).

Weitzman argues that the damage function should instead be a quartic ($\varepsilon = 4$) or an exponential function of temperature (for example, $\Omega(t) = e^{-\kappa T(t)^{2}}$). Pizer (2003) finds that DICE produces dramatically more conservative results when the quadratic damage function ($\varepsilon = 2$) is replaced with a quartic ($\varepsilon = 4$).
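To see how much the exponent matters, the following sketch compares the fraction of output lost, $1-\Omega(T)$, under the three functional forms; $\kappa$ here is an illustrative value chosen to give roughly DICE-like damages at low temperatures, not the calibrated parameter.

```python
# Compare output lost, 1 - Omega(T), under quadratic, quartic and
# exponential damage functions. KAPPA is illustrative, not calibrated.
from math import exp

KAPPA = 0.0028

def omega_poly(T, eps):
    """Damage factor Omega = 1 / (1 + kappa * T^eps)."""
    return 1.0 / (1.0 + KAPPA * T ** eps)

def omega_exp(T):
    """Exponential damage factor Omega = exp(-kappa * T^2)."""
    return exp(-KAPPA * T ** 2)

for T in [1, 2, 4, 6]:
    print(f"T = {T}: quadratic {1 - omega_poly(T, 2):.1%}, "
          f"quartic {1 - omega_poly(T, 4):.1%}, "
          f"exponential {1 - omega_exp(T):.1%}")
```

At low temperatures the three forms are nearly indistinguishable, but by 4-6°C the quartic implies damages an order of magnitude larger, which is what drives Pizer’s more conservative results.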

Combining all of these new elements, our new optimisation problem is given below:

$$\max_{s(t)} \sum_{t} \frac{u(c(t))}{(1+\rho)^{t}}$$

$$\text{s.t.} \quad Q(t) = \Omega(t)A(t)K(t)^{\gamma}L(t)^{1-\gamma} = C(t) + I(t)$$

$$K(t) = I(t) + (1-\delta)K(t-1)$$

$$E(t) = \sigma Q(t) + B$$

$$M_{A}(t) = E(t) + \phi_{A,A}M_{A}(t-1) + \phi_{B,A}M_{B}(t-1)$$

$$M_{B}(t) = \phi_{A,B}M_{A}(t-1) + \phi_{B,B}M_{B}(t-1) + \phi_{O,B}M_{O}(t-1)$$

$$M_{O}(t) = \phi_{B,O}M_{B}(t-1) + \phi_{O,O}M_{O}(t-1)$$

$$F(t) = \eta \log_{2}\!\left(\frac{M_{A}(t)}{M_{PI}}\right) + F_{X}$$

$$T(t) = \tau_{1}T(t-1) + \tau_{2}F(t)$$

$$\Omega(t) = \frac{1}{1 + \kappa T(t)^{\varepsilon}}$$

3.3. Greenhouse Gas Abatements

Next we allow the central planner to take action to reduce the damage of climate change by abating greenhouse gas emissions. The planner now maximises the present value of future utility by choosing not only the fraction of output to save, $s(t)$, but also the fraction of emissions to abate, $\mu(t)$. Damage-adjusted output is now split between three things: consumption, investment and abatement spending $\Lambda(t)$:

$$Q(t) = C(t) + I(t) + \Lambda(t) .$$


Spending an amount $\Lambda(t)$ on greenhouse gas abatements allows a certain fraction $\mu(t)$ of greenhouse gas emissions to be abated, given by the relationship $\Lambda(t) = \psi(t)\mu(t)^{\theta}Q(t)$. The amount of greenhouse gas emissions after abatements is given by $E(t) = \sigma(1-\mu(t))Q(t) + B$.
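For intuition, here is a small sketch of how abatement spending and residual emissions respond to the control rate $\mu$; the values of $\psi$, $\theta$, $\sigma$ and $B$ are hypothetical placeholders, since the calibrated ones live in Appendix III.

```python
# Abatement cost share and residual emissions as functions of the control
# rate mu. All parameter values are hypothetical placeholders.
PSI, THETA = 0.05, 2.8   # abatement cost coefficient and exponent
SIGMA, B = 0.13, 1.0     # emissions intensity and external emissions

def abatement_cost_share(mu):
    """Fraction of output spent on abatement: Lambda/Q = psi * mu^theta."""
    return PSI * mu ** THETA

def emissions(mu, Q):
    """Residual emissions E = sigma * (1 - mu) * Q + B."""
    return SIGMA * (1 - mu) * Q + B

for mu in [0.0, 0.25, 0.5, 1.0]:
    print(mu, abatement_cost_share(mu), emissions(mu, Q=60.0))
```

With $\theta > 1$ the cost share rises steeply in $\mu$: small abatements are cheap, while abating the final tonnes is expensive.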

The parameters $\psi(t)$ and $\theta$ control the cost of abating emissions. The selection of these parameters is also controversial: assumptions about the cost of abating greenhouse gasses far into the future must necessarily make large assumptions about the level of technology. In DICE, the coefficient $\psi(t)$ falls over time at a rate endogenously determined by returns to capital. To keep my model simple, I initially assumed constant abatement costs; however, this did not yield satisfactory results. I changed the model so that abatement costs $\psi(t)$ fall by 1% per year.

So we can now restate the complete optimisation model, in which the agent chooses two things in each period, savings and abatements:

$$\max_{s(t),\mu(t)} \sum_{t} \frac{u(c(t))}{(1+\rho)^{t}}$$

$$\text{s.t.} \quad Q(t) = \Omega(t)A(t)K(t)^{\gamma}L(t)^{1-\gamma}$$

$$Q(t) = C(t) + I(t) + \Lambda(t)$$

$$K(t) = I(t) + (1-\delta)K(t-1)$$

$$E(t) = \sigma(1-\mu(t))Q(t) + B$$

$$M_{A}(t) = E(t) + \phi_{A,A}M_{A}(t-1) + \phi_{B,A}M_{B}(t-1)$$

$$M_{B}(t) = \phi_{A,B}M_{A}(t-1) + \phi_{B,B}M_{B}(t-1) + \phi_{O,B}M_{O}(t-1)$$

$$M_{O}(t) = \phi_{B,O}M_{B}(t-1) + \phi_{O,O}M_{O}(t-1)$$

$$F(t) = \eta \log_{2}\!\left(\frac{M_{A}(t)}{M_{PI}}\right) + F_{X}$$

$$T(t) = \tau_{1}T(t-1) + \tau_{2}F(t)$$

$$\Omega(t) = \frac{1}{1 + \kappa T(t)^{\varepsilon}}$$

$$\Lambda(t) = \psi(t)\mu(t)^{\theta}Q(t)$$

3.4. Stochastic Catastrophe Risk

Now we introduce a stochastic risk of climate catastrophe. There are multiple states of the world (denoted by the index $d$) in every period of time $t$. Specifically, in period $t$ there is one state of the world in which no catastrophe has yet occurred, denoted by the index $d = 0$. In addition, there are $t-1$ more states of the world, $d = 1, \ldots, (t-1)$, in which a catastrophe occurred in period $d$.

All of the variables presented in the last section now carry two indices ($t$ and $d$) to keep track of which state they are in.


In every period of time $t$ in which no catastrophe has occurred, we denote the state $(t, 0)$. In this state there is a small probability of a catastrophe occurring. If a catastrophe occurs, the state in the next period will be $(t+1, t)$; if a catastrophe does not occur, the state in the next period will be $(t+1, 0)$. A catastrophe occurs only once: if a catastrophe has already occurred, there is no further risk of another catastrophe in the future. So from a state $(t, d)$ in which a catastrophe has already occurred ($d > 0$), the state in the next period will be $(t+1, d)$.
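A short Python sketch of this state space (illustrative bookkeeping only, not the AMPL implementation):

```python
# Enumerate the states (t, d) and their successor states. d = 0 means no
# catastrophe yet; d > 0 records the period in which the catastrophe hit.
def states(t):
    """All states of the world in period t."""
    return [(t, d) for d in range(t)]  # d = 0, 1, ..., t-1

def successors(t, d):
    """Possible next-period states from state (t, d)."""
    if d == 0:
        # Either no catastrophe (d stays 0) or one hits now (d becomes t).
        return [(t + 1, 0), (t + 1, t)]
    return [(t + 1, d)]  # catastrophe already happened; path is fixed

for state in states(3):
    print(state, "->", successors(*state))
```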

This is not necessarily a realistic assumption. Kriegler et al. (2009) point out that there are multiple tipping elements and that these are not necessarily independent: if one tipping element collapses, it may make others more likely to collapse in the future. I have assumed a one-off tipping risk because it makes the model simpler to compute and solve. I have attempted to keep the mechanics in this section as simple as possible rather than trying to accurately model what might occur in a catastrophe. Figure 2 shows how the set of possible states of the world grows over time; the red arrows indicate the occurrence of a catastrophe.

Figure 2: The growth of the set of states over time

In this model, the impact of a catastrophe is to permanently reduce all future output by some factor $\Delta$. This is modelled by changing the damage function as below:

$$\Omega(t,d) = \begin{cases} \dfrac{1}{1 + \kappa T(t,d)^{\varepsilon}} & \text{if } d = 0 \\[2ex] \dfrac{1-\Delta}{1 + \kappa T(t,d)^{\varepsilon}} & \text{if } d > 0 \end{cases}$$

This is not necessarily a realistic assumption. One would not expect the damages to be distributed evenly over time. One might also expect a time lag between when a catastrophe becomes inevitable and when we start observing the impacts, especially with such a large, slow-moving system as the earth’s climate. However, as previously noted, recent research into rapid, flickering climate changes makes this assumption seem not completely implausible (Hall and Behl, 2006).


I choose the magnitude of the catastrophe to be $\Delta = 25\%$ of GWP. I based this figure on work in Nordhaus (2000) estimating the willingness to pay to reduce catastrophic risk, which Nordhaus obtains from a survey of experts. The size of this catastrophe is larger than others in the literature: Stern (2006) estimates a 10% chance of a 5-25% loss of GDP; Keller et al. use a figure around 10%; Guillerminet and Tol (2008) and Baranzini et al. (2003) use figures closer to 2%. Clearly there is little agreement in the literature about the appropriate size of a catastrophe, and further research needs to be done in this area.

The probability of transitioning from the no-catastrophe state into a catastrophe state is endogenously determined as a function of temperature, and hence of the atmospheric greenhouse gas stock. Thus, if no catastrophe has occurred yet, there are two incentives to abate emissions: the first is to reduce future climate damage, and the second is to reduce the probability of a future climate catastrophe. Because there are no scientific models describing how such a climate catastrophe might occur, it is difficult to say exactly what the function controlling the probability of a catastrophe should be. I use the following exponential function to describe the relationship between the catastrophe probability $\varphi(t)$ and temperature $T(t,0)$:

\[
\varphi(t) = \gamma_1 e^{\gamma_2 T(t,0)} + \gamma_3
\]

I chose an exponential function because it captures the intuition that the likelihood of catastrophe should increase very rapidly with temperature. To assess the parameters of this function I turn back to the expert survey of scientific opinion presented in Kriegler et al. (2009). In this survey they ask experts their opinions about the probability of various catastrophes occurring. Compiling their data, they estimate a 'conservative lower bound' of a 16% probability of at least one catastrophe occurring before 2200 under a medium temperature corridor, and a 56% probability under a high temperature corridor.

Figure 3: Medium and high temperature corridors from Kriegler et al. (2009)

I fit the parameters γ1, γ2, γ3 that give a cumulative catastrophe probability closest to the figures mentioned above, assuming temperatures follow the middle of the corridors shown in Figure 3. The best-fitting parameters are γ1 = 0.00081, γ2 = 0.54, γ3 = −0.00034. The spreadsheet with these results is attached in Appendix IV.
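The same fit can also be scripted rather than done in a spreadsheet. The sketch below, in Python, uses illustrative straight-line temperature corridors standing in for the Kriegler et al. (2009) corridors; the paths, and hence the fitted values, are stand-ins rather than the ones used in Appendix IV:

# A minimal sketch of the parameter fit, assuming illustrative temperature
# corridors; the dissertation's actual fit used Excel's Solver (Appendix IV).
import numpy as np
from scipy.optimize import minimize

years = np.arange(2010, 2205, 5)            # one value per 5-year period
T_med = np.linspace(0.0, 3.0, len(years))   # stand-in medium corridor (deg C)
T_high = np.linspace(0.0, 6.0, len(years))  # stand-in high corridor (deg C)

def cumulative_prob(gamma, temps):
    g1, g2, g3 = gamma
    phi = np.clip(g1 * np.exp(g2 * temps) + g3, 0.0, 1.0)  # per-period probability
    return 1.0 - np.prod(1.0 - phi)         # P(at least one catastrophe by 2200)

def residual(gamma):
    return ((cumulative_prob(gamma, T_med) - 0.16) ** 2
            + (cumulative_prob(gamma, T_high) - 0.56) ** 2)

fit = minimize(residual, x0=[1e-3, 0.5, -1e-4], method="Nelder-Mead")
print(fit.x)                                # fitted (gamma1, gamma2, gamma3)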


These assumptions mean that the climate is stochastic in nature, rather than there just being a fixed, unknown tipping point. Even if temperature is stagnant or falling, there is still a positive probability of a catastrophe occurring if one has not occurred yet.

Given the transition probabilities φ(i), we can calculate the probability Φ(t, d) of being in state d at a given time t. This is done recursively as the product of the transition probabilities:

\[
\Phi(t,d) = \begin{cases} \displaystyle\prod_{i=1}^{t-1}\bigl(1-\varphi(i)\bigr) & \text{if } d = 0 \\[1ex] \varphi(d)\displaystyle\prod_{i=1}^{d-1}\bigl(1-\varphi(i)\bigr) & \text{if } d > 0 \end{cases}
\]

Each state probability Φ(t, d) is multiplied by the corresponding utility in the objective function. This gives the expected utility in every period, and the model maximises the present value of expected future utility.
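The recursion is cheap to compute outside the optimiser. A small Python sketch (the dictionary representation of states is illustrative):

# Computes Phi(t, d) from per-period catastrophe probabilities phi(1..T-1)
# evaluated along the no-catastrophe branch; phi is 1-based, phi[0] unused.
def state_probabilities(phi):
    T = len(phi) - 1
    Phi = {(1, 0): 1.0}
    for t in range(2, T + 1):
        Phi[(t, 0)] = Phi[(t - 1, 0)] * (1 - phi[t - 1])  # still no catastrophe
        Phi[(t, t - 1)] = Phi[(t - 1, 0)] * phi[t - 1]    # new catastrophe branch
        for d in range(1, t - 1):
            Phi[(t, d)] = Phi[(t - 1, d)]                 # catastrophe already occurred
    return Phi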

The full model can now be written out. The code that implements it can be found in Appendix I.

\[
\max \ \sum_{t}\sum_{d} U\bigl(c(t,d)\bigr)\,(1+\rho)^{-(t-1)}\,\Phi(t,d)
\]

subject to:

\[
Y(t,d) = A(t)\,K(t,d)^{\alpha}\,L(t)^{1-\alpha}
\]

\[
\Omega(t,d)\bigl(1-\Lambda(t,d)\bigr)\,Y(t,d) = C(t,d) + I(t,d)
\]

\[
K(t,d) = \begin{cases} I(t,d) + (1-\delta_K)\,K(t-1,d) & \text{if } d \neq t-1 \\ I(t,d) + (1-\delta_K)\,K(t-1,0) & \text{if } d = t-1 \end{cases}
\]

\[
E(t,d) = \sigma\bigl(1-\mu(t,d)\bigr)\,Y(t,d) + B
\]

\[
M_{AT}(t,d) = \begin{cases} E(t,d) + b_{1,1}M_{AT}(t-1,d) + b_{2,1}M_{B}(t-1,d) & \text{if } d \neq t-1 \\ E(t,d) + b_{1,1}M_{AT}(t-1,0) + b_{2,1}M_{B}(t-1,0) & \text{if } d = t-1 \end{cases}
\]

\[
M_{B}(t,d) = \begin{cases} b_{1,2}M_{AT}(t-1,d) + b_{2,2}M_{B}(t-1,d) + b_{3,2}M_{O}(t-1,d) & \text{if } d \neq t-1 \\ b_{1,2}M_{AT}(t-1,0) + b_{2,2}M_{B}(t-1,0) + b_{3,2}M_{O}(t-1,0) & \text{if } d = t-1 \end{cases}
\]

\[
M_{O}(t,d) = \begin{cases} b_{2,3}M_{B}(t-1,d) + b_{3,3}M_{O}(t-1,d) & \text{if } d \neq t-1 \\ b_{2,3}M_{B}(t-1,0) + b_{3,3}M_{O}(t-1,0) & \text{if } d = t-1 \end{cases}
\]

\[
F(t,d) = \eta_F \log_2\!\left(\frac{M_{AT}(t,d)}{M_{AT}(1750)}\right)
\]

\[
T(t,d) = \begin{cases} \xi_2\,T(t-1,d) + \xi_1\,F(t,d) & \text{if } d \neq t-1 \\ \xi_2\,T(t-1,0) + \xi_1\,F(t,d) & \text{if } d = t-1 \end{cases}
\]

\[
\Omega(t,d) = \begin{cases} \dfrac{1}{1+D(t,d)} & \text{if } d = 0 \\[1ex] \dfrac{1-\omega}{1+D(t,d)} & \text{if } d > 0 \end{cases}
\qquad \text{where } D(t,d) = \kappa_1\,T(t,d)^{\kappa_2}
\]

\[
\Lambda(t,d) = \theta_1(t)\,\mu(t,d)^{\theta_2}
\]

\[
\varphi(t) = \gamma_1 e^{\gamma_2 T(t,0)} + \gamma_3
\]

\[
\Phi(t,d) = \begin{cases} \displaystyle\prod_{i=1}^{t-1}\bigl(1-\varphi(i)\bigr) & \text{if } d = 0 \\[1ex] \varphi(d)\displaystyle\prod_{i=1}^{d-1}\bigl(1-\varphi(i)\bigr) & \text{if } d > 0 \end{cases}
\]

2.5. Parameterisation, Coding and Implementation

The parameters in my model are largely the same as in Nordhaus' DICE-2007; a table of these parameters can be found in Appendix III. All the parameters that are important or that differ from Nordhaus' have been discussed in the model section. The remaining parameters are the same as Nordhaus', and their justification can be found in the technical notes on his website (Nordhaus, 2007).

The model was coded in AMPL, a language for mathematical optimisation models, and solved using Conopt, a commercial solver for non-linear programs. The data file containing the parameters is attached in Appendix II. Because I did not have a full licence for Conopt, I solved my models in two ways: first using a free trial of the software on the Energy Centre computer server, and second on the online NEOS Optimization Server. NEOS lets users submit problems to its server via its website, solves the problem, and then emails the results back to the user. The benefit of using NEOS is that it provides free public access to a large range of commercial optimisation packages (such as Conopt).
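NEOS submissions can also be scripted through its XML-RPC interface rather than the website. The sketch below is an assumption-laden illustration: the endpoint, method names and AMPL job template follow NEOS's published Python client as I understand it, and were not used for this dissertation.

# A hedged sketch of submitting an AMPL job to NEOS over XML-RPC.
import time
import xmlrpc.client

neos = xmlrpc.client.ServerProxy("https://neos-server.org:3333")

job = """<document>
<category>nco</category>
<solver>CONOPT</solver>
<inputMethod>AMPL</inputMethod>
<model><![CDATA[ ...contents of the model file (Appendix I)... ]]></model>
<data><![CDATA[ ...contents of the data file (Appendix II)... ]]></data>
<commands><![CDATA[ solve; display abate; ]]></commands>
</document>"""

job_number, password = neos.submitJob(job)
status = neos.getJobStatus(job_number, password)
while status in ("Waiting", "Running"):      # poll until the solver finishes
    time.sleep(10)
    status = neos.getJobStatus(job_number, password)
print(neos.getFinalResults(job_number, password).data.decode())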

The first step in developing my model was to write a standard IAM as described in the previous section. To validate the model, I wrote an identical version of the problem in Microsoft Excel using the Solver add-in and confirmed that it generated identical results. I could not use Excel for my final model because Solver has a limit of 100 variables and constraints.

When I first wrote my model, I did so with a principle of parsimony: I added as few equations as possible and tried to keep the parameters simple and transparent. As such I made many highly unrealistic assumptions (for example a single carbon store, constant population and productivity growth, and constant abatement costs). I found that this model was unable to produce realistic results.

After this I repeatedly added equations and dynamic parameters to my model, trying to match my results with those of standard IAMs. Eventually this led me to converge on Nordhaus' DICE model. Unfortunately, I did not exactly replicate his model, so our results are not directly comparable. If I were to start this work over again I would begin with Nordhaus' DICE-2007 model rather than building my own from scratch; this way my results would be directly comparable to his (and I would have saved a lot of time).

A major limitation of the model's stochastic extension is that the number of states (and hence constraints) in the problem increases quadratically with the number of periods (roughly T²/2 states for T periods), so the problem becomes big very quickly: solving the model with 70 time periods requires about 2,450 states. This limits the size of problem that can be solved within a reasonable period of time. For all of my models I use 70 periods, each of 5 years duration, simulating the economy and climate over the next 350 years; this takes approximately 5 minutes to solve on NEOS. To evaluate the impact of my policy I use only the data from the first 50 periods, until the year 2250, to avoid issues surrounding the finite truncation of an infinite-horizon model.

The model is large and nonlinear. As a result, numerical algorithms often have trouble finding optimal solutions, and they cannot guarantee that a solution is a global rather than a local optimum. To check that my solver was finding the right solution I ran my simulation from 50 randomly generated initial guesses. The simulations always converged to the same objective value, so I am confident that the model can accurately optimise the level of abatements in the absence of a catastrophe.
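The multi-start check is generic and easy to reproduce. A minimal sketch, assuming an arbitrary objective f and box bounds (the thesis model itself was solved in AMPL/Conopt, not SciPy):

# Multi-start local optimisation: re-solve from random initial guesses and
# keep the best local optimum found; mirrors the robustness check above.
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)             # random feasible starting point
        res = minimize(f, x0, bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best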

There were some numerical problems with variables in states that are only reached with very low probability. Because these states are reached so rarely, the objective function is insensitive to the variables' values, which makes the problem ill-conditioned. As a result some of my variables (especially savings rates after a catastrophe) are not accurately estimated in the final results. One way I mitigated this problem was by running the simulation once, pulling out the optimal savings and abatement rates in the affected branches, fixing them, and rerunning the simulation.

This problem could have been avoided by breaking each branch into a subproblem, solving the subproblems separately, and passing their results back and forth to the main problem at every iteration. This is a standard technique in stochastic programming (Ruszczynski and Shapiro, 2003). However, it would have involved considerably more coding and so I did not attempt it.


3. Results

In solving this model I want to examine several things: first, whether the results obtained when catastrophe risk is included in the model differ substantially from those obtained when it is excluded; second, how sensitive the results are to the assumed parameters relating to catastrophic risk; third, how the optimal policy changes in response to the occurrence of a catastrophe; and finally, how the results of my model compare to those of Nordhaus' DICE-2007 model.

3.1. Optimal policy with and without catastrophe risk

To examine the impact of risk on the optimal climate policy, I simulate the optimal policy twice. The first time I use all of the parameters as given in the previous section, so there is a small risk of catastrophe in every period; this is what I call the 'catastrophe-adjusted policy'. The second time I run the same model with the parameters controlling the probability φ and the impact ω of a catastrophe set to zero; this is what I call the 'no catastrophe policy'.

To compare the results of these policies, I look at what happens along the sequence of states where the catastrophe never eventuates (d = 0). This is fairer than comparing expected values, since the 'catastrophe-adjusted' scenario will have a much higher expected cost. (An even better comparison would be to simulate the outcome of the 'no catastrophe policy' in a model which does include the risk of catastrophes and compare expected costs.)

[Figure: line chart "Greenhouse gas emissions under optimal policy (contingent on catastrophe not occurring)"; metric gigatonnes of CO2-equivalent per year, 2050-2250; series: catastrophe risk, no catastrophe risk]

Figure 4: Optimal emissions of greenhouse gases with and without catastrophe risk

Firstly, consider the optimal trajectory of greenhouse gas emissions. Figure 4 shows that when the risk of a catastrophe is incorporated into the IAM, the resulting policy prescribes a much more stringent schedule of greenhouse gas emissions. Under the 'catastrophe-adjusted policy', total emissions over the entire simulation are 45% lower than under the 'no catastrophe policy' (208 Gt CO2e rather than 377 Gt CO2e). Under the 'catastrophe-adjusted policy', annual emissions reach their peak 35 years earlier (2115 vs. 2150) and are completely halted 35 years earlier (2160 vs. 2195).

Note that although the two policies prescribe dramatically different emissions reductions in the distant future, until the year 2050 they are very similar. This suggests that it may not be necessary to incorporate these catastrophic risks into short term policies aimed at abating greenhouse emissions today, and that there is a window of opportunity to learn more about the risks posed by catastrophes before we start incorporating them into policies.

[Figure: line chart "Atmospheric stock of greenhouse gasses"; concentration of greenhouse gases in parts per million, 2050-2250; series: catastrophe risk, no catastrophe risk]

Figure 5: Optimal atmospheric stock of greenhouse gases with and without catastrophe risk

Figure 5 shows that under the 'catastrophe-adjusted policy' the atmospheric CO2 stock peaks roughly 25% lower (at 648 ppm rather than 862 ppm) and 35 years earlier (in 2145 rather than 2180) than under the 'no catastrophe policy'. This results in a significantly smaller degree of global warming.

[Figure: line chart "Temperature increase under optimal abatement (contingent on catastrophe not occurring)"; temperature increase above preindustrial levels (°C), 2050-2350; series: catastrophe risk, no catastrophe risk]

Figure 6: Temperature increase under optimal abatement policy

Figure 6 shows that the peak increase in global mean temperature is 0.9°C lower (2.8°C above preindustrial temperatures rather than 3.8°C) and occurs 35 years earlier (in 2260 rather than 2295) under the 'catastrophe-adjusted' policy than under the 'no catastrophe' policy.

[Figure: line chart "Cost of greenhouse gas abatements as percent of GWP"; % of GWP, 2050-2250; series: catastrophe risk, no catastrophe risk]

Figure 7: Cost of greenhouse gas abatements as a percent of GWP

Spending on emissions reductions as a percentage of GWP is shown in Figure 7. In the years before 2195 (after which both policies abate all emissions), the 'catastrophe-adjusted policy' spends 165% more on abatement than the 'no catastrophe policy' (on average 0.8% of GWP per year compared with 0.3% of GWP per year).

[Figure: line chart "Climate induced economic damage contingent on no catastrophe"; economic damage as % of GWP, 2050-2350; series: catastrophe risk, no catastrophe risk]

Figure 8: Climate-induced economic damage with and without catastrophe risk

As a result, non-catastrophic climate-related damages are much lower under the 'catastrophe-adjusted policy', peaking at 2.2% of GWP (in 2250), than under the 'no catastrophe policy', peaking at 3.9% of GWP (in 2300). This is shown in Figure 8.

Between the two scenarios, the difference between consumption and output (after adjusting for climate damages) is very small; for this reason I have not included figures comparing consumption and output paths. Between the years 2010-2180, output and consumption are lower in the scenario with catastrophic risk than in the one without. This difference peaks at around 1% of output and consumption in 2160, because in the years preceding 2160 the 'catastrophe-adjusted policy' spends more on abatements, reducing consumption. During this time the difference in climate damage between the two scenarios is small, and so the difference in output between the two scenarios is small.

After 2180, consumption and output are higher in the 'catastrophe-adjusted' scenario, because climate-related damages are considerably lower there. There is no difference in abatement costs after 2195, when both policies abate all emissions. In the long run the 'catastrophe-adjusted' policy delivers higher consumption than the 'no catastrophe' policy.


The savings rate converges from an initial value of 21% to a steady state of 15% (this can be seen in Figure 17 in the next section). This is a concern, because models parameterised similarly to mine have long-run steady-state savings rates of around 22%, and it could be indicative of a problem with my model. The difference in savings rates between the two scenarios is negligible (they never differ by more than 0.2 percentage points). The accumulation of capital in the two scenarios therefore follows a similar pattern to output; again, the difference in capital between the two policies is small (<1%).

So far this analysis has only considered what happens if a catastrophe does not eventuate. Figure 9 below shows the cumulative probability of a catastrophe occurring in the scenario where catastrophic risk is included. The probability of a catastrophe occurring before 2200 along a trajectory with 2.5°C of warming is 12%. Recall that this is consistent with the probability function fitted previously, under which a temperature corridor with 3°C of warming by 2200 gives a catastrophe probability of 16%.

[Figure: line chart "Cumulative probability of catastrophe under optimal policy"; probability, 2050-2250]

Figure 9: Cumulative probability of a catastrophe

We can also calculate the probability of a catastrophe occurring if the optimal 'no catastrophe' policy were followed in a world which actually contained catastrophic risk. In this situation the cumulative probability of a catastrophe occurring is 20%. By incorporating stochastic risk into our policy model, the risk of a catastrophe occurring is therefore reduced from 20% to 12%; equivalently, ignoring the risk leaves the probability of a catastrophe about 65% higher than under the 'catastrophe-adjusted' policy.

3.2. Sensitivity of Results to catastrophe parameters

As previously discussed, the results of IAMs are highly sensitive to the selection of the model's parameters. Other authors, such as Nordhaus (2007) or Ackerman and Finlayson (2006), have conducted thorough sensitivity analyses of IAMs. They conclude that IAMs are most sensitive to the following parameters: discount rates, risk aversion parameters, damage function parameters, and the sensitivity of temperature to the CO2 stock.

However, for the purposes of this dissertation I want to focus only on the four parameters most relevant to my model: those controlling the risk and impact of catastrophes.

[Figure: line chart of emissions (Gt CO2e per year), 2050-2250; series: 75%, 50%, 25% and 12.5% GWP shock, and no shock]

Figure 10: Sensitivity of the optimal emissions schedule to the magnitude of the catastrophic shock

Firstly, let’s consider how sensitive the results are to the consequences of a catastrophe occurring. Figure 10 shows the results of varying the magnitude of the catastrophic shock on the optimal trajectory of emissions. In my model I assume that a catastrophe wipes off 25% of GDP (the red line). Varying this parameter between 0% and 75% of GWP the results are very different policy outcomes. Even if the size of the catastrophe is assumed to only be the 12.5% of GDP the optimal policy is substantially different to that we get if there is no catastrophe risk. From these results, I can infer that 1% increase in the size of the catastrophic shock corresponds roughly to a 14Gt reduction in the total emissions of greenhouse gasses.

Next, consider the sensitivity of the results to the probability of a catastrophe occurring. The equation controlling this probability, φ(t) = γ1·exp(γ2·T(t,0)) + γ3, has three parameters; here I look at the sensitivity of the first two, γ1 and γ2. The first factor, γ1, scales the probability of a catastrophe up by the same proportion at all temperatures, whereas the second, γ2, increases the curvature of the probability function, producing relatively higher catastrophe probabilities at higher temperatures. Figures 11 and 12 show the results of changing these parameters.

[Figure: line chart of emissions (Gt CO2e per year), 2050-2250; series: γ1 = 0.0002, 0.0004, 0.0008, 0.0016, 0.0032]

Figure 11: Sensitivity of emissions to γ1

[Figure: line chart of emissions (Gt CO2e per year), 2050-2250; series: γ2 = 0.27, 0.41, 0.54, 0.68, 0.81]

Figure 12: Sensitivity of emissions to γ2

These graphs suggest that although the optimal policy in the distant future is highly sensitive to the parameters controlling the risk, the optimal policy in the near future (say the next 50 years) is not. This is important because it suggests that precise knowledge of the risks associated with catastrophes is not crucial for setting policy in the near future, giving us a window of time in which to research and learn about the risks in order to better set optimal policy later.

Finally, consider the sensitivity of the results to the risk aversion parameter η in the utility function

\[
U\bigl(c(t)\bigr) = \frac{c(t)^{1-\eta}}{1-\eta} + 1.
\]

This parameter controls the premium that individuals are willing to pay to avoid differences in their consumption both across time and across different states of the world (with and without catastrophes). Note that η = 1 corresponds (in the limit) to the case of natural log utility, U(c(t)) = ln c(t).
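For reference, this utility function can be written as a two-line function; the η = 1 branch is the logarithmic limit, and the '+1' constant matches the model file in Appendix I:

# CRRA utility with the log-utility limiting case at eta = 1.
import math

def utility(c, eta=2.0):
    if math.isclose(eta, 1.0):
        return math.log(c) + 1.0             # limiting case: ln(c) (plus constant)
    return c ** (1.0 - eta) / (1.0 - eta) + 1.0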

Figure 13 shows that the optimal policy is highly sensitive to the degree of risk aversion. As previously discussed, this partially explains the differences in results between Stern (2006) and Nordhaus (2008): Stern assumes a risk aversion coefficient of η = 1, whereas Nordhaus assumes η = 2.

[Figure: line chart of emissions (Gt CO2e per year), 2050-2250; series: risk aversion coefficient = 0.5, 1.0, 1.5, 2.0, 2.5]

Figure 13: Sensitivity of optimal emissions under catastrophe risk to risk aversion

One might expect the results to be more sensitive to the risk aversion parameter in a scenario with catastrophe risk than in one without. Figure 14 compares the optimal emissions paths under low (η = 1) and high (η = 2) risk aversion in the models with and without catastrophe risk. Changing from low to high risk aversion causes total emissions to fall by 63% under the 'no catastrophe risk' policy and by 65% under the 'catastrophe-adjusted' policy. It does seem that the results are more sensitive to risk aversion when risky catastrophes are incorporated into the model, but only to a small extent.


[Figure: line chart of emissions (Gt CO2e per year), 2050-2250; series: low and high risk aversion, each with and without catastrophe risk]

Figure 14: The impact of risk aversion on simulations with and without catastrophes

3.3. Optimal Policy Recovery after a catastrophe

Every time I run the simulation it produces not one but multiple paths of emissions: one path for every contingency in which a catastrophe occurs in a different period (this is best illustrated by Figure 2 in Section 2.4). So far we have only examined the impacts along the set of states in which the catastrophe never occurs. It is also informative to observe the results down some of the other branches, that is, to observe how the optimal policy changes to recover from a catastrophe.

Due to numerical problems with the solution of the model, the results in this section should be examined with caution. Many of the states occur only with very small probability; the objective function is highly insensitive to variables in these states, so the optimisation software struggled to find their optimal values. This can be seen in the jagged, oscillating curves in this section's plots, rather than the smooth ones expected from an optimal solution. However, these results are still informative: a general trend can be inferred from the imprecise estimates, and we learn which variables are unimportant, because the optimal solution is not sensitive to their values.

In the results that follow I look specifically at four of these branches, corresponding to states of the world where one-off catastrophes occur in the years 2050, 2100, 2150 and 2200.

Figure 15 shows the direct impact of a catastrophe: climatic damages immediately jump by 25% of GWP.


[Figure: line chart "Climate damage after catastrophe"; damage as % of GWP, 2050-2250; series: catastrophe in 2050, 2100, 2150, 2200; no catastrophe]

Figure 15: A catastrophe causes the damage function to jump by 25%

This huge jump in damages has a flow-on effect on damage-adjusted output (Figure 16). After a catastrophe occurs, output converges to a new steady-state growth path.

[Figure: line chart "Output after catastrophe"; gross world product (trillion USD 2000), 2050-2250; series: catastrophe in 2050, 2100, 2150, 2200; no catastrophe]

Figure 16: The impact of a catastrophe on the growth of output

Figure 17 shows the effect of this fall in output on the savings rate. When there is an output shock, the optimal response is to reduce savings in order to smooth consumption in the periods immediately following the shock. As a result, the savings rate is pushed away from its previous path and converges back towards the steady state on a new trajectory. This was the least numerically stable result I obtained: the objective function is highly insensitive to savings in later periods when a catastrophe occurs early on, with low probability.


[Figure: line chart "Savings rate after catastrophe"; savings as % of GWP, 2050-2250; series: catastrophe in 2050, 2100, 2150, 2200; no catastrophe]

Figure 17: The impact of a catastrophe on the savings rate

Finally, consider the impact of a catastrophe on the optimal emissions trajectory. As discussed earlier, in my model the incentives to abate greenhouse gas emissions are twofold: abatement avoids the climatic damage associated with raised temperatures, and it reduces the risk of a catastrophe imposing even greater damages. After a catastrophe has occurred there is no further risk of another catastrophe (not necessarily realistic), so the second incentive to reduce emissions disappears in the aftermath of a catastrophe.

Further, because emissions quantities and abatement costs are based on pre-damage output, the emissions intensity of output and the relative cost of abatements have now increased (also not necessarily realistic). The combined effect of these two factors is much larger emissions of greenhouse gases after a catastrophe has occurred, and hence a much higher peak stock of atmospheric greenhouse gases and more global warming. This is shown in Figures 18 and 19. The results here are not very smooth, so the solution is not very stable.


[Figure: line chart "Greenhouse gas emissions after catastrophe"; Gt CO2e per year, 2050-2250; series: catastrophe in 2050, 2100, 2150, 2200; no catastrophe]

Figure 18: The impact of a catastrophe on greenhouse gas emissions

[Figure: line chart "Atmospheric stock of greenhouse gasses after catastrophe"; concentration in parts per million, 2050-2250; series: catastrophe in 2050, 2100, 2150, 2200; no catastrophe]

Figure 19: The impact of a catastrophe on the greenhouse gas stock

3.4. Comparison with Nordhaus’ DICE-2007 Model

As previously discussed, my model is similar to and largely based on Nordhaus' DICE-2007 model, but there are some key differences. DICE assumes that the emissions intensity of output and productivity growth decline over time. DICE has a backstop technology, an expensive price at which all emissions can be abated, which falls over time. It also has a more complex temperature model incorporating heat transfer between the oceans and the atmosphere. Lastly, DICE's damage function has an additional component specifically accounting for the impacts of sea level rise.


Due to these differences, my results are not directly comparable to those of DICE. It is nevertheless worth comparing them so that we can make an inference about the reasonableness of my results. The DICE-2007 results shown here were obtained from a spreadsheet version of the model available on Nordhaus' website.

Figure 20 shows the optimal emissions trajectory in my model and in Nordhaus'. Unfortunately, it highlights some major problems with my model. Compare the green series (my emissions without catastrophe risk) with the red series (DICE's emissions). The first problem with my results is that I grossly underestimate the initial baseline of emissions. I am unclear why this has occurred, since I used similar initial conditions to Nordhaus. It may be partially (but not entirely) explained by my assumed initial emissions intensity of output, which is about a third lower than Nordhaus'.

Secondly, Nordhaus' path of emissions is much more symmetric than my model's. This could be because Nordhaus' model assumes that the emissions intensity of output falls over time, slowing the growth of emissions, whereas mine does not.

[Figure: line chart of emissions (Gt CO2e per year), 2050-2250; series: catastrophe risk, no catastrophe risk, Nordhaus DICE-07]

Figure 20: Optimal emissions trajectory, comparison with Nordhaus

Nordhaus' emissions peak sooner and lower; as a result his temperature also peaks lower and sooner than in my model. This is shown in Figure 21.


[Figure: line chart of temperature increase above preindustrial levels (°C), 2050-2250; series: catastrophe risk, no catastrophe risk, Nordhaus DICE-07]

Figure 21: Temperature increase under optimal abatement, comparison with Nordhaus

Expenditure on abatement is shown in Figure 22. Nordhaus abates more of his greenhouse gases early, initially spending more of his output on abatement. He also takes longer to reach 100% abatement, so his expenditure peaks later than mine. Differences between the two models in the parameters controlling the emissions intensity of output and the cost of abating emissions produce the different curvature on the downward part of the curve.

[Figure: line chart of abatement cost (% of GWP), 2050-2250; series: catastrophe risk, no catastrophe risk, Nordhaus DICE-07]

Figure 22: Cost of greenhouse gas abatements, comparison with Nordhaus

There are significant differences between the results of my model and Nordhaus'. Some of these differences could reflect bugs in my model. However, I do not think these problems detract from my main argument: that drastically different results are obtained by IAMs when catastrophic risk is taken into account.


4. Conclusions

Uncertainty and catastrophic risk are central to determining the best policy for dealing with climate change. The interaction of uncertainty with risk aversion and irreversibility implies a more conservative approach to greenhouse gas emissions than would otherwise be optimal. This interaction can become so large that uncertainty dominates all other factors in the IAM, as shown by Weitzman (2009).

In my model I considered just one element of uncertainty: the risk of a discrete jump in the damage function as a result of a catastrophe. In each period there is some probability of a catastrophe occurring, and this probability increases with temperature. The optimisation chooses an outcome in every state of the world, so the optimal policy adapts to different realisations of catastrophes as they occur. Otherwise, I have made standard assumptions and kept the model simple.

My model has four key results:

Firstly, a significantly more stringent emissions policy is optimal in the presence of catastrophic risk. Failing to incorporate the risk of catastrophes into policy increases the risk of such a catastrophe occurring by 65% (from 12% to 20%). The optimal policy has total greenhouse gas emissions 45% lower than when catastrophic risk is ignored, and these emissions peak, and are completely halted, 35 years earlier. These results are qualitatively consistent with Keller et al. (2004), Guillerminet and Tol (2008) and Baranzini et al. (2003), who model similar catastrophic risks.

Secondly, the results are highly sensitive to the parameters controlling the likelihood and impact of a catastrophe. This means that accurate knowledge of catastrophic risk is crucial to precisely defining optimal abatement policies. This too is consistent with Keller et al. (2004) and Baranzini et al. (2003). The results are also sensitive to social attitudes to risk and time preference.

Thirdly, if a catastrophe is realised, the incentive to abate emissions is reduced. This is partially because there is no remaining risk of a second catastrophe. It is also because, post-catastrophe, the benefits of consumption smoothing are larger than those of investing in the now relatively more expensive abatements. This is consistent with Guillerminet and Tol (2008).

Fourth, in the next 50 years there is little difference between the optimal policies with and without catastrophic risk. This may be a result of my model's peculiarly long emissions ramp. However, if this result is robust, it suggests there is a window of opportunity to learn more about the risks associated with climate-related catastrophes before we are required to put policies in place to closely manage them.


My model does have major problems: I misspecified some parameters, my simulations had numerical problems, and I could not replicate Nordhaus' results. However, these weaknesses do not undermine the model's key qualitative result: the outcome of an IAM is significantly different in the presence of catastrophic risk than it would be otherwise.

This result has several implications for the direction of future research and policy on climate change.

Firstly, research in both climate science and economics should focus more on the implications of low probability, high cost catastrophe risk. Until recently, it was presumed that the central tendencies of climate change were sufficient for determining optimal policy, and much of the effort in climate science and economics has accordingly gone into modelling these central tendencies. My results support Weitzman's conclusion that low probability, high cost catastrophes play a role in climate policy which is just as important, if not more so. More research on catastrophic risk is needed to better inform climate policy.

Secondly, it is important to distinguish between stochastic and non-stochastic risk, because the two require different approaches. For a risk which is fundamentally chaotic, the emphasis should be on monitoring, identifying risk factors, and reducing those factors where economic. For a non-stochastic risk, more effort should be put into research and modelling to learn about the risk and identify critical points. Reducing uncertainty about these risks allows them to be managed in a more flexible manner.

Thirdly, before an IAM is used for policy analysis, it should be shown to deal adequately with the risk and uncertainty surrounding the problem. I agree with Weitzman (2009) that the burden of proof should lie with those who use such models to show that their results are robust to the inclusion of catastrophic risk. Ignoring uncertainty in integrated assessment is dangerous, and policies based on models which do so place society at undue risk.

Fourthly, there is a need for an explicit societal discussion about global attitudes to risk and time preference. The results of IAMs rest very strongly on these salient assumptions about risk aversion coefficients and discount rates, yet the issue is not well understood by non-economists and is seldom debated in the public arena.

Finally, the presence of uncertainty and catastrophic risk only strengthens the case for urgent international action to put meaningful long-term policies in place to effectively manage global greenhouse gas emissions. Any discussion about 'optimal' policy is empty if we cannot co-operate to put such policies in place.


Appendices

I. Code: Model File

# SIAM_v1-8
# A Stochastic Integrated Assessment Model
# Oliver Browne
# 20/6/2011

# Model File

reset;

param eps := 10^-5;

# t is the index of the time periods
# v is the index of when the catastrophe occurs:
#   if v = t, no catastrophe has occurred yet;
#   if v < t, the catastrophe occurred in period v.

#
# Estimated Parameters
#
param T;             # number of periods
param dur;           # duration of each period
param alpha;         # Cobb-Douglas production elasticity of capital
param Y0;            # initial output (trillion dollars US)
param L0;            # initial population (million)
param Linf;          # asymptotic population (million)
param K0;            # initial capital (trillion dollars 2000)
param Agrowth;       # annual growth of TFP
param Lgrowth;       # annual growth of population
param discrate;      # annual discount rate
param dpcnannual;    # annual depreciation
param rskavsion;     # risk aversion
param stock0;        # atmospheric greenhouse gas stock in 2010
param stock1750;     # atmospheric greenhouse gas stock in 1750
param stkbiosphere0; # biospheric greenhouse gas stock in 2010
param stkocean0;     # oceanic greenhouse gas stock in 2010
param Temp0;         # initial change in temperature above 1900
param Likefact;      # catastrophe probability coefficient
param Likeexp;       # catastrophe probability exponent
param Likeconst;     # catastrophe probability constant

param shock;         # shock caused by catastrophe
param abatcoef;      # abatement cost coefficient
param abatexp;       # abatement exponent
param Dcoef;         # damage coefficient
param Dexp;          # damage exponent
param emiscoef;      # emission coefficient
param Fcoef;         # change in forcing caused by stock
param Tcoef;         # change in temperature caused by radiative forcing
param b11;           # carbon flow: atmosphere -> atmosphere
param b21;           # carbon flow: biosphere -> atmosphere
param b12;           # carbon flow: atmosphere -> biosphere
param b22;           # carbon flow: biosphere -> biosphere
param b32;           # carbon flow: oceans -> biosphere
param b23;           # carbon flow: biosphere -> oceans
param b33;           # carbon flow: oceans -> oceans


#
# Calculated Parameters
#
param A0 := Y0/(L0^(1-alpha)*K0^alpha);           # initial TFP
param a := (1+Agrowth)^dur - 1;                   # TFP growth rate per period
param l := (1+Lgrowth)^dur - 1;                   # population growth rate per period
param A{t in 1..T, v in 1..t} := A0*(1+a)^(t-1);  # TFP
param delta := (1+discrate)^dur - 1;              # discount rate per period
param dpcn := (1+dpcnannual)^dur - 1;             # depreciation rate per period
param discfact{t in 1..T, v in 1..t} =            # discount factors
    if t = T then 1/delta*1/(1+delta)^(t-2)
    else 1/(1+delta)^(t-1);
param L{t in 1..T, v in 1..t} :=                  # population
    if t = 1 then L0 else L[t-1,1]*(Linf/L[t-1,1])^l;

#
# Variables
#
var s{t in 1..T, v in 1..t} >= 0 <= 1 := 0.2;     # savings rate (declaration assumed:
                                                  # it is used below but was missing
                                                  # from the original listing)
var Y{t in 1..T, v in 1..t} >= eps := 50;         # gross output
var Yadj{t in 1..T, v in 1..t} >= eps := 50;      # damage- and abatement-adjusted output
var K{t in 1..T, v in 1..t} >= eps := K0;         # capital
var c{t in 1..T, v in 1..t} =                     # individual consumption
    max(Yadj[t,v]*(1-s[t,v])*1000/L[t,v], eps);
var U{t in 1..T, v in 1..t} =                     # individual utility
    c[t,v]^(1-rskavsion)/(1-rskavsion) + 1;
var abate{t in 1..T, v in 1..t} >= eps <= 1 := 0.3;   # abatement rate
var abatecost{t in 1..T, v in 1..t}               # cost of abatements
    = 0.95^(t-1)*abatcoef*abate[t,v]^abatexp;
var Temp{t in 1..T, v in 1..t} >= eps <= 20 := Temp0; # temperature change above 1750
var F{t in 1..T, v in 1..t} >= eps := 1.6;        # radiative forcing
var D{t in 1..T, v in 1..t} >= eps;               # damage function
var stock{t in 1..T, v in 1..t} >= eps := stock0; # atmospheric stock of GHG
var catprob{t in 1..T, v in 1..t} =               # probability of a catastrophe
    if t = v then min(Likefact*exp(Likeexp*Temp[t,v]), 1) + Likeconst
    else 1;
var probfact{t in 1..T, v in 1..t} >= 0 <= 1;     # probability of being in state v at time t
var stkbiosphere{t in 1..T, v in 1..t} >= eps := stkbiosphere0; # stock of GHG in biosphere
var stkocean{t in 1..T, v in 1..t} >= eps := stkocean0;         # stock of GHG in ocean

#
# Optimisation
#
maximize PresUtility:
    sum{t in 1..T, v in 1..t} U[t,v]*discfact[t,v]*probfact[t,v];

subject to CapitalMotion {t in 2..T, v in 1..t}:
    K[t,v] = if t = v
        then (1-dpcn)*K[t-1,v-1] + s[t-1,v-1]*Yadj[t-1,v-1]*dur
        else (1-dpcn)*K[t-1,v] + s[t-1,v]*Yadj[t-1,v]*dur;

subject to Output {t in 1..T, v in 1..t}:
    log(max(Y[t,v],eps)) = log(A[t,v]) + alpha*log(max(K[t,v],eps))
        + (1-alpha)*log(L[t,v]);

subject to AdjustedOutput {t in 1..T, v in 1..t}:
    Yadj[t,v] = Y[t,v]*(1-abatecost[t,v])/D[t,v];

subject to cumlstock {t in 2..T, v in 1..t}:
    stock[t,v] = if t = v
        then emiscoef*Y[t,v]*(1-abate[t,v])*dur + b11*stock[t-1,v-1] + b21*stkbiosphere[t-1,v-1]
        else emiscoef*Y[t,v]*(1-abate[t,v])*dur + b11*stock[t-1,v] + b21*stkbiosphere[t-1,v];

subject to biostock {t in 2..T, v in 1..t}:
    stkbiosphere[t,v] = if t = v
        then b12*stock[t-1,v-1] + b22*stkbiosphere[t-1,v-1] + b32*stkocean[t-1,v-1]
        else b12*stock[t-1,v] + b22*stkbiosphere[t-1,v] + b32*stkocean[t-1,v];


subject to oceanstock {t in 2..T, v in 1..t}:
    stkocean[t,v] = if t = v
        then b23*stkbiosphere[t-1,v-1] + b33*stkocean[t-1,v-1]
        else b23*stkbiosphere[t-1,v] + b33*stkocean[t-1,v];

subject to radforce {t in 1..T, v in 1..t}:
    F[t,v] = Fcoef*log(max(stock[t,v]/stock1750,eps))/log(2);

subject to tempchange {t in 2..T, v in 1..t}:
    Temp[t,v] = if t = v
        then Temp[t-1,v-1] + Tcoef*(F[t,v] - 1.3*Temp[t-1,v-1])
        else Temp[t-1,v] + Tcoef*(F[t,v] - 1.3*Temp[t-1,v]);

subject to calcprobfact {t in 2..T, v in 1..t}:
    probfact[t,v] = if t = v
        then probfact[t-1,v-1]*(1-catprob[t-1,v-1])
        else probfact[t-1,v]*catprob[t-1,v];

subject to damage {t in 1..T, v in 1..t}:
    D[t,v] = if t = v
        then (1 + Dcoef*Temp[t,v]^Dexp)
        else (1 + Dcoef*Temp[t,v]^Dexp)/shock;

subject to InitialCapital:     K[1,1] = K0;
subject to InitialStock:       stock[1,1] = stock0;
subject to InitialTemperature: Temp[1,1] = Temp0;
subject to InitialProbfact:    probfact[1,1] = 1;
subject to InitialOcean:       stkocean[1,1] = stkocean0;
subject to InitialBiosphere:   stkbiosphere[1,1] = stkbiosphere0;


II. Code: Data File

# SIAM_v1-8
# A Stochastic Integrated Assessment Model
# Oliver Browne
# 20/6/2011

# Datafile

#
# Parameters
#
param T := 70;            # number of periods
param dur := 5;           # duration of each period (years)
param alpha := 0.3;       # Cobb-Douglas production elasticity of capital
param Y0 := 67.8;         # initial output (trillion dollars US)
param L0 := 6437;         # initial population (million)
param Linf := 8700;       # asymptotic population (million)
param K0 := 180;          # initial capital (trillion dollars 2000)
param Agrowth := 0.01;    # annual growth of TFP
param Lgrowth := 0.0403;  # annual growth of population
param discrate := 0.015;  # annual discount rate
param dpcnannual := 0.01; # annual depreciation
param rskavsion := 2;     # risk aversion
param stock0 := 800;      # atmospheric greenhouse gas stock in 2010
param stock1750 := 596.4; # atmospheric greenhouse gas stock in 1750
param stkbiosphere0 := 1600;  # biospheric greenhouse gas stock in 2010
param stkocean0 := 10010;     # oceanic greenhouse gas stock in 2010
param Temp0 := 0.731;     # initial change in temperature above 1900
param shock := 0.75;      # shock caused by catastrophe (share of output remaining)
param abatcoef := 0.07;   # abatement cost coefficient
param abatexp := 2.8;     # abatement exponent
param Dcoef := 0.00284;   # damage coefficient
param Dexp := 2;          # damage exponent
param Likefact := 0.00081;    # catastrophe probability coefficient
param Likeexp := 0.54;        # catastrophe probability exponent
param Likeconst := -0.00034;  # catastrophe probability constant
param emiscoef := 0.11;   # emission coefficient
param Fcoef := 3.8;       # change in forcing caused by stock
param Tcoef := 0.19;      # change in temperature caused by radiative forcing
param b11 := 0.9417;      # carbon flow: atmosphere -> atmosphere
param b21 := 0.0232;      # carbon flow: biosphere -> atmosphere
param b12 := 0.0583;      # carbon flow: atmosphere -> biosphere
param b22 := 0.9743;      # carbon flow: biosphere -> biosphere
param b32 := 0.0004;      # carbon flow: oceans -> biosphere
param b23 := 0.0025;      # carbon flow: biosphere -> oceans
param b33 := 0.9996;      # carbon flow: oceans -> oceans


III. Table of Parameters

Variable | Description | My parameters | Nordhaus' parameters | Units | Notes
T | Number of time periods | 70 | 50 | |
dur | Duration of time periods | 5 | 10 | years |
α | Income share of capital | 0.3 | 0.3 | |
Y(2010) | Initial output | 67.8 | 67.8 | trillion dollars |
L(2010) | Initial population | 6437 | 6437 | million people |
L∞ | Asymptotic population | 8700 | 8700 | million people |
K(2010) | Initial capital | 180 | 180 | trillion dollars 2000 |
a | Productivity growth rate | 1 | 3 | % per annum | Nordhaus starts at 3% and declines by 16% per decade
l | Adjustment rate of population growth | 4.03 | 4.03 | % per annum |
ρ | Social rate of time preference | 1.5 | 1.5 | % per annum |
δK | Depreciation | 1 | 10 | % per annum | I accidentally mis-specified this parameter for all my simulations.
η | Risk aversion | 2 | 2 | |
M_AT(2010) | Atmospheric stock of GHG in 2010 | 800 | 800 | Gt CO2e |
M_AT(1750) | Atmospheric stock of GHG in 1750 | 596.4 | 596.4 | Gt CO2e |
M_B(2010) | Biospheric stock of GHG in 2010 | 1600 | 1600 | Gt CO2e |
M_O(2010) | Oceanic stock of GHG in 2010 | 10010 | 10010 | Gt CO2e |
T(2010) | Initial increase in temperature above pre-industrial levels | 0.731 | 0.731 | °C |
θ1(2010) | Initial abatement coefficient | 7 | 6.5 | |
 | Rate of decrease of abatement coefficient | 0.01 | Endogenous | % per annum | Nordhaus' abatement coefficients are endogenously determined by his technological growth module
θ2 | Abatement exponent | 2.8 | 2.8 | |
κ1 | Damage function coefficient | 0.00284 | 0.00284 | | Nordhaus also includes damages from sea level rise
κ2 | Damage function exponent | 2 | 2 | |
σ | Emissions function coefficient | 0.11 | Initially 0.14 | | Nordhaus' emissions coefficients fall over time at a rate endogenously determined by his technological growth module
B | Emissions function constant | 0 | 1.6 | | Nordhaus' external emissions fall over time by 20% per decade to zero
η_F | Radiative forcing function coefficient | 3.8 | 3.8 | |
ξ1 | Temperature adjustment coefficient on radiative forcing | 0.19 | 0.19 | | Nordhaus has a more complex function for temperature adjustment, including ocean temperatures
ξ2 | Temperature adjustment coefficient on previous temperature | 0.753 | 0.753 | |
b1,1 | Carbon flow rate: atmosphere to atmosphere | 0.9417 | 0.9417 | % flow per 5 years |
b1,2 | Carbon flow rate: atmosphere to biosphere | 0.0583 | 0.0583 | % flow per 5 years |
b2,1 | Carbon flow rate: biosphere to atmosphere | 0.0232 | 0.0232 | % flow per 5 years |
b2,2 | Carbon flow rate: biosphere to biosphere | 0.9743 | 0.9743 | % flow per 5 years |
b2,3 | Carbon flow rate: biosphere to ocean | 0.0025 | 0.0025 | % flow per 5 years |
b3,2 | Carbon flow rate: ocean to biosphere | 0.0004 | 0.0004 | % flow per 5 years |
b3,3 | Carbon flow rate: ocean to ocean | 0.9996 | 0.9996 | % flow per 5 years |
1−ω | Shock (share of output remaining after a catastrophe) | 0.75 | NA | | Not part of DICE model
γ1 | Catastrophe probability function coefficient | 0.00081 | NA | | Not part of DICE model
γ2 | Catastrophe probability function exponent | 0.54 | NA | | Not part of DICE model
γ3 | Catastrophe probability function constant | -0.00034 | NA | | Not part of DICE model

IV. Fitting Damage Function Parameters

Fitting Probability Parameters (Oliver Browne)

I used Excel's Solver to find the parameters gamma1, gamma2 and gamma3 which minimise the sum of squared residuals between the predicted cumulative probability of catastrophe and the probabilities estimated in Kriegler et al. (2009). The fitted function, used to calculate the probability of a catastrophe in my IAM, is:

Catprob = gamma1 * exp(gamma2 * Temp) - gamma3

Fitted parameters: gamma1 = 0.00081, gamma2 = 0.54, gamma3 = 0.00034

Cumulative probability of a catastrophe by 2200:
  Medium temperature corridor: 0.160005 (Kriegler et al.: 0.16)
  High temperature corridor: 0.584029 (Kriegler et al.: 0.56)
  Sum of squared residuals: 0.000577395

Year | Medium Temp (°C) | Medium CatProb | High Temp (°C) | High CatProb
2010 | 0.00 | 0.00013 | 0.00 | 0.0000
2015 | 0.19 | 0.00023 | 0.20 | 0.0002
2020 | 0.36 | 0.00034 | 0.44 | 0.0004
2025 | 0.52 | 0.00046 | 0.84 | 0.0005
2030 | 0.68 | 0.00059 | 1.21 | 0.0006
2035 | 0.82 | 0.00073 | 1.56 | 0.0008
2040 | 0.96 | 0.00089 | 1.87 | 0.0009
2045 | 1.09 | 0.00106 | 2.17 | 0.0011
2050 | 1.21 | 0.00124 | 2.44 | 0.0013
2055 | 1.32 | 0.00143 | 2.70 | 0.0017
2060 | 1.43 | 0.00163 | 2.93 | 0.0021
2065 | 1.53 | 0.00184 | 3.15 | 0.0027
2070 | 1.63 | 0.00206 | 3.36 | 0.0033
2075 | 1.72 | 0.00229 | 3.55 | 0.0040
2080 | 1.81 | 0.00253 | 3.73 | 0.0048
2085 | 1.89 | 0.00278 | 3.90 | 0.0057
2090 | 1.97 | 0.00303 | 4.05 | 0.0067
2095 | 2.04 | 0.00329 | 4.20 | 0.0078
2100 | 2.11 | 0.00355 | 4.34 | 0.0090
2105 | 2.18 | 0.00382 | 4.47 | 0.0102
2110 | 2.24 | 0.00409 | 4.59 | 0.0116
2115 | 2.30 | 0.00437 | 4.71 | 0.0130
2120 | 2.36 | 0.00465 | 4.82 | 0.0145
2125 | 2.41 | 0.00493 | 4.92 | 0.0161
2130 | 2.47 | 0.00522 | 5.01 | 0.0177
2135 | 2.52 | 0.00550 | 5.11 | 0.0194
2140 | 2.56 | 0.00579 | 5.19 | 0.0211
2145 | 2.61 | 0.00608 | 5.27 | 0.0230
2150 | 2.65 | 0.00636 | 5.35 | 0.0248
2155 | 2.70 | 0.00665 | 5.43 | 0.0267
2160 | 2.74 | 0.00693 | 5.49 | 0.0286
2165 | 2.77 | 0.00722 | 5.56 | 0.0306
2170 | 2.81 | 0.00750 | 5.62 | 0.0326
2175 | 2.85 | 0.00778 | 5.68 | 0.0346
2180 | 2.88 | 0.00806 | 5.74 | 0.0366
2185 | 2.91 | 0.00833 | 5.79 | 0.0386
2190 | 2.94 | 0.00860 | 5.84 | 0.0407
2195 | 2.97 | 0.00889 | 5.92 | 0.0427
2200 | 3.00 | 0.00903 | 6.00 | 0.0451


References

Ackerman, F., S. J. DeCanio, R. B. Howarth and K. Sheeran, “Limitations of Integrated Assessment Models of Climate Change,” Climatic Change 95 (April 2009), 297-315.

Ackerman, F., and I. J. Finlayson, “The Economics of Inaction on Climate Change: A Sensitivity Analysis,” Climate Policy 6 (2006), 509-526.

Alley, R. B., J. Marotzke, W. D. Nordhaus, J. T. Overpeck, D. M. Peteet, R. A. Pielke Jr., R. T. Pierrehumbert, P. B. Rhines, T. F. Stocker, L. D. Talley and J. M. Wallace, “Abrupt Climate Change,” Science 299 (March 2003), 2005-2010.

Anthoff, D., and R. S. J. Tol, “Climate Policy Under Fat-Tailed Risk: An Application of FUND,” Working Paper No. 348, ESRI, June 2010.

Baranzini, A., M. Chesney and J. Morisset, “The Impact of Possible Climate Catastrophes on Global Warming Policy,” Energy Policy 31 (June 2003), 691-701.

Fell, H., I. A. MacKenzie and W. A. Pizer, “Prices versus Quantities versus Bankable Quantities,” Discussion Paper, Resources for the Future, July 2008.

Guillerminet, M.-L., and R. S. J. Tol, “Decision Making Under Catastrophic Risk and Learning: The Case of the Possible Collapse of the West Antarctic Ice Sheet,” Climatic Change 91 (September 2008), 193-209.

Hall, D. C., and R. J. Behl, “Integrating Economic Analysis and the Science of Climate Instability,” Ecological Economics 57 (2006), 442-465.

Hope, C., “The Marginal Impact of CO2 from PAGE2002: An Integrated Assessment Model Incorporating the IPCC’s Five Reasons for Concern,” The Integrated Assessment Journal 6 (2006) 19-56.

IPCC, “Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change”, [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M.Tignor and H.L. Miller (eds.)]. (Cambridge, Cambridge University Press, 2007).

IPCC, “Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change”, [M.L. Parry, O.F. Canziani, J.P. Palutikof, P.J. van der Linden and C.E. Hanson (eds.)]. (Cambridge, Cambridge University Press, 2007).

IPCC, “Mitigation of Climate Change. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change”, [B. Metz, O.R. Davidson, P.R. Bosch, R. Dave, L.A. Meyer(eds.)]. (Cambridge, Cambridge University Press, 2007).

Joos, G., G. Muller-Furstenberger and G. Stephan, “Correcting the Carbon Cycle Representation: How Important Is It for the Economics of Climate Change?” Environmental Modeling and Assessment 4 (1999), 133-140.

X

Keller, K., B. M. Bolker and D. F. Bradford, “Uncertain Climate Thresholds and Optimal Economic Growth,” Journal of Environmental Economics and Management 48 (2004), 723-741.

Kemfert, C., “An Integrated Assessment Model of Economy-Energy-Climate: The Model WIAGEM,” Integrated Assessment 3 (2002), 281-298.

Kennett, J. P., K. G. Cannariato, I. L. Hendy and R. J. Behl, Methane Hydrates in Quaternary Climate Change: The Clathrate Gun Hypothesis (Washington, DC: American Geophysical Union, 2002).

Kriegler, E., J. W. Hall, H. Held, R. Dawson and H. J. Schellnhuber, “Imprecise Probability Assessment of Tipping Points in the Climate System,” PNAS 106 (March 2009), 5041-5046.

Lenton, T. M., H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf and H. J. Schellnhuber, “Tipping Elements in the Earth’s Climate System,” PNAS 105 (February 2008), 1786-1793.

McKibben, B., The Global Warming Reader (New York: OR Books, 2011).

Meinshausen, M., N. Meinshausen, W. Hare, S. C. B. Raper, K. Frieler, R. Knutti, D. J. Frame and M. R. Allen, “Greenhouse-Gas Emission Targets for Limiting Global Warming to 2°C,” Nature 458 (April 2009), 1158-1162.

Pizer, W. A., “Prices vs. Quantities Revisited: The Case of Climate Change,” Discussion Paper No. 98-02, Resources for the Future, October 1997.

Pindyck, R. S., “Uncertainty in Environmental Economics,” Review of Environmental Economics and Policy 1 (2007), 45-65.

Ruszczynski, A., and A. Shapiro (eds.), Stochastic Programming, Handbooks in Operations Research and Management Science, Vol. 10 (Amsterdam: Elsevier Science, 2003).

Stern, N., The Economics of Climate Change: The Stern Review (Cambridge: Cambridge University Press, 2006).

Tol, R. S. J., “Is the Uncertainty about Climate Change too Big for a Cost-Benefit Analysis?” Climatic Change 56(3) (2003), 265-289.

Nordhaus, W. D., “Accompanying Notes and Documentation on Development of DICE-2007 Model: Notes on DICE 2007.delta.v8 as of September 21, 2007,” Lab Notes, Yale University, 2007. Available on http://nordhaus.econ.yale.edu/DICE2007.htm. Retrieved June 19, 2011.

Nordhaus, W. D., A Question of Balance (New Haven: Yale University Press, 2008).

Nordhaus, W. D., “An Analysis of the Dismal Theorem,” Discussion Paper, Yale University, January 2009. Available from http://nordhaus.econ.yale.edu/recent_stuff.htm. Retrieved June 20, 2011.

Nordhaus, W. D., “Economic Aspects of Global Warming in a Post-Copenhagen Environment,” PNAS 107(26) (2010), 11721-11726.

Nordhaus, W. D., and J. Boyer, Warming the World: Economic Models of Global Warming (Cambridge: MIT Press, 2000).


Weitzman, M. L., “Prices vs. Quantities” Review of Economic Studies 41 (1974), 477–491.

Weitzman, M. L., “Optimal Rewards for Economic Regulation,” American Economic Review 68 (1978), 683–691.

Weitzman, M. L., “On Modelling and Interpreting the Economics of Catastrophic Climate Change,” The Review of Economics and Statistics 91(1) (2009), 1-19.

Wigley, T., “MAGICC/SCENGEN 5.3: User Manual (Version 2)”, User Manual, University Corporation for Atmospheric Research, Boulder, 2008. Available from http://www.cgd.ucar.edu/cas/wigley/magicc/UserMan5.3.v2.pdf . Retrieved June 20, 2011.
