Optimal Emissions of Greenhouse Gasses Under Stochastic Catastrophe Risk
Oliver Browne
Supervisor: Dr Stephen Poletti
This dissertation is presented in partial fulfilment of the requirements for the Degree of BA(Hons) of The University of Auckland.
6/27/2011
Abstract
A central issue in climate change economics is estimating the optimal trajectory of greenhouse gas emissions. The standard tools for estimating these trajectories are Integrated Assessment Models (IAMs). Most IAMs are non-stochastic and parameterised by scientific best estimates of what are profoundly uncertain parameters. Because they fail to model this uncertainty, these IAMs may understate the extent to which we should abate future greenhouse gas emissions. One risk which is not captured by existing IAMs is the possibility of a low probability, high cost catastrophe, in which there is a dramatic change in the earth’s climate, leading to large and rapid economic damages. To model this risk I construct a modified version of Nordhaus’ DICE model in which there are multiple states of the world, with and without climate catastrophes, for each period of time. The probability of a catastrophe occurring is governed by the degree to which global average temperatures have increased above pre-industrial levels. This model with stochastic tipping points leads to a more conservative emissions trajectory than if there is no risk of such a tipping point. This suggests one way in which the non-stochastic IAMs currently used for policy analysis may overstate the socially optimal level of greenhouse gas emissions.
Table of Contents
Abstract
1. Introduction
2. Further Background
2.1. Abrupt Climate Change
2.2. Uncertainty, Irreversibility and Option Values
2.3. Analytic Economic Models of Environmental Catastrophe
2.4. Uncertainty in Integrated Assessment Modelling
2.5. Fat Tailed Uncertainty and the Dismal Theorem
2.6. Implementing Emissions Abatements
3. Model
3.1. Ramsey Growth
3.2. Global Warming Module
3.3. Greenhouse Gas Abatements
3.4. Stochastic Catastrophe Risk
3.5. Parameterisation, Coding and Implementation
4. Results
4.1. Optimal Policy with and without Catastrophe Risk
4.2. Sensitivity of Results to Catastrophe Parameters
4.3. Optimal Policy Recovery after a Catastrophe
4.4. Comparison with Nordhaus’ DICE-2007 Model
5. Conclusions
Appendices
I. Code: Model File
II. Code: Data File
III. Table of Parameters
IV. Fitting Damage Function Parameters
References
1. Introduction
There is increasing scientific consensus that human-induced climate change is causing an appreciable warming of our planet (IPCC, 2007a). In spite of this, there is still much uncertainty and ambiguity about the extent to which this is occurring, the impacts of this warming on the environment, and the cost of these impacts on human activities and welfare. Adding to this ambiguity, researchers are also beginning to recognise the risks of climate catastrophes or irreversible climate tipping points, which although unlikely, could cause much greater damage than average forecasts currently predict. Given these risks, it is important that policies are in place to ensure that society manages its emissions of greenhouse gasses in a manner that achieves the best social outcomes.
Lord Nicholas Stern (2006) in his review of the subject describes climate change as the “biggest market failure the world has ever seen”. He says this because at its core climate change is an economic problem. More specifically, climate change is what economists call an externality problem. An externality arises when a person’s action (such as the emission of greenhouse gasses) imposes a cost on others (here, damage from climate change that reduces future welfare) for which the person taking the action does not pay the full cost. Because of this, emitters do not have the incentive to behave in a socially optimal manner.
Economics has a simple solution to this problem: emitters should pay the full social cost of their emissions. This can be implemented by imposing a tax on carbon emissions, or (almost) equivalently and seemingly more likely, a tradable cap on emissions. This raises the question: what is the social cost of greenhouse emissions? Or, equivalently, what should be the optimal trajectory of greenhouse emissions over time?
In order to design a sensible climate policy it is necessary to know the extent to which society should reduce its greenhouse emissions to prevent climate change. Typically, when scientists and environmentalists discuss global warming policy, they advocate fixed targets such as 350 ppm CO2e (McKibben, 2011) or 2°C above pre-industrial levels (Meinshausen et al., 2009). They justify these targets by arguing that above these limits both the costs of climate change on human activities and the uncertainty about the eventual outcome are too large.
To economists, this approach is not good enough because it does not consider the costs to society of reducing greenhouse gas emissions. Money invested in technology or capital to reduce greenhouse gasses cannot be spent on consumption or other types of technological or capital investment which could increase our future welfare. A better policy would be one which trades off the costs to society of a changing climate against the costs to society of undertaking expensive abatements.
This cost-benefit principle has given rise to the class of economic models called ‘Integrated Assessment Models’ (IAMs). These models typically combine a neo-classical economic growth model with a simplified global climate model and a choice to spend output ‘abating’ greenhouse gas emissions to mitigate global warming. In these models, a benevolent central planner chooses a fraction of gross world product in each period to invest in capital and greenhouse gas abatement in order to maximise the present value of expected future utility.
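To fix ideas, the planner’s problem in a DICE-style IAM can be written compactly as follows. This is a stylised statement, not the exact DICE-2007 objective (which is set out in Section 3); the notation (savings rate s(t), abatement rate mu(t), population L(t), per-capita consumption c(t) and pure rate of time preference rho) is introduced formally there.

\[
\max_{\{s(t),\, \mu(t)\}} \; \sum_{t=0}^{T} \frac{L(t)\, U(c(t))}{(1+\rho)^{t}}
\]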
IAMs are the standard economic tool used to find the optimal trajectory of greenhouse gas emissions. They have gained widespread use in the academic literature (Nordhaus 2008, Tol 2003, Hope 2002) as well as in governmental (Stern 2006, Mendelsohn 2004) and intergovernmental reviews (IPCC 2007c).
The main weakness of IAMs is that their results are highly sensitive to a number of crucial assumptions. These assumptions include social preferences, such as discount rates and risk aversion parameters, which are difficult to know or aggregate across societies with vastly different values. Other assumptions depend on the future paths of technological progress, such as productivity growth and the future cost of emissions abatements. Finally, some assumptions concern the behaviour of the climate, for which much of the science is imperfect or still developing, and the relationship between climate and economy, where many of the impacts are also unknown (Ackerman et al., 2009).
Although there is profound uncertainty about the best parameters for these models, few IAMs adequately take uncertainty into account. Often authors do not optimise their models under uncertainty; instead they incorporate uncertainty outside of the optimisation, leading to qualitatively different results. When they do incorporate uncertainty, it is often in a static rather than a dynamic manner. As such, they do not allow optimal policies to adapt as information is learnt about the uncertain parameters. Properly incorporating uncertainty into such models makes them more complex and difficult to solve. Because of this, model builders need to be selective and incorporate only the most pertinent aspects of uncertainty into their model.
I develop a model which considers the possibility of a rapid and large-scale change in the climate if a so-called ‘climate catastrophe’ occurs. This is something that most existing IAMs fail to address (Keller et al. 2004 and Baranzini et al. 2003 are notable exceptions). The historic record shows that the earth’s climate is prone to rapid flickering changes, repeatedly oscillating in and out of ‘ice ages’ for the duration of the historical record (Alley et al., 2003). Current scientific models have a poor understanding of the mechanisms by which this occurs. Because of this, current estimates of the probability of such a catastrophe occurring are limited to those derived from surveys of the subjective opinions of climate experts.
This dissertation attempts to replicate and extend Nordhaus’ DICE Model, one of the most cited IAMs in the literature. In the extension, there is a small risk of a one-off catastrophe occurring in each period. The impact of a catastrophe is a permanent 25% reduction in Gross World Product (GWP). The probability of a catastrophe occurring in any period is an increasing function of the atmospheric stock of greenhouse gasses.
The results show that when catastrophic risk is incorporated into the model, the optimal trajectory of greenhouse gas emissions is more stringent than when the risk is excluded. Incorporating catastrophe risk leads to an optimal policy with 45% lower total greenhouse gas emissions. Greenhouse gas emissions peak, and are completely halted, 35 years earlier. Under the assumptions in my model, failing to incorporate the risk of catastrophes into policy increases the risk of such a catastrophe occurring by 65%. This suggests that the non-stochastic IAMs which are currently widely used for policy purposes are overstating the socially optimal trajectory of greenhouse gasses.
The remainder of this dissertation proceeds as follows: Section 2 presents an overview of the science of catastrophic climate change, surveys the economic tools available to model and estimate the optimal emission reductions under catastrophic risk, and briefly discusses policies to implement such emissions reductions. Section 3 discusses the details of the model and Section 4 presents the results. Section 5 concludes with the implications of my results for current global warming policy.
2. Further Background
This section will give background in key areas to provide a better understanding of the context of the model which I build.
In this section, firstly, I look at the scientific evidence on the risks of a rapid catastrophic climate change. Secondly, I discuss the role that uncertainty plays in environmental economics and then review the literature of analytic economic models which deal with the risk of environmental catastrophe. Additional background is given on IAMs, their results, and how they have attempted to deal with uncertainty and catastrophe. I discuss Weitzman’s Dismal Theorem, which argues that the uncertainty around climate change is so large that IAMs should focus more on unlikely catastrophes than on the most likely outcome. Finally, there is a brief discussion of the practical implementation of the emissions reductions prescribed by IAMs.
2.1. Abrupt Climate Change
Paleoclimatic studies show the earth’s climate has historically been highly unstable and prone to rapid shifts. These shifts are best exemplified by the historic oscillations of the earth’s climate into and out of ice ages, which take place over an extended timescale of hundreds or even thousands of years. More recently, researchers have found evidence of historic patterns of frequent climate flickering, where the climate can change rapidly over the course of only a few years or decades (Hall and Behl, 2006). The temperature changes during such fluctuations are estimated to be between a third and a half as large as the change between the last ice age and today (up to 7°C in mid-latitude regions).
This has led many researchers to consider the prospect of ‘abrupt climate change’. The US National Research Council defines abrupt climate change as occurring “when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the system itself and much faster than the cause” (Alley et al, 2003).
Scientists have used a variety of old and new explanations for these climate flickers. One long-standing explanation is the shutdown of the North Atlantic Thermohaline circulation (Hall and Behl, 2006). This is a deep-sea conveyor belt which moves warm water from the tropics up to the higher latitudes and is responsible for warming Europe by several degrees Celsius. A rapid shutdown of this circulation would induce rapid cooling and glaciation throughout Europe and North America. Through a variety of feedback mechanisms, such as greater albedo from increased ice cover reflecting away more radiation, this would lead to widespread cooling around the globe.
A more novel explanation for this process is the so-called “Clathrate Gun Hypothesis” (Kennett et al., 2003). This theory postulates that a small warming of the oceans at an intermediate depth (400-1000m) could trigger a rapid release of methane hydrates stored in the ocean floors. Methane is a powerful greenhouse gas and its release on such a large scale would trigger runaway global warming.
To consider the risks of such climate instability, it is important to know, firstly, exactly how the mechanisms detailed above would occur; secondly, the likelihood and possible thresholds for triggering these mechanisms; and thirdly, the impact that the ensuing events would have on our climate and economy. There is currently insufficient data and insufficient knowledge to accurately model the underlying processes. As a result, current state-of-the-art scientific models can adequately quantify neither the likelihood of these mechanisms occurring nor the consequences of the resulting nonlinear change in climate (Kriegler et al., 2009).
Estimates of the likelihood and impacts of such catastrophic events are limited to surveys of the subjective opinions of experts on these subjects. Lenton et al. (2008) discuss nine key tipping elements in the earth’s climate. They survey a panel of experts, asking them to give the threats a relative ranking of their sensitivity and uncertainty. The survey finds five of the nine elements to be most pertinent. Of these five, the melting of the Greenland Ice Sheet is evaluated to be the most sensitive and least uncertain. The experts assess the collapse of the Thermohaline circulation to be the least sensitive but in the middle in terms of uncertainty. The impacts of global warming on the West Antarctic Ice Sheet, the Amazon Rainforest and the El Nino Southern Oscillation were ranked the most uncertain.
Kriegler et al. (2009) similarly use subjective estimates of a panel of experts to estimate quantitative probabilities of five different tipping points (including most of the ones mentioned above). The experts also predicted the interactions between the tipping points (for example, how much more likely rapid dieback of the Amazon would be if El Nino patterns were to suddenly intensify). Kriegler et al. used this to calculate cumulative probabilities of these catastrophes occurring.
The data from both surveys show a very large range of opinions and little agreement within the panels. Because their methods aggregate the subjective opinions of experts rather than peer-reviewed models, these estimates may be subject to biases or other systematic errors. For example, an expert who spends much of his time thinking about a certain scenario in the absence of objective data may come to believe that this scenario is more likely to happen than it really is.
Kriegler et al. estimate that the probability of at least one of their tipping points being reached is at least 0.16 on a moderate emissions trajectory and at least 0.56 if we stay on a high emissions trajectory. These estimates are used as the basis for the catastrophe probability function constructed later in this dissertation.
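To illustrate how such survey figures can anchor a probability function, the sketch below fits a simple per-century hazard of the form p(T) = 1 - exp(-beta*T) to the two Kriegler et al. probabilities, treating them as point targets. The exponential form, the warming levels assigned to the ‘moderate’ (2°C) and ‘high’ (4°C) trajectories, and the use of Python with scipy are my own illustrative assumptions; the actual probability function used in the model is constructed in Section 3.4.

# Illustrative sketch: calibrate a per-century catastrophe hazard
# p(T) = 1 - exp(-beta * T) to survey-based probabilities.
# The temperature anchors (2 C moderate, 4 C high) are assumptions
# made for this example, not values from Kriegler et al. (2009).
import numpy as np
from scipy.optimize import least_squares

anchors_temp = np.array([2.0, 4.0])   # warming above pre-industrial (C)
anchors_prob = np.array([0.16, 0.56]) # lower-bound probabilities from the survey

def residuals(beta):
    return 1.0 - np.exp(-beta * anchors_temp) - anchors_prob

fit = least_squares(residuals, x0=[0.1])
beta = fit.x[0]
print(f"fitted beta = {beta:.3f}")
for T in (1.0, 2.0, 3.0, 4.0):
    print(f"T = {T:.0f} C -> catastrophe probability {1 - np.exp(-beta * T):.2f}")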
A final issue to consider for the purpose of this dissertation is whether abrupt climate change is a tipping point phenomenon, where there is a non-random but unknown threshold at which a catastrophe will occur, or a stochastic phenomenon, where a catastrophe is possible at any time but becomes more likely the more we perturb the climate.
If we consider a relatively certain tipping element such as the melting of the Greenland Ice Sheet, then it would make sense to treat it as a tipping point phenomenon. Although the mechanics of ice sheet melt are not well understood, the sheer size and physical mechanics of the ice sheet suggest that there will be some critical threshold after which a catastrophe becomes inevitable.
However, if we look at the scientific literature on ‘climate flickering’, there appears to be no set threshold in any observable variable at which the climate flips (Hall and Behl, 2006). Changes in historic climate appear to be unpredictable and chaotic in nature. It is for this reason that I have chosen to model climate catastrophes as a random process as opposed to one with an unknown threshold.
2.2. Uncertainty, Irreversibility and Option Values
As we have seen, the science of global warming is full of uncertainty, not only about the incremental costs of global warming but also about the risks of random flickers in climate. In addition, two more uncertainties should be considered (Heal and Kristrom, 2003). Firstly, there is economic uncertainty: the relationship between the environmental impacts of climate change and human welfare. As an example, there is much economic uncertainty about the impact of sea level rise on patterns of migration, production and human welfare in coastal regions. Secondly, there is policy uncertainty, that is to say, uncertainty about the relationship between any given policy instrument and its outcomes. For example, what level of carbon tax would be required to reduce emissions by 5 or 10 or 20 percent?
Uncertainty has several implications for IAMs. Firstly, the distributions of many of the parameters associated with climate change are asymmetric: they have long, heavy tails, in the sense that there is a small probability of something highly costly happening. Because of this, the expected cost of a policy may differ greatly from the most likely cost. In the case of global warming, there is a large tail of unlikely but highly damaging outcomes. This implies higher expected costs of global warming and so a more conservative emissions policy. This is important for analysing IAMs, which are often parameterised by such best estimates rather than expected values.
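As a minimal numerical illustration of this first point: for a right-skewed damage distribution, the expected damage can sit well above the most likely damage, so a model parameterised by the ‘best estimate’ understates expected costs. The lognormal parameters below are arbitrary, chosen only to show the size of the gap.

# Mean vs mode of a right-skewed (lognormal) damage distribution.
# Parameters are arbitrary, for illustration only.
import numpy as np

mu, sigma = 1.0, 1.0                      # parameters of log-damage
mode = np.exp(mu - sigma**2)              # most likely damage, about 1.0
mean = np.exp(mu + sigma**2 / 2)          # expected damage, about 4.5
print(f"most likely damage: {mode:.2f}, expected damage: {mean:.2f}")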
Secondly, it is usually assumed that society is risk averse, and as such is willing to pay a risk premium to avoid uncertain outcomes even if the expected costs are the same. This is important because the uncertainty about the impacts of climate change grows as the carbon stock increases, so we should be willing to pay more to avoid climate change than expected damages alone would imply.
Thirdly, uncertainty can combine with irreversibility (or temporal rigidity) to give rise to option values. It takes a long time for carbon to cycle through the atmosphere. Under Nordhaus’ specifications, a perturbation in atmospheric carbon will take over 500 years to die away (Joos et al., 1999). As such, the impact of emissions is irreversible in the short term (unless we can develop technologies to sequester atmospheric CO2).
To consider the impact of irreversibility, assume there is a stock good with a non-random threshold above which a catastrophe will occur. If a catastrophe is accidentally triggered and the stock is reversible, then the stock can be reduced back below the threshold and society will only have to pay the costs of a one-off catastrophe for a short period. However, if we assume that this catastrophe is irreversible, then the catastrophe will impose ongoing costs indefinitely into the future.
If there is no uncertainty, then irreversibility is not important because an optimal decision can be made a priori. But in the presence of uncertainty, a more conservative approach is optimal than under perfect information. This result suggests a kind of ‘precautionary principle’: exposure to risks that may later be regretted should be reduced. The difference between the cost of the optimal policy with and without uncertainty is called the real option value.
One of the benefits of incorporating randomness into dynamic models like IAMs is that they can account for this option value, since they find the optimal policy with respect to all possible future costs.
There is also a second option value associated with climate change which IAMs are weaker at modelling. This arises because sunk investments in technology and capital to abate emissions must be made today, but the true impact of climate change is uncertain. If over time more is learnt about the true impacts of climate change, then there is an option value associated with postponing investment in abatements until there is certainty about their necessity (later we see this option value captured in Guillerminet and Tol’s (2008) model).
2.3. Analytic Economic Models of Environmental Catastrophe
There is a large literature of analytic economic models which use optimal control theory to characterise policy for controlling pollution flows. Heal and Kristrom (2007) survey this literature. Some of these models consider the impacts of catastrophic risk on optimal pollution policies. I will review two such models, Clarke and Reed (1994) and Tsur and Zemel (1994). These two models illustrate the difference between viewing environmental catastrophe risk as a tipping point phenomenon or as a stochastic process.
In both models, the total stock of an emitted pollutant causes damage in two ways. Firstly, there is a known continuous damage, where an increased stock of pollutants causes increased environmental damage. Secondly, both models include environmental catastrophes which, if realised, will permanently reduce economic welfare. In Clarke and Reed there is a small probability that a catastrophe will occur at any time, and this probability increases as the pollution stock increases. In Tsur and Zemel the catastrophe is non-stochastic: there is an unknown threshold which, if ever exceeded, will trigger an environmental catastrophe.
In Clarke and Reed the optimal policy contains a unique equilibrium stock of pollutants (in both models there is natural sequestration of pollutants, so an equilibrium stock does not imply zero emissions). This equilibrium stock will be lower in the presence of catastrophic risk if the marginal hazard rate of increasing the pollution stock and the consequences of a catastrophe are together sufficiently large.
On the other hand, in Tsur and Zemel the equilibrium level of pollution stock is an interval rather than a unique value. The upper bound of this interval is the pollution stock which is optimal in the absence of catastrophic risk. If pollution stock starts below this interval, the pollution stock will increase until it reaches the bottom of this interval. If pollution starts inside the interval, then the optimal policy will leave pollution stock unchanged. If the pollution stock starts above the interval, pollution will decrease until it reaches the top of this interval.
The intuition behind this is that normally it would be optimal to increase the pollution stock if the marginal benefits outweigh the expected marginal costs (both in terms of marginal climate damage and marginal risk of catastrophe). However if we have previously experienced a stock at a particular level above this point and no catastrophe has occurred, then we have learnt that the catastrophe threshold must lie above this point. As a result, if pollution stock is below its highest historical level, the marginal costs of increasing the stock are reduced (since there is no risk of catastrophe). If pollution stock is above its highest historical level, the marginal costs of increasing the stock are increased (as all of the probability has been pushed into the tail of the distribution above this point). Because of this, different policies may be optimal depending on the pollution history.
However, if the risk is truly stochastic in nature (as in Clarke and Reed), then the fact that a catastrophe did not occur when a certain level of pollution was experienced in the past does not mean that a catastrophe will not happen if we do the same thing again.
This example shows that if the climate is non-stochastic and uncertain then there is potential to learn about the risks involved and better manage them. In this example the learning comes from historical experience of the climate, but it could also come from other sources such as scientific research (as we will see in Weitzman (2009)). On the other hand, if the climate is truly stochastic then the ability to learn is more limited.
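The contrast between the two views can be made concrete with a toy calculation, sketched below. Under a Tsur and Zemel style unknown threshold with a uniform prior, surviving a given stock level truncates the prior, so any stock below the highest level already experienced is known to be safe; under a Clarke and Reed style hazard, past survival carries no information. The uniform prior, the exponential hazard and all numbers are assumptions made for this illustration.

# Toy contrast: unknown-threshold risk vs stochastic hazard.
# All distributions and numbers are illustrative assumptions.
import numpy as np

s_max_prior = 10.0      # upper bound of a uniform prior on the threshold
s_history = 6.0         # highest pollution stock experienced so far
s_next = 5.0            # stock contemplated for next period

# Tipping point with unknown threshold (Tsur and Zemel style):
# surviving s_history means the threshold exceeds it, so any stock
# below s_history is now known to be safe.
if s_next <= s_history:
    p_tipping = 0.0
else:
    p_tipping = (s_next - s_history) / (s_max_prior - s_history)

# Truly stochastic hazard (Clarke and Reed style): past survival
# carries no information, the per-period risk depends only on the stock.
beta = 0.02
p_stochastic = 1.0 - np.exp(-beta * s_next)

print(f"unknown threshold: {p_tipping:.3f}, stochastic hazard: {p_stochastic:.3f}")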
Unfortunately, the usefulness of analytic models is limited by the simplifying assumptions necessary to keep them tractable. These models do not fully account for the opportunity costs of abatement in terms of capital accumulation and future growth. To model these dynamics, such models were combined with macroeconomic growth models in what is called ‘Integrated Assessment’. However, because these models are much more complex, they cannot be solved analytically.
2.4. Uncertainty in Integrated Assessment Modelling
Computational IAMs allow a greater degree of detail and realism to be incorporated into economic models. Multi-equation models of the climate and the economy can be used to make the simulations more similar to the real world. However, a more complex model means more assumptions, and as assumptions accumulate the model often becomes more opaque. In a complex model, it is often unclear which elements of the model are important, what the results depend on, and how the various assumptions in the model interact.
Broadly speaking, there are two types of assumptions in IAMs. Firstly, there are structural assumptions: the equations which govern the model. I will discuss these in further detail when I outline my model in Section 3. Secondly, there is the choice of the parameters in these equations. The choice of parameters in IAMs is a contentious issue. IAMs are highly sensitive to which parameters are chosen. Further, many of the parameters used in such a simulation are profoundly uncertain.
Many of the geophysical parameters can only be imperfectly measured. They may also pertain to mechanisms which are not scientifically well understood. Other parameters, such as levels of risk aversion and social discount rates, refer to social values which differ around the world. Currently, there is no agreed way to aggregate these social preferences across societies. Some parameters, such as productivity growth, make assumptions about variables hundreds of years into the future which cannot be meaningfully estimated. Lastly, many of the parameters may themselves be endogenous to the choice of climate policy. For example, changes over time in the cost of abatement or the emissions intensity of output depend strongly on the incentives to innovate in these areas.
It is important that parameters are chosen as accurately as possible in an objective and transparent manner. Often, the policy recommendation that arises from these models can be completely changed with a small tweak of the parameters.
For example, compare the outcomes of the IAMs presented by Nordhaus (2008) and Stern (2007). The difference between the policy recommendations of these two authors is huge. Stern’s optimal policy advocates abating 60% of emissions by 2050, whereas Nordhaus’ policy abates only 25%. Both authors use similar IAMs. This large difference arises because of differences in two key parameters: the discount rate and the coefficient of risk aversion. I discuss these choices in more detail in Section 3.1.
Nordhaus (2008) and many IAMs (but notably not Stern (2007)) come to the key conclusion that the optimal policy involves a so-called ‘policy ramp’. That is to say, the optimal policy involves modest cuts in greenhouse gas emissions in the short term, followed by more drastic ones in the medium and long term (Nordhaus, 2008). The intuition behind this is that abating emissions today has a larger opportunity cost, since it costs us a larger proportion of our output and prevents us from investing in capital which will help us grow in the future. In the future, when society is richer, paying for emissions abatements will be relatively less onerous. This ‘policy ramp’ is in contrast to the policies advocated by many environmentalists and scientists, who call for sharp cuts in emissions today, failing to consider the opportunity cost of abating greenhouse gasses.
IAMs currently used by economists are becoming increasingly complicated. I base my model on one of the simplest IAMs, Nordhaus’ DICE-2007 model. Other IAMs often include extra details such as regional disaggregation (RICE-2010 Model, Nordhaus 2010), details of industrial structures (Kemfert 2002), or detailed modelling of each individual cost posed by climate change (FUND Model, Tol 2003). These models may be useful for studying or clarifying individual issues such as international negotiations, industry policies or the distribution of the impacts of climate change. However, it is important that big-picture policy models such as DICE be parsimonious, so that it is clear what parts of the model are driving the conclusions and policy outcomes.
The analysis of uncertainty in many IAMs is usually limited to simulating different scenarios and using sensitivity analysis. Sensitivity analysis involves testing the model with different parameter values and observing how sensitive the optimal solution is to these changes. Some authors, such as Stern (2006) and Hope (2002), try to account for uncertainty using Monte Carlo analysis. They run their optimisations independently very many times, each time with different parameters, and generate a continuous distribution of outcomes. Although this technique accounts for some of the impact of uncertainty, it cannot account for all of it. For example, since each optimisation runs independently with different fixed parameters, it does not account for the impact of risk aversion.
Assessing the impact of risk aversion involves optimising the expectation over many scenarios. This is the approach that Tol’s FUND model takes. It generates a smaller number of realisations and then finds an emissions trajectory which maximises the weighted average utility over all of the samples.
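The sketch below contrasts the two approaches on a toy problem: plain Monte Carlo evaluates one fixed policy separately under each parameter draw, while an expected-utility optimisation chooses a single policy against all draws at once. The damage function, log utility and parameter distribution are invented for the example and bear no relation to FUND or any other published IAM.

# Toy contrast: Monte Carlo over a fixed policy vs optimising
# expected utility across draws. All functions are invented for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)
damage_coefs = rng.lognormal(mean=-1.0, sigma=0.8, size=1000)  # uncertain damages

def welfare(abatement, d):
    consumption = 1.0 - 0.05 * abatement**2 - d * (1.0 - abatement)
    return np.log(np.maximum(consumption, 1e-9))  # log utility (risk averse)

# Monte Carlo: distribution of outcomes for one fixed policy.
outcomes = [welfare(0.3, d) for d in damage_coefs]

# Expectation optimisation: choose the policy maximising mean utility.
grid = np.linspace(0.0, 1.0, 101)
best = max(grid, key=lambda a: np.mean(welfare(a, damage_coefs)))
print(f"expected-utility-maximising abatement: {best:.2f}")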
Tol’s approach accounts for risk aversion. However, its weakness is that it is not dynamic. It cannot account for the ability of agents to learn over time, inferring from past climate damages which realisations of the parameters are more likely. If agents can learn over time, then they can adjust their behaviour accordingly, improving future welfare.
Another problem with these models is that they only consider uncertainty in terms of a continuous distribution of outcomes. There is no discrete probability of a climate catastrophe occurring. There are some models which do incorporate explicit catastrophe risks. Keller et al. (2004) incorporate a threshold for overturning the Thermohaline circulation in an older version of Nordhaus’ DICE model. Guillerminet and Tol (2008) produce a decision tree model, where every period a decision is made regarding whether to undertake a drastic regime of greenhouse gas abatements. The structure of this model effectively captures the learning and dynamic decision making process associated with a catastrophic tipping point. Both models reach the similar conclusion that the existence of catastrophic risk significantly brings forward the timing of the optimal policy (in Keller et al. it also increases policy intensity). They find this is especially true when the parameters controlling the likelihood and impact of the catastrophe are large or the agent is particularly risk averse. Guillerminet and Tol also find that once a catastrophe becomes inevitable, the incentive to reduce emissions is significantly reduced.
Both of the previous models include non-stochastic but uncertain tipping points rather than truly stochastic catastrophes. On the other hand, Baranzini et al. (2003) model stochastic catastrophes. Baranzini et al. produce a real option model to estimate the benefits of undertaking global warming policy. They model catastrophes as a Poisson process. One weakness of their approach is that they do not link the probability of catastrophe to the degree of global warming. They simply say that if no action is taken then there will be random catastrophes, and once action is undertaken there will not. Baranzini et al. estimate that if catastrophes (with a $100 billion annual expected cost) are modelled, then there is a 72% probability that the expected benefits of undertaking emissions mitigation will exceed the expected costs. If such catastrophes are excluded, then there is only a 16% chance that the expected benefits will exceed the expected costs.
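A Poisson catastrophe process of the kind Baranzini et al. assume is straightforward to simulate: with a constant arrival rate lambda per year, the probability of at least one catastrophe within n years is 1 - exp(-lambda*n) regardless of the climate state, which is exactly the weakness noted above. The arrival rate below is an arbitrary illustration, not their calibration.

# Sketch of a Poisson catastrophe process with a constant arrival
# rate, as in Baranzini et al. (2003). The rate is an arbitrary
# illustration, not their calibration.
import numpy as np

rng = np.random.default_rng(1)
lam, horizon = 0.005, 100            # arrivals per year, years simulated

arrivals = rng.poisson(lam, size=(10000, horizon))  # 10,000 sample paths
p_at_least_one = np.mean(arrivals.sum(axis=1) > 0)

print(f"simulated P(catastrophe within {horizon} years) = {p_at_least_one:.3f}")
print(f"analytic  1 - exp(-lam*n)                       = {1 - np.exp(-lam * horizon):.3f}")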
2.5. Fat Tailed Uncertainty and the Dismal Theorem
Weitzman (2009) argues that the uncertainty in the tails of the probability distributions which parameterise IAMs can be so large that it dominates all other aspects of the analysis. He demonstrates this with what he calls ‘The Dismal Theorem’, which I will briefly summarise:
He begins with a two-period model. In between these two periods there are climate-related damages which will impact on welfare. This damage has a known distribution, but unknown parameters. Starting from non-informative prior beliefs, we can learn about these parameters by conducting scientific studies and updating our beliefs in a Bayesian manner. The rate of learning is slow, and so the meta-distribution of the damages is fat tailed (this is the distribution of the damages after the posterior distribution of the unknown parameter is incorporated). The expected disutility of the uncertain damage becomes arbitrarily large, because utility tends to negative infinity as consumption approaches zero (for a constant relative risk aversion utility function).
The implication is that under these conditions there is an unbounded willingness to pay to avoid the risk of these climatic damages.
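The mechanics behind this result can be glimpsed numerically: with CRRA utility and a fat-tailed (here Student-t) distribution for log consumption, the sample mean of utility does not settle down as the sample grows, because the true expectation is infinite. The parameter choices below are arbitrary and serve only to exhibit the divergence Weitzman describes.

# Numerical glimpse of the Dismal Theorem: CRRA expected utility
# fails to converge when log consumption is fat tailed. Parameters
# are arbitrary, chosen only to exhibit the divergence.
import numpy as np

rng = np.random.default_rng(2)
eta = 2.0                                   # CRRA coefficient > 1

def crra(c):
    return c ** (1 - eta) / (1 - eta)       # utility -> -inf as c -> 0

for n in (10**3, 10**5, 10**7):
    log_c = rng.standard_t(df=3, size=n)    # fat-tailed log consumption
    print(f"n = {n:>8}: sample mean utility = {np.mean(crra(np.exp(log_c))):.1f}")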
Weitzman’s model combines the two types of uncertainty I have been discussing. There is a stochastic element, in the sense that the climate damage between the two periods is randomly distributed. There is also a non-stochastic but unknown element: the parameters which govern the distribution of the climate damages. What Weitzman shows is that we can never learn the non-stochastic parameters fast enough to keep the variance of the stochastic damage finite.
Nordhaus (2009) criticises Weitzman’s paper because it leans heavily on the asymptotic behaviour of constant relative risk aversion utility functions as consumption goes to zero. Nordhaus argues that this is more a convenient assumption than a realistic description of society’s attitudes to risk. Further, he argues that the theorem can only be applied in situations where there is actually a risk that consumption will be near zero. He discusses how catastrophic climate change may not be one of these cases, because even under the most catastrophic scenario there is no risk of extinction.
Weitzman argues that the implication of the Dismal Theorem is that there should be a burden of proof on those who make policy prescriptions using IAMs to show that this tail behaviour is not relevant to their policy conclusions. This is one of my motivations for building an IAM which incorporates the risk of catastrophes.
Anthoff and Tol (2011) discuss the implications of the Dismal Theorem for their particular integrated assessment model, FUND. In FUND there are 150 parameters which are randomly generated during Monte Carlo analysis. Randomly generating all these parameters, they cannot statistically reject the hypothesis that the distributions of the damages in FUND are fat tailed. In their model this can lead to situations where the effective discount rate for climate change is negative and the problem is unbounded. Anthoff and Tol argue that the Dismal Theorem implies that another welfare criterion is needed. They re-solve the FUND model with an objective to minimise the thickness of the tail of the distribution of consumption (a ‘minimax’ policy). The optimal policy they find when minimising tail thickness is of a similar order of magnitude to the one found when the objective is to maximise consumption. As such, they argue the Dismal Theorem does not destroy the applicability of IAMs to climate change.
2.6. Implementing Emissions Abatements
Most IAMs assume a benevolent central planner who can perfectly control the world’s emissions and does so to maximise global welfare in a utilitarian manner. In practice, of course, this is not how the world works. This is demonstrated by global policy makers, who failed to come to a binding agreement to limit the world’s greenhouse gasses in Copenhagen and look unlikely to do so in the near future.
In this regard, it is important to see the limitations of integrated assessment. IAMs should be seen as a baseline against which global policies can be compared. Evidence from regionalised IAMs (Tol 2002, Nordhaus 2010) suggests that climate change will impact different countries in different ways. Those who are most at risk are often not those who contribute the most to global emissions. This has led Tol (2002) to comment that climate change is “essentially a problem of distributional justice”. IAMs find the most efficient trajectory of emissions. They cannot find policies which are feasible for all parties at international negotiations. Nor can they tell us which policies are philosophically just or unjust. Yet today these are the largest issues standing in the way of a global agreement on greenhouse emissions.
A different policy problem which IAMs cannot settle is the issue of deciding which policies are best for reducing greenhouse gas emissions within nation states. Most of the economic debate in this area centres on Weitzman’s (1974) theory of prices versus quantities. In the absence of uncertainty, it is equivalent to regulate pollution by either setting a price on emissions or a cap on their quantity. However, when policy makers are uncertain about firms’ costs of abatement, price regulation is preferred to quantity regulation if the slope of the marginal cost of abatement is greater than the slope of the marginal benefits. In the case of climate change, the marginal benefits of abatement are relatively flat because greenhouse gas is a stock good, and a marginal emission makes very little difference to the total stock. This would seem to suggest that prices (i.e. a carbon tax) are a better policy than quantities (a tradable rights scheme). This result has been confirmed using simulations of a modified DICE model (Pizer, 2003).
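Weitzman’s comparative advantage of prices over quantities is often written as Delta = sigma^2 (c' - b') / (2 c'^2), where c' is the slope of marginal abatement cost, b' the slope of marginal benefit, and sigma^2 the variance of the cost shock; when marginal benefits are nearly flat, as for a stock pollutant, Delta is positive and prices win. The numbers in the sketch below are invented to show the sign of the effect, not estimates.

# Worked illustration of Weitzman's prices-vs-quantities rule.
# delta > 0 favours a price instrument (tax); numbers are invented.
sigma2 = 4.0   # variance of the marginal abatement cost shock
c_slope = 2.0  # slope of marginal abatement cost
b_slope = 0.1  # slope of marginal benefit (nearly flat: stock pollutant)

delta = sigma2 * (c_slope - b_slope) / (2 * c_slope**2)
print(f"comparative advantage of prices over quantities: {delta:.2f}")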
In practice this argument is relatively academic. In many countries (including New Zealand), cap and trade schemes have been implemented after a carbon tax proved politically untenable. Subsequent research has shown that the advantage of prices over quantities can be minimised by using a hybrid tradable rights scheme with a cap on prices (Weitzman, 1978), or by using a scheme where emission rights are bankable over time (Fell et al., 2009).
In other countries, such as the United States, neither a carbon tax nor a tradable rights scheme is politically viable. As such, regulation and subsidies are the only tools remaining to policymakers. Performance standards and research and development subsidies are generally spurned by economists as inefficient ways to deal with such problems, because they assume that policy makers have perfect information about the costs of inventors and firms. Stern (2006), however, argues that such policies are vital in transitioning towards a low carbon economy because they can target specific areas to drive innovation. Stern argues this is important because of the path dependence of technological change, an issue that economic models have trouble quantifying.
3. Model
This section gives the details of a computational IAM. In this IAM, in every period there is a small probability of a catastrophe occurring, which causes a large reduction in output (GWP) in every period after the catastrophe occurs. The probability of the catastrophe occurring is an increasing function of greenhouse gas stocks. Once a catastrophe has occurred there is no further chance of another catastrophe occurring.
I will present the model in four steps, as this is the simplest way to show how the model works. It is also the order in which the model was developed: starting simple and building in complexity.
Firstly, I start with the basic Ramsey economic growth model on which the IAM is based. Secondly, I add onto this model a climate sector in which production of output causes greenhouse gas emissions. These emissions raise global temperature, which causes economic damage, which in turn reduces future output. Thirdly, I add an abatement industry in which policy makers choose to spend a fraction of output on abating emissions, reducing future global warming.
The first three parts of my model are based on the description of the DICE-2007 model given in A Question of Balance (Nordhaus, 2008), which I have followed closely (except in a few places where noted).
Finally, the novel part of the model is a random tipping point risk. There are multiple states of the world in each period, with and without climate catastrophes. The transition probabilities between these states are governed by the concentration of atmospheric greenhouse gasses.
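A minimal sketch of this state structure follows: each period has a ‘pre-catastrophe’ state and an absorbing ‘post-catastrophe’ state in which output is permanently cut by 25%, and the probability of transitioning into the latter rises with the greenhouse gas stock. The hazard form and its coefficient are placeholders, not the calibration developed in Section 3.4.

# Minimal sketch of the two-state structure: "pre" (no catastrophe
# yet) and "post" (absorbing, output permanently cut by 25%).
# The hazard form and coefficient are placeholders, not the
# calibration used in Section 3.4.
import numpy as np

def hazard(ghg_stock, beta=0.00005):
    """Per-period catastrophe probability, increasing in GHG stock."""
    return 1.0 - np.exp(-beta * ghg_stock)

def step(state, ghg_stock, rng):
    if state == "post":                 # absorbing: no second catastrophe
        return "post"
    return "post" if rng.random() < hazard(ghg_stock) else "pre"

def output(gross_output, state):
    return 0.75 * gross_output if state == "post" else gross_output

rng = np.random.default_rng(3)
state = "pre"
for t, stock in enumerate([800, 900, 1000, 1100]):   # GtC, illustrative
    state = step(state, stock, rng)
    print(f"period {t}: state = {state}, output factor = {output(1.0, state):.2f}")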
3.1. Ramsey Growth
We start with a basic Ramsey growth model. There is a single global decision maker who must choose the saving rate s(t) in each period into the future to maximise the present value of an individual’s future utility U(c). The saving rate s(t) is the fraction of income that is invested into future capital rather than consumed in each period. Thus investment I(t) = s(t)Y(t).
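A minimal sketch of the growth dynamics this implies, with Cobb-Douglas production and standard capital accumulation, is given below; the parameter values are illustrative assumptions rather than the DICE-2007 calibration.

# Sketch of the Ramsey growth core: choose a savings rate s, invest
# I = s*Y, accumulate capital. Parameter values are illustrative.
gamma, delta_k = 0.3, 0.1        # capital share, depreciation rate

def produce(A, K, L):
    return A * K**gamma * L**(1 - gamma)    # Cobb-Douglas output

def step(K, A, L, s):
    Y = produce(A, K, L)
    I = s * Y                                # investment = savings
    K_next = (1 - delta_k) * K + I
    return K_next, (1 - s) * Y / L           # capital, per-capita consumption

K, A, L = 100.0, 1.0, 10.0
for t in range(3):
    K, c = step(K, A, L, s=0.22)
    print(f"t={t}: K={K:.1f}, per-capita consumption={c:.2f}")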
The utility U(c) of a representative member of society depends on individual consumption c(t) = C(t)/L(t) in that period and is given by a Constant Relative Risk Aversion (CRRA) Von-Neumann