Delft University of Technology Faculty of Electrical Engineering, Mathematics and Computer Science Delft Institute of Applied Mathematics

An application of Real Options to the valuation of an investment in electrical networks

A thesis submitted to the Delft Institute of Applied Mathematics in partial fulfillment of the requirements

for the degree

MASTER OF SCIENCE in APPLIED MATHEMATICS

by

Nikita Moriakov

Delft, July 2012

MSc THESIS APPLIED MATHEMATICS

“An application of Real Options to the valuation of an investment in electrical networks”

NIKITA MORIAKOV

Delft University of Technology

Daily supervisor: Dr. D. Kurowicka
Responsible professor: Prof. Dr. F.H.J. Redig

Other thesis committee members:
MSc Bas Peppelman
Dr. S.A. Borovkova
Dr. Ir. J.H.M. Anderluh

July 2012 Delft, the Netherlands

Contents

1 Introduction
  1.1 Problem Statement
  1.2 Developments
  1.3 Network
  1.4 Costs and Profits
  1.5 Organization of the thesis

2 Tools for decision problems
  2.1 Small Example
  2.2 Decision Trees
    2.2.1 Utility Functions
    2.2.2 Decision tree for the small example
    2.2.3 Shortcomings
  2.3 Real Options Approach
    2.3.1 Idea of a Cash Flow and its Present Value
    2.3.2 Markets and Related Assumptions
    2.3.3 Capital Asset Pricing Model
    2.3.4 Classical Way of Real Option Valuation
    2.3.5 Classical Replicating Portfolio Example
    2.3.6 Real Options for the simplified example
    2.3.7 Pros and Cons of Real option analysis

3 Formal Description of the Problem and the Suggested Approach
  3.1 Market Return and the AEX Index
  3.2 Conditional probabilities of developments given history of AEX
  3.3 Developments
  3.4 Extensions, Investments
  3.5 Suggested Approach
  3.6 Dynamic Programming Valuation for the small example

4 Results
  4.1 Program
  4.2 Program Output and Discussion of the Results
    4.2.1 Area of Rijsenhout
    4.2.2 Houses

5 Conclusion

List of Figures

1.1 Area overview
1.2 Developments in the area
1.3 Developments and existing grid nodes
1.4 Initial investment
1.5 First extension
1.6 Second extension
1.7 Third extension
1.8 Housing extension

2.1 Utility function of risk-averse individual
2.2 Utility function of risk-seeking individual
2.3 Utility function of risk-neutral individual
2.4 Exponential utility function
2.5 Shortened version of the decision tree
2.6 Subtree for utilities
2.7 Feasible region in the Mean-σ plane, no short-selling
2.8 Indifference curves for risk-neutral investor in the Mean-σ plane
2.9 Feasible region and indifference curves in the Mean-σ plane
2.10 Feasible region and efficient frontier in the Mean-σ plane
2.11 Decision tree
2.12 Tree For Replicating Portfolio Example

3.1 Q-Q plot for annual log-returns
3.2 AEX history
3.3 Annual returns
3.4 Binomial approximation to annual rate of return
3.5 V0.6,0.4
3.6 V0.6,0.5
3.7 U0.7,0.5,0.1
3.8 U0.7,0.5,0.5
3.9 Transition diagram for 1st state variable
3.10 Transition diagram for 2nd state variable
3.11 Conditional distributions and market return
3.12 Tree Structure for Dynamic Programming

4.1 Node structure in the tree
4.2 Class hierarchy for tree.hh library
4.3 Possible data splitting for parallel algorithm

List of Tables

2.1 Probabilities of 'step up'
2.2 Cash flows of the project and of the twin security

3.1 Summary of normality tests for annual log-returns
3.2 Answers and estimated parameters
3.3 Description of the first state variable
3.4 Description of the second state variable
3.5 Possible transitions for 1st state variable with their costs and time
3.6 Possible transitions for 2nd state variable with their costs and time
3.7 Present Values at the end of the 1st year

4.1 Total number of nodes in the RSH tree
4.2 18 years, ext. step 4 years
4.3 16 years, ext. step 4 years
4.4 16 years, ext. step 2 years
4.5 18 years, ext. step 4 years, reduced probabilities
4.6 18 years, ext. step 4 years, alternative parameters for rm
4.7 16 years, ext. step 4 years, Monte Carlo approach

Chapter 1

Introduction

1.1 Problem Statement

Since the 1970s, steady development has been observed in the area of Haarlemmermeer. This has led to significant investments in the traditional 150/50/10 kV network of Liander, the company providing electricity in the area. Developments in the Haarlemmermeer will continue steadily; since the turn of the century, plans have been made to:

• develop quality housing west of Hoofddorp and Nieuw Vennep, including complexes with heat pumps;

• develop office and industrial sites east of Hoofddorp and Nieuw Vennep;

• create several large industrial logistics areas (Schiphol Logistics Park and ACT);

• build data centers in PrimAviera;

• build large-scale greenhouses near Rijsenhout; and

• build one or more wind farms along the A4 and A44 on the south side of the polder.

An overview of the area can be seen in Figure 1.1. Legislative regulations oblige Liander to provide the necessary capacity to consumers, but the current electrical network configuration in the area of Haarlemmermeer does not have the capacity required to meet future demand. Some investments are necessary, but their extent will depend on the extent to which the developments actually occur, which obviously cannot be predicted with absolute certainty. Liander's experts expect that some of the developments depend on general economic welfare, that is, their occurrence is more likely in good economic times. Other developments will not show a strong dependence on economic growth. To fulfill its obligations in the future, Liander can make some investments now, gaining construction time and possibly avoiding some penalties. On the other hand, investments are costly and unnecessary spending should be avoided.

Altogether this constitutes a nontrivial decision problem, and there are quite a few ways to approach it. The most 'classical' one relies on decision trees, where all possible scenarios are quantified and the preferred decision is the one that maximizes expected utility. The newer approach uses so-called 'Real Option Valuation', with roots in valuation techniques for financial assets, which can also be applied to decision problems. Our goal in this project is to explore this new technique and assess how appropriate it is for solving Liander's investment problem in the area of Haarlemmermeer. We will first briefly describe the developments that are expected to occur in this area and estimate the capacity that they may require. Then the available network and its possible extensions will be introduced. Moreover, we will discuss the costs of each possible extension of the network, the profits that Liander can expect if the developments occur and the electricity can be provided, as well as the penalties that the company will face when there is not enough capacity to fulfill all obligations.

Figure 1.1: Area overview

1.2 Developments

The following developments can be expected in the area of Haarlemmermeer:

• Houses
Approximately 10000 homes are planned to be built in the western flank of the Haarlemmermeer. Based on traditional average consumption, the total connection capacity of all expected houses is equal to 10 MVA.

• Offices There’re some existing office parks and ongoing developments in Beukenhorst Pa. of approximately 120000m2. That translates eventually to a net load of 6MVA.

• Wind farm
A wind farm of about 5 MW is planned to be built near the junction of the A4 and A44. The municipality has designated the southern part of the Haarlemmermeer as the location. It is not excluded that more windmills will appear in the future. The estimated capacity requirement for this wind farm is 15 MVA.

• Greenhouses
Approximately 200 hectares of net area with a planned average of 400 kW per hectare gives a total connected capacity of approximately 80 MVA to be realized by 2019; further growth is also possible. The municipality and the province have already started the construction of roads and two exits on the A4 to make the area accessible. The total estimated capacity requirement for the greenhouses is 80-120 MVA.

• Logistics area
The land company is currently developing The President (70 ha), where there is an order from Cisco for a connection capacity of 5 MVA. Other areas under development include Spoorzicht, Schiphol Logistics Park and Amsterdam Connecting Trade (jointly 94 ha). The total estimated capacity requirement is 15 MVA.

• Data center
The area is close to Amsterdam and Schiphol Airport, and hence close to the data backbone, which makes it interesting for further development of data centers. The fast growth of information requests and applications in recent years makes it quite likely that developments of this sort will occur here. Indeed, InterXion, a European provider of data centre services, already has two centers in the area and is investigating the possibility of a third one. Parthenon, another company designing, building and operating data centres, has far-reaching plans for a data center in the PrimAviera area. Customers of this kind generally require capacities in multiples of 10 MVA. Liander does not yet have any specific orders, but expectations are high that this kind of customer will be of importance in the near future. Data centers are huge customers characterized by short delivery times, so it is necessary to be ready in time. The total estimated capacity requirement of the data centers is 65 MVA.

Figures 1.2 and 1.3 show a map of the developments in the area and a schematic representation of the existing nodes of the power grid. As mentioned before, Liander's experts feel that the likelihood of the developments' occurrence depends on the health of the Dutch economy. One of the main benchmarks for 'general economic welfare' in the Netherlands is the AEX index, the main stock index in the Netherlands, calculated as a weighted combination of the prices of 'important' stocks. To model the dependence of the developments on general economic welfare we are going to introduce conditional probabilities of the developments given the recent history of the AEX. Moreover, given the AEX history together with our network configuration, we assume that the developments are independent of each other.

Figure 1.2: Developments in the area

We assume that the developments progress in steps. That is, some developments are built up to 100% immediately if they are developed at all. Others are developed to, say, 50% during the first 'good' two-year period and to 100% during the second two-year period. We postpone the formal description of our problem until the later chapters.

1.3 Network

The current capacity of the network is insufficient for the future developments. Only 42 MVA of residual capacity is available in the area of Rijsenhout, and the existing network sometimes operates at its limits. Liander is considering the following main investment plan, which is appealing for quite a few reasons. The main feature of the plan is to build a new 20 kV station in Rijsenhout (denoted by RSH in the maps) and 20 kV cables connecting it to the central hub in Haarlemmermeer (denoted by HMM in the maps). First of all, the initial investment has to be made. In fact, it is already being built, so we cannot postpone or cancel it:

• 2 x 80 MVA transformers are built in Haarlemmermeer parallel to the existing transformer fields; expansion of the 150 kV grid

• A new 20 kV station is built in Rijsenhout, comprising 24 km of cables from HMM to RSH and a 12 km 20 kV ring

• Cost is 12.5 M EUR, construction time is 2.5 years

Figure 1.3: Developments and existing grid nodes

• Additional capacity created in RSH: 20 MVA

In Figure 1.4 this initial investment is displayed schematically. HMM corresponds to the central hub in Haarlemmermeer, where the 2 transformers of 80 MVA are depicted with overlapping circles. RSH corresponds to the new grid node in the Rijsenhout area, with 3 lines from Haarlemmermeer to Rijsenhout indicating 3 cable lines of total length 24 km. The 20 kV ring is depicted in the figure as a loop intersecting the RSH grid node. After the initial constructions are complete, there is an option to perform the 1st network extension:

• A 2 km 20 kV ring is created in RSH

• Cost is 0.6 M EUR, construction time 1 year

• Additional capacity created in RSH: 20 MVA

This investment is schematically displayed in Figure 1.5, now with two rings in the RSH area. If the 1st extension has been completed and is insufficient, there is a possibility to perform the 2nd network extension:

• 8 km of cables from HMM to RSH, a 4 km 20 kV ring in RSH

• Cost is 2 M EUR, construction time 1 year

• Additional capacity created in RSH: 20 MVA

We can see this investment in Figure 1.6; now there is an additional cable line and a new ring in the RSH area. The following 3rd network extension can be performed as soon as the 2nd is constructed:

Figure 1.4: Initial investment

• 8 km of cables from HMM to RSH, a 6 km 20 kV ring in RSH

• Cost is 2.3 M EUR, construction time 1 year

• Additional capacity created in RSH: 20 MVA

Figure 1.7 presents a schematic picture of this investment. Since the houses are going to be located quite far from the other developments, the extension in this area is quite flexible and can be performed in parallel with all the options listed above:

• A 12 km 20 kV ring directly from the HMM station

• Cost is 2 M EUR, construction time 1 year

• Capacity created in the housing area: 20 MVA

Similarly, this investment is schematically displayed in Figure 1.8. The new ring can be seen on the left side of the picture. Since the option of building the housing extension is parallel to building everything else in Rijsenhout, the Rijsenhout part of the investment is displayed with dashed lines. An important feature of our extension plan is the independence of the houses, developed near Haarlemmermeer, from all the other developments, which are located in the area of RSH. The houses take power from HMM directly, whereas the other developments take power from RSH. Furthermore, the RSH area has much higher capacity demands; thus we will sometimes concentrate on the part of the problem for the RSH area when providing and discussing results.

Figure 1.5: First extension

To deal with a long-lasting lack of capacity we assume that if all extensions in RSH have already been built and we are still unable to provide the necessary capacity in that area, then an emergency plan will be put into action to install the necessary cables, transformers, etc. as soon as possible. As a result, we get:

• 130 MVA of additional capacity in RSH

• Cost is 30 M EUR, construction time < 1 year

This emergency plan is simply a description of the way the company would operate in such a situation, hence it is triggered automatically if we run into such problems.

1.4 Costs and Profits

In the end, decisions are made based on costs. In our problem there are the following costs and profits:

• Construction costs.

• Perpetual profit, coming from households and other consumers. Profits from consumers will be calculated as 1 MVA = 75000 EUR per year.

• A penalty in case we fail to provide the necessary capacity in time. The penalty is based on the real fines Liander has to pay in case of failure; failure to provide capacity costs 100 M EUR per year.

• Only 5% of the greenhouse capacity counts as consumption and gives us the standard consumer profit per MVA.

Figure 1.6: Second extension

• Windmills bring no profit at all.

These costs have been provided by Liander. The penalties here are the 'real' fines issued by the government. Sometimes a certain 'risk aversion' is already bundled with the penalties provided in so-called 'risk matrices'; this is not the case here. A particular feature of the profits is that, according to certain regulations, Liander does not receive any annual profit from producers of electricity for providing the capacity; the annual profit in our case comes solely from the consumers.

1.5 Organization of the thesis

This thesis is organized as follows. In the second chapter we will introduce the classical decision analysis tools and the theory of real options. They will be explained with the help of simple small examples, and we will discuss the pros and cons of these methods. In the third chapter we will formally state our problem and describe our suggested approach. Since the full problem is quite large, we will provide explanations only for the simplified small example. The same approach used to solve the small problem will be applied to obtain the results of this thesis, which will be discussed in the fourth chapter. To obtain these results, special software has been written. Finally, in the last chapter we present conclusions and suggest some improvements. The appendix contains the source code of the program.

Figure 1.7: Third extension

Figure 1.8: Housing extension

Chapter 2

Tools for decision problems

The problem we’ve described in the previous chapter can be treated as a classical decision analysis problem. We will explain this classical approach to decision problems. Since our main problem is way too big and complicated for illustrative purposes, we introduce here a simplified example, which we shall use in the thesis. Then the theory of decision trees will be briefly explained, and we will show how the problem presented in the simple example can be solved with the decision tree approach. In the second half of this chapter we will briefly present basics of real option valuation with few examples and discuss the pros and cons of this approach.

2.1 Small Example

Let us consider only one development and only one 'network extension' that can be performed to provide the necessary maximum capacity. For the sake of simplicity we assume that zero time is needed for both the extension and the development to be built. Building the extension costs 1.9. The penalty that is issued in case the development has occurred and the extension has not been built is 0.91 per year. The profit from having the extension while the development has been built is 1 per year. To reflect the dependence of the development on general economic welfare, we introduce the annual market rate of return rm, taking values 0.25 and −0.2 with equal probabilities. (The market rate of return should not be confused with the (full) market return Rm, which equals rm + 1 and takes values 1.25 and 0.8 with equal probabilities in this example.) Intuitively speaking, if the market rate of return takes value 0.25 then the economy has grown 25% this year (a 'good year'); if the market rate of return takes value −0.2 then the economy has not been doing well this year. Table 2.1 shows the conditional probabilities of the development's occurrence given the market behavior this year.

Market rate of return    Probability of development's occurrence
0.25                     0.01
−0.2                     0.005

Table 2.1: Probabilities of ‘step up’

The risk-free rate is rf = 0.02 in this example. We consider a two-year time horizon for this simple example.

2.2 Decision Trees

A decision tree is a decision analysis support tool that uses a tree-like structure to support the decision-making process. There are three types of nodes in a decision tree: decision nodes, chance (or uncertainty) nodes and terminal nodes. There is also a designated root node, which corresponds to 'now'.

• Decision nodes are nodes with outgoing edges corresponding to decisions that can be made by the decision maker.

• Chance nodes are nodes with outgoing edges corresponding to possible events that can occur.

• Terminal nodes are nodes that have no outgoing edges; they usually describe the utilities of the scenarios given by each branch of the tree.

To describe uncertainty, conditional probabilities are assigned to the edges leaving chance nodes, and the probability of a given scenario is determined as the product of these conditional probabilities along the path from the root node to the corresponding terminal node. An example of a decision tree can be seen in Figure 2.11. There we can clearly see the three types of nodes mentioned above. Decision nodes are enclosed in rectangles, chance nodes are enclosed in circles and terminal nodes are enclosed in triangles. To the scenario described by each branch of the tree a numerical value is assigned in the terminal node. This value often corresponds to the profit the decision maker will gain if the scenario occurs; more precisely, this value corresponds to the utility of this scenario. We will explain the notion of the utility function in the next subsection. To solve the decision problem described by the decision tree one has to calculate the expected utility of each decision, and the optimal decision is the one with maximum expected utility.

2.2.1 Utility Functions

A utility function basically translates numerical values (profits, losses) into the interval [0,1], taking into account the individual taste for risk. The canonical example used to explain utilities is as follows. A person is given the choice between two scenarios, one with a guaranteed payoff of $50. The other scenario is uncertain: the person receives $100 or nothing depending on the outcome of a fair coin flip. The expected payoff for both scenarios is $50, meaning that an individual who does not take risk into account would not differentiate between these two scenarios. However, individuals may have different risk attitudes. A person is said to be:

• risk-averse (or risk-avoiding) - if he or she would accept a certain payment of less than $50 (for example, $40), rather than taking the gamble and possibly receiving nothing.

• risk-neutral - if he or she does not distinguish between the two scenarios.

• risk-loving (or risk-seeking) - if the guaranteed payment must be more than $ 50 (for example, $ 60) to induce him or her to take the guaranteed option, rather than taking the gamble and possibly winning $ 100.

The average payoff of the gamble, known as its expected value, is $50. The dollar amount that the individual would accept instead of the bet is called the certainty equivalent, and the difference between the expected value and the certainty equivalent is called the risk premium. For risk-averse individuals the risk premium is positive, for risk-neutral persons it is zero, and for risk-loving individuals the risk premium is negative. To describe our preferences for risk, we choose a utility function of the payoffs. The utility of an uncertain outcome is then simply computed as the expectation of the utility. The utility function of a risk-averse individual is displayed in Figure 2.1; one can see that it is a concave function.

Figure 2.1: Utility function of risk-averse individual.

The utility function of a risk-seeking individual is displayed in Figure 2.2. This function is convex.

Figure 2.2: Utility function of risk-seeking individual.

The utility function of a risk-neutral individual, which is linear, is displayed in Figure 2.3.

Figure 2.3: Utility function of risk-neutral individual.

To find the utility of a gamble with uncertain outcomes, the utilities of the outcomes are computed and rolled back: if we have a discrete distribution of outcomes A1, ..., Ak with probabilities p1, ..., pk such that pi ≥ 0 for all i and ∑_{i=1}^{k} pi = 1, then the utility of this gamble is simply the expected utility of the outcomes,

U = ∑_{i=1}^{k} pi U(Ai).

The optimal strategy for a rational investor, according to the principle of expected utility maximization, is the one maximizing the expected utility. Of course, a practical question arises how to find this utility function. Utility functions are subjective by definition, hence to figure out this function it is necessary to ask the individual questions in order to determine his risk preferences. In reality, no one can say explicitly what his utility function is; thus the analyst typically assumes that the utility function of the individual belongs to a certain parametric family of functions. Commonly used exponential utility functions are of the form

u(x) = (exp(−(x − MinPayoff)/ρ) − 1) / (exp(−(MaxPayoff − MinPayoff)/ρ) − 1),   ρ real.

For example, a plot of the exponential utility function with MinPayoff = 0.5, MaxPayoff = 2.0 and ρ = 0.3 is presented in Figure 2.4. This could be the utility function of some risk-averse investor.

Figure 2.4: Exponential utility function
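As an illustration of the formula above, here is a minimal C++ sketch (not part of the thesis code) that evaluates the exponential utility with the same parameters as in Figure 2.4, MinPayoff = 0.5, MaxPayoff = 2.0 and ρ = 0.3; the function and variable names are ours:

// Exponential utility u(x) = (exp(-(x - MinPayoff)/rho) - 1)
//                            / (exp(-(MaxPayoff - MinPayoff)/rho) - 1).
#include <cmath>
#include <cstdio>

double exp_utility(double x, double minPay, double maxPay, double rho) {
    return (std::exp(-(x - minPay) / rho) - 1.0) /
           (std::exp(-(maxPay - minPay) / rho) - 1.0);
}

int main() {
    const double minPay = 0.5, maxPay = 2.0, rho = 0.3;
    // u is 0 at MinPayoff and 1 at MaxPayoff; concave for rho > 0 (risk-averse).
    for (double x = 0.5; x <= 2.0 + 1e-9; x += 0.25)
        std::printf("x = %.2f  u(x) = %.4f\n", x, exp_utility(x, minPay, maxPay, rho));
    return 0;
}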

In our example we shall take a simple linear utility, which corresponds to risk-neutral behavior. It is defined as

U(Payoff) = (Payoff − MinPayoff) / (MaxPayoff − MinPayoff).

Later on, we will introduce a new kind of utility function that is more natural in the world of so-called Mean-Variance theory.

2.2.2 Decision tree for the small example

The decision tree for the small example presented in Section 2.1 is shown in Figure 2.5, where nodes corresponding to market changes are omitted for the moment. The yellow-colored nodes labeled A, B, C, D, E correspond to moments when we are making the decision whether to build the extension or to wait. Bold edges labeled b/w correspond to our decision: b stands for 'Build extension' and w stands for 'Leave it as it is'. Red nodes at the right-most part of the tree are terminal nodes. The remaining (blue) nodes are uncertainty nodes, corresponding to how the development may progress. Thin dashed edges labeled d/n correspond to the possible changes: d stands for 'Development is being built' and n stands for 'Development remains in its current state'. Cash flows are generated at the end of the first and the second years. The values of these cash flows are the bold, underlined numbers above nodes B, C, D, E and to the right of the terminal nodes. These cash flows come from construction costs, penalties and profits. For instance, the value of the cash flow at node B is −0.9 = Profit per year − Cost per year = 1 − 1.9, since 1 is the profit from the customer and 1.9 is the construction cost of the extension.

In Figure 2.5 we omitted the uncertainty nodes for the market return, otherwise the tree would be too big. However, these nodes cannot be omitted when doing classical decision tree analysis. To show how the approach works, we will evaluate the part of the tree starting at node E with the classical approach. Notice that Figure 2.6 is precisely the part of the tree in Figure 2.5 starting from node E, with the uncertainty nodes for market returns included. The unlabeled (blue) nodes to the right of node E are uncertainty nodes, corresponding to how the market return may evolve. Initially it takes value 1 and moves to either 1.25 or 0.8 with equal probabilities at the end of the year; this describes the behavior of the 'market rate of return' rm from the previous section, taking values 0.25 and −0.2. Thin dashed edges connecting these nodes to the nodes labeled s, t, u, v correspond to the respective transitions; label p means that the market return takes value 1.25 and label m means that the market return takes value 0.8. Again, cash flows are generated at the end of the year, and the utilities of these cash flows are the bold, underlined numbers to the right of the terminal nodes. These cash flows come from construction costs, penalties and profits. For instance, the utility of the cash flow at node s1 is 0.526 = (−0.9 − (−1.9)) / (0 − (−1.9)), since −0.9 was the cash flow, the maximum payoff for the part of the tree descending from node E is 0 and the minimum payoff is −1.9. Of course, if we worked with the whole tree seen in Figure 2.5 we would take the maximum payoff to be 1 and the minimum payoff to be −1.9.

Figure 2.5: Shortened version of the decision tree.

We can now compute the expected utilities and find the optimal decision.

E(s) = 0.01 ⋅ 0.526 + 0.99 ⋅ 0 = 0.00526
E(t) = 0.005 ⋅ 0.526 + 0.995 ⋅ 0 = 0.00263
E(u) = 0.01 ⋅ 0.521 + 0.99 ⋅ 1 = 0.99521
E(v) = 0.005 ⋅ 0.521 + 0.995 ⋅ 1 = 0.9976

Hence E(Eb) = 0.5 ⋅ 0.00526 + 0.5 ⋅ 0.00263 = 0.003945 and E(Ew) = 0.5 ⋅ 0.99521 + 0.5 ⋅ 0.9976 = 0.9964. Thus we conclude that the expected utility of the decision to wait is 0.9964, while the expected utility of the decision to build the extension is 0.003945. Hence it is better to wait.
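The rollback above is easy to reproduce programmatically. The following is a minimal C++ sketch (it is not the thesis program; the node layout is taken from Figure 2.6 and the utilities from the text above):

// Expected-utility rollback for the subtree at node E.  Terminal utilities:
// 0.526 (build, development), 0 (build, nothing), 0.521 (wait, penalty), 1 (wait, nothing).
#include <cstdio>

int main() {
    // Conditional probabilities of the development's 'step up' from Table 2.1.
    const double pDevUp = 0.01, pDevDown = 0.005;

    // Chance nodes s, t (decision 'build') and u, v (decision 'wait').
    double Es = pDevUp   * 0.526 + (1 - pDevUp)   * 0.0;  // 0.00526
    double Et = pDevDown * 0.526 + (1 - pDevDown) * 0.0;  // 0.00263
    double Eu = pDevUp   * 0.521 + (1 - pDevUp)   * 1.0;  // 0.99521
    double Ev = pDevDown * 0.521 + (1 - pDevDown) * 1.0;  // ~0.9976

    // Market goes up or down with probability 0.5 each.
    double Ebuild = 0.5 * Es + 0.5 * Et;   // 0.003945
    double Ewait  = 0.5 * Eu + 0.5 * Ev;   // ~0.9964
    std::printf("E[build] = %.6f  E[wait] = %.6f -> %s\n",
                Ebuild, Ewait, Ewait > Ebuild ? "wait" : "build");
    return 0;
}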

Figure 2.6: Subtree for utilities.

2.2.3 Shortcomings

The decision tree is a classical approach often used in many decision problems. Applied to valuing investments, it is often criticized as a subjective approach [5]. In this context it is more convincing to believe that investors do not use their subjective utility when valuing a project, but rather rely on the 'taste for risk' used on the market. In typical investments it is really the 'market value of the project' that we aim to maximize, and thus we should adjust our risk tastes to the 'taste for risk' on the market. The real options approach to valuing investments solves the shortcomings of the classical decision trees in this respect.

2.3 Real Options Approach

An alternative valuation technique coming from the financial world is centered around the idea of the present value of a given sequence of cash flows. We shall first briefly recall what cash flows are and how their present value is determined, paying particular attention to the case of uncertain cash flows.

2.3.1 Idea of a Cash Flow and its Present Value

Any investment can in a sense be described as a cash flow sequence: the amounts of money that will flow to and from an investor over time. These cash flows are assumed to occur at specific moments in time. Of course, these cash flows can depend on our decisions; they can be deterministic or uncertain. The idea of discounting deterministic cash flows is motivated by a trivial example: if the annual risk-free rate rf is 10% and a person receives $100 today, then he will have $110 at the end of the year simply by investing in risk-free bonds. Thus, inversely, we can say that $110 at

the end of the year has a present value of $100 today with the risk-free rate rf = 10%. The process of finding the present value of a future cash flow in this way is called discounting, and rf here is the discount rate. In general, if we are given deterministic cash flows FCF1, ..., FCFn at the end of years 1, ..., n, then the present value of this sequence is determined as

PV = ∑_{i=1}^{n} FCFi / (1 + rf)^i.

If these cash flows are certain, then the formula above is in fact a consequence of the so-called no-arbitrage argument. In general, any project valuation starts with an estimation of costs (costs at the development phase, costs at the production phase, etc.) and net revenues (cash flows) over the project life. However, these cash flows often cannot be estimated with absolute certainty: the product might not be as successful as originally thought, prices of supplies might change, etc. Thus it is crucial to be able to find the present value of uncertain cash flows. Valuing uncertain cash flows is, however, more complicated. A natural attempt is to find the Present Value (PV) given by

PV = ∑_{j=1}^{n} E(FCFj) / (1 + r)^j

for a project that lasts n time periods, has discount rate r per time period and has (random) cash flows FCFj at the end of time period j. The problem now is in assessing this discount rate r. Speaking in terms of risk, a higher discounting rate generally indicates higher risk: investors typically require more return on investments bearing higher risks; this is again an example of risk aversion. A naive approach suggests using the risk-free rate; another candidate is the 'cost of capital' of a company. The cost of capital represents the cost of financing a company's activities, which is normally done through some combination of debt and equity. Debt and equity carry different costs of capital, so some form of average is needed. The weighted average cost of capital (WACC) of the different cost components of issuing debt, preferred stock and common equity is

WACC = Wd Cd (1 − t) + Wp Cp + We Ce

where W represents the respective weights; C is the cost corresponding to debt (d), preferred stock (p) and common equity (e); t is the effective corporate tax rate. One can immediately see the weakness of this approach. Why should one use the weighted average cost of capital to discount a new investment that has potentially higher risk? What about the distinction between the risk associated with the market and the risk associated with some project-specific peculiarities not connected with the market? Informally speaking, in the absence of better approximations WACC can be used as a proxy for the risk-adjusted discount rate related to project investment costs, but it is not recommended in general real option analysis. For more detailed discussions about WACC and its general inapplicability see [4], [2]. Despite this obvious complication, the idea of evaluating the Net Present Value of the project remains the core methodology for valuation. Of course, one needs to find an appropriate discount rate. Furthermore, it is not immediately clear how to value managerial

flexibility with NPV analysis. Besides, even a positive NPV does not necessarily mean that we should commit to the project right now; perhaps waiting a bit can improve the project value even more. Still, some form of NPV analysis is always performed when doing Real Option Valuation as well, and the uncertain cash flows we are valuing in this situation are not just uncertain, but in fact explicitly dependent on our decisions (which are uncertain too; after all, we will often base our investments on the data we acquire about uncertain events).
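Since the deterministic discounting formula above is used repeatedly later on, a minimal C++ sketch of it may be helpful; the cash-flow values in main are invented purely for illustration:

// Present value of deterministic cash flows: PV = sum_i FCF_i / (1 + r_f)^i.
#include <cmath>
#include <cstdio>
#include <vector>

double present_value(const std::vector<double>& fcf, double rate) {
    double pv = 0.0;
    for (std::size_t i = 0; i < fcf.size(); ++i)
        pv += fcf[i] / std::pow(1.0 + rate, static_cast<double>(i + 1));  // cash flow at year i+1
    return pv;
}

int main() {
    const double rf = 0.02;                     // risk-free rate of the small example
    std::vector<double> fcf = {1.0, 1.0, 1.0};  // hypothetical cash flows at years 1..3
    std::printf("PV = %.4f\n", present_value(fcf, rf));
    return 0;
}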

2.3.2 Markets and Related Assumptions

Before proceeding further with the theory we will fix some commonly used definitions and state the assumptions used in the theory.

Definition 2.3.1. We say that a market is complete if the complete set of possible gambles on future states of the world can be constructed with existing assets. Every agent is able to exchange every good, directly or indirectly, with every other agent without transaction costs.

This assumption is a strong one, and comes from the financial markets, where it is quite natural and allows one to formally prove the strongest results. Unfortunately, it might be hard to justify when dealing with real assets and related markets. In fact, even if we assume that such a complete set of possible gambles exists, we do not know in advance what this set might be for our project and how much it might cost.

Definition 2.3.2. We say that a market is arbitrage-free if there is no opportunity for profitable arbitrage.

That is, there’s no opportunity to buy something for one price and immediately sell it for a higher price without taking any risk. The two definitions above - which are themselves assumptions about the market we’re dealing with - are extremely important and lie at the foundation of many classical analytic tools like NPV calculation and Real Option Valuation. The following commonly known ‘Law’ is actually a consequence of the two assumptions above.

Definition 2.3.3. Law of one price says that in arbitrage-free markets two assets that have exactly the same payouts in all possible states of the world must have exactly the same price (or value).

One of the concepts broadly used in Real Option Analysis is the idea of a replicating portfolio.

Definition 2.3.4. A replicating portfolio for the underlying risky asset is a set of securities that replicates the payoffs of the underlying asset in every state of the world.

In complete markets a replicating portfolio always exists (think, for instance, of making bets on the future states of the underlying asset); finding it, on the other hand, might be difficult for real assets.

As we have already said, at the foundation of Real Option Valuation techniques lie the assumptions of market completeness and absence of arbitrage. These assumptions 'implicitly' appear in the typical step of creating a replicating portfolio: in the absence of market completeness it is not guaranteed that a replicating portfolio even exists, and in the absence of the no-arbitrage assumption we cannot say that the value of our option is the present value of the replicating portfolio, even if we have found one. It turns out that a proper Net Present Value calculation with Discounted Cash Flow analysis also relies on these assumptions.

2.3.3 Capital Asset Pricing Model

We have already discussed that it is very challenging to find the proper discounting rate for investments with uncertain outcomes. This issue is not limited, of course, to investments in electrical networks; rather, it is a problem for general investments in capital assets. In this section we are going to describe the Capital Asset Pricing Model (CAPM), which provides a proper discounting rate, taking into account the relation with the market. This section is mostly based on the corresponding paragraph of [1]; the fundamentals of the CAPM can also be found in the original papers [8] and [7].

The CAPM is based upon Mean-Variance portfolio theory, which was originally formulated by Markowitz. It assumes the existence of a number of investment opportunities, with returns that are random variables possibly correlated with each other. Investment opportunities can be combined into portfolios. So suppose there are n assets on the market with certain initial prices Y1, ..., Yn and uncertain future prices W1, ..., Wn. Then the rate of return on asset i is a random variable

ri = Wi/Yi − 1.

The expectation of the rate of return on asset i will be denoted by r̄i, and its standard deviation by σi. The covariance between the rates of return on assets i and j will be denoted by σi,j. Thus every asset i with expected rate of return r̄i and standard deviation σi corresponds to the point (r̄i, σi) in the Mean-Standard Deviation plane (of course, multiple assets might correspond to the same point).

We can also combine assets into portfolios by choosing real weights w1, ..., wn such that w1 + ... + wn = 1. If the weight wi is positive, then we invest a fraction wi of our total wealth in asset i; if the weight wi is negative, we short sell asset i. Short selling is the practice of borrowing an asset, selling it at the current price and later buying it back at the new price so as to return it; in reality short selling is not always allowed.

It can be shown that the rate of return on the portfolio is ∑_{i=1}^{n} wi ri, with expectation ∑_{i=1}^{n} wi r̄i and variance ∑_{i,j=1}^{n} wi wj σi,j (a small numerical sketch of these formulas is given after the list below).

Since assets can be combined into portfolios, it makes sense to consider the region in the r̄ − σ plane given by all possible portfolios. This region is called the feasible region; the most important properties of the feasible region are

• being solid - that is, this planar region has no 'holes',

30 • being left-convex - that is, given any two assets in the feasible region, the straight segment connecting them does not cross the left boundary of the feasible region.
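As announced above, here is a small numerical illustration of the portfolio formulas (expected return ∑ wi r̄i and variance ∑ wi wj σi,j); the two-asset weights, means and covariance matrix are invented for illustration only:

// Portfolio mean and variance for given weights and a covariance matrix.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> w    = {0.6, 0.4};     // portfolio weights, summing to 1
    std::vector<double> mean = {0.08, 0.12};   // expected rates of return
    // Covariance matrix sigma_{i,j} of the rates of return.
    double cov[2][2] = {{0.04, 0.01},
                        {0.01, 0.09}};

    double portMean = 0.0, portVar = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i) {
        portMean += w[i] * mean[i];
        for (std::size_t j = 0; j < w.size(); ++j)
            portVar += w[i] * w[j] * cov[i][j];
    }
    std::printf("portfolio mean = %.4f  variance = %.4f\n", portMean, portVar);
    return 0;
}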

An example of what the feasible region may look like is shown in Figure 2.7; no short-selling is allowed in this picture.

Figure 2.7: Feasible region in the Mean-σ plane, no short-selling.

Any investor has a utility for an uncertain return, namely if ri is the uncertain rate of return on asset i with expectation E(ri) = r̄i and standard deviation σ(ri) = σi, then we assume that the utility is U(i) = f(r̄i, σi). Thus it follows that investors do not distinguish between assets with the same mean and variance of return, and there may be many assets with the same utility for the investor but different combinations of mean and variance of returns. We will call the curves consisting of points in the r̄ − σ plane that have the same utility to the investor indifference curves. We stress here that this utility function is different from the utilities discussed in Section 2.2.1 about classical decision analysis tools. However, it might be possible to express certain forms of risk preference using both of them. For instance, a risk-neutral investor would have a linear utility function as seen in Figure 2.3, and it is also possible to choose the function f independent of the second variable σi. Then the indifference curves in the r̄ − σ plane for this investor would look like the horizontal dashed lines in Figure 2.8, where (typically) U1 > U2 > U3. Naturally, different investors might have different utility functions f. However, investors on the market do exhibit two important properties:

• Risk-Aversion
Risk-aversion here means that, given two investment opportunities with the same expected rate of return and different standard deviations of the return, the investor would choose the investment with the lower standard deviation of the return. That is, the investor tries to avoid risk.

Figure 2.8: Indifference curves for risk-neutral investor in the Mean-σ plane.

• Nonsatiation
Nonsatiation here means that, given two investment opportunities with the same standard deviation of the return and different expectations of the return, the investor would choose the investment with the higher expected return. That is, the investor chooses 'more over less'.

It makes sense to have a look at how indifference curves in the r̄ − σ plane may look for such investors. In Figure 2.9 we can see the feasible region with indifference curves, which are the dashed lines labeled U1, U2, U3; in the picture we have U1 > U2 > U3. The points x, y, z in the figure correspond to some portfolios; in the picture x is better than z because of nonsatiation and y is better than z because of risk aversion.

Since all investors are risk-averse and prefer 'more to less', we do not need to consider the whole feasible set when seeking optimal investments. Under these assumptions about investor utilities there is a so-called efficient frontier, which is precisely the upper-left boundary of the feasible region. In Figure 2.10 it is given by the bold red line. Portfolios in this set are called efficient. For instance, portfolios x, y in Figure 2.9 are efficient while portfolio z is not.

Given these important assumptions, two important theorems can be proved, which serve as a basis for the pricing form of the model. First of all, an important feature of the efficient frontier is that it is a simple plane curve, and thus can easily be 'parameterized' by choosing two points on the frontier and taking all portfolios (with short-selling allowed) of these two assets. This is the main idea of the proof of the theorem below:

32 Figure 2.9: Feasible region and indifference curves in the Mean-σ plane.

Theorem 2.3.1 (Two-fund theorem). Two efficient funds or portfolios can be established such that any efficient portfolio can be duplicated in terms of mean and variance as a combination of these two. In other words, all investors seeking efficient portfolios need only invest in a combination of these two.

Note that these two funds can be found to be the same for all investors on the market. However, different investors will likely choose different combinations of these two portfolios according to their tastes for risk. If we have a risk-free bond - for instance, when we can borrow and lend from/to the bank at the risk-free rate - then we obtain another theorem, which is

Theorem 2.3.2 (One-fund theorem). There is a single fund F of risky assets such that any efficient portfolio can be constructed as a combination of the fund F and the risk-free asset.

These two theorems have important consequences, namely the way to price assets using CAPM:

Theorem 2.3.3 (Pricing formula). If the market portfolio M is efficient, then the expected rate of return r̄i on asset i equals

r̄i = rf + βi(r̄M − rf)

where

βi = Cov(ri, rM) / Var(rM).

33 Figure 2.10: Feasible region and efficient frontier in the Mean-σ plane

In this formula the term βi(r̄M − rf) is called the risk premium; it is motivated by the fact that this term corresponds to the additional expected return for taking some risk. βi is also an important characteristic of the asset; the values of the betas of common stocks on the market are typically available. Notice that if Cov(ri, rM) = 0, hence when the return on the investment is independent of the market, then the CAPM tells us that the expected rate of return equals the risk-free rate. So, even if the risk on the asset is very high, there is still no risk premium. The intuitive reason for this is the fact that if we take a large number of such assets, each uncorrelated with one another and with the market, then the resulting variance of the portfolio becomes small. For example, if all these assets have the same variance of return σ² and we take an equally-weighted portfolio of them, then the variance of the return of this portfolio would be

∑_{i,j=1}^{n} wi wj σi,j = ∑_{i,j=1}^{n} (1/n)(1/n) σ² δij = σ²/n,

which gets small when the number of such assets n is large. In particular, if the return on the investment is certain - that is, we are considering certain cash flows - then the CAPM gives rf as the proper discounting rate, which agrees with the conventional practice of discounting certain cash flows at the risk-free rate. In the case when Cov(ri, rM) < 0, we get that r̄i < rf, so that the expected rate of return on the asset is even smaller than the risk-free rate. How can such an asset be of value? The answer is that such an asset can reduce the resulting variance of a portfolio (since it has negative correlation with the market). In our project we will use the certainty-equivalent form of the CAPM, which suggests valuing future cash flows by

PV(FCF) = (E(FCF) − λ ⋅ Cov(rm, FCF)) / (1 + rf)

34 with λ - ‘market price of the risk’ - given by

λ = (E(rm) − rf) / Var(rm).
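To make the certainty-equivalent formula concrete, the following minimal C++ sketch applies it to the two-point market distribution of the small example (rm = 0.25 or −0.2 with probability 0.5 each, rf = 0.02); the one-period cash flow used here is hypothetical:

// Certainty-equivalent CAPM valuation of a single uncertain cash flow.
#include <cstdio>

int main() {
    const double p = 0.5, rmUp = 0.25, rmDown = -0.20, rf = 0.02;
    const double fcfUp = 10.0, fcfDown = 4.0;   // hypothetical payoffs one year from now

    double Erm   = p * rmUp + (1 - p) * rmDown;                  // 0.025
    double VarRm = p * rmUp * rmUp + (1 - p) * rmDown * rmDown
                   - Erm * Erm;                                  // 0.050625
    double lambda = (Erm - rf) / VarRm;                          // market price of risk

    double Efcf = p * fcfUp + (1 - p) * fcfDown;
    double cov  = p * (rmUp - Erm) * (fcfUp - Efcf)
                + (1 - p) * (rmDown - Erm) * (fcfDown - Efcf);   // Cov(r_m, FCF)

    double pv = (Efcf - lambda * cov) / (1.0 + rf);              // certainty-equivalent PV
    std::printf("lambda = %.4f  PV = %.4f\n", lambda, pv);
    return 0;
}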

We postpone the discussions about CAPM for a while to have a closer look at the real options.

2.3.4 Classical Way of Real Option Valuation

First of all, let us recall that a common definition (see e.g. [2]) of a real option is:

Definition 2.3.5. A real option is the right, but not the obligation, to take an action (e.g. deferring, expanding, contracting or abandoning) at a predetermined cost, called the exercise price, for a predetermined period of time - the life of the option.

There are a number of ways to do Real Option Valuation. Two common approaches are

• Black-Scholes approach;

• Replicating portfolio approach;

The Black-Scholes approach relies on the classical Black-Scholes formulae for the value of a European call option and its variations. Its key advantage is its simplicity: it gives an analytic answer given the necessary parameters (volatility, life of the option); the disadvantages for general-purpose Real Option Analysis are numerous and we are not going to discuss them here. The replicating portfolio approach relies on the fact that market completeness implies that payoffs can be replicated with some combination of securities, while the law of one price then states that the present value of the investment equals the current price of the replicating portfolio. Let us briefly see how real option analysis with a replicating portfolio is performed for a 'textbook' example.
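Before turning to that example, and purely as an aside, the Black-Scholes formula mentioned above is easy to state and evaluate. The following self-contained C++ sketch (not used anywhere else in this thesis) prices a European call; the parameter values in main are invented for illustration:

// Black-Scholes price of a European call:
// C = S*N(d1) - K*exp(-r*T)*N(d2),
// d1 = (ln(S/K) + (r + 0.5*sigma^2)*T) / (sigma*sqrt(T)), d2 = d1 - sigma*sqrt(T).
#include <cmath>
#include <cstdio>

double norm_cdf(double x) {                       // standard normal CDF
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

double black_scholes_call(double S, double K, double r, double sigma, double T) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

int main() {
    // Illustrative numbers only: spot 100, strike 100, 8% rate, 20% volatility, 1 year.
    std::printf("call value = %.4f\n", black_scholes_call(100.0, 100.0, 0.08, 0.20, 1.0));
    return 0;
}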

2.3.5 Classical Replicating Portfolio Example

You have to decide right now whether to make an investment of $115 into the project next year, which will produce uncertain payoffs of $170 and $65 with a 50-50 chance. The risk-free rate rf is 8%. Alternatively, you can wait until the end of the year, when the uncertainty is resolved. Suppose we have found a twin security that has a market price of $20 per share now and whose payoffs at the end of the year are $34 and $13 in the up and down states respectively, see Table 2.2. Assuming the Law of One Price, let us create a replicating portfolio consisting of m shares of the twin security we have found above and B (risk-free) bonds to replicate the payouts of our project:

Replicating portfolio in the up state ∶ m ⋅ ($34) + (1 + rf )B = 170

35 Figure 2.11: Decision tree

                     Project to be valued    Twin Security
Cash in up state     170                     34
Cash in down state   65                      13

Table 2.2: Cash flows of the project and of the twin security

Replicating portfolio in the down state: m ⋅ ($13) + (1 + rf)B = 65

Doing a simple calculation, we get that m = 5 and B = 0. Thus by the Law of One Price the Present Value of the project is equal to the present value of the replicating portfolio, and so

PV = 5 ⋅ ($20) + (1 + rf) ⋅ $0 = $100,

hence

NPV = $100 − $115/(1 + 0.08) = −$6.48.

To conform with the Law of One Price, some form of replicating portfolio reasoning is also needed for the valuation of the deferral option. Recall that we have already found the twin security; let us use it to create a proper replicating portfolio, reflecting the payoffs from deferring the decision (last column of Table 2.2).

Replicating portfolio in the up state ∶ m ⋅ ($34) + (1 + rf )B = 55

Replicating portfolio in the down state: m ⋅ ($13) + (1 + rf)B = 0

Solving these equations for m and B gives m = 2.62 and B = −$31.53. This means that a portfolio consisting of 2.62 shares of the twin security and a debt of $31.53 in risk-free bonds replicates the payouts of the project with the option. Then by the Law of One Price the

36 Present Value of the project with the flexibility to defer equals the Present Value of the replicating portfolio, so

NPV = 2.62 ⋅ $20 − $31.53 = $20.87 and the value of the deferral option itself is

C0 = $20.87 − (−$6.48) = $27.35

So we conclude that using options adds $27.35 to the value of the project. Informally speaking, this is the main reason for using real options - they help to value the project’s flexibility.
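The arithmetic of this example can be reproduced with a few lines of code. The following minimal C++ sketch (ours, not the thesis program) solves the two replicating-portfolio systems above:

// Replicating portfolio: solve m*S_up + (1+rf)*B = payoff_up and
// m*S_down + (1+rf)*B = payoff_down for shares m and bond position B.
#include <cstdio>

struct Portfolio { double m, B; };

Portfolio replicate(double sUp, double sDown, double payUp, double payDown, double rf) {
    double m = (payUp - payDown) / (sUp - sDown);
    double B = (payUp - m * sUp) / (1.0 + rf);
    return {m, B};
}

int main() {
    const double rf = 0.08, spot = 20.0;

    // Replicate the project itself (payoffs 170 / 65, twin security 34 / 13).
    Portfolio p1 = replicate(34, 13, 170, 65, rf);
    double pv1 = p1.m * spot + p1.B;                      // = 100
    std::printf("project: m = %.2f B = %.2f PV = %.2f NPV = %.2f\n",
                p1.m, p1.B, pv1, pv1 - 115.0 / (1.0 + rf));

    // Replicate the deferral option (payoffs 55 / 0).
    Portfolio p2 = replicate(34, 13, 55, 0, rf);
    double pv2 = p2.m * spot + p2.B;                      // ~ 20.87
    std::printf("with option: m = %.2f B = %.2f NPV = %.2f\n", p2.m, p2.B, pv2);
    return 0;
}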

2.3.6 Real Options for the simplified example

We have just seen how the replicating portfolio approach to valuing real options can be applied to a 'textbook' example. In this section we shall investigate how one can apply the replicating portfolio approach to value a more interesting 'option', namely a part of the small example with one development and one extension. In Figure 2.12 a 'decision tree' for node E is depicted. Note that the difference from Figure 2.6 is that we have cash flows, and not utilities, to the right of the terminal nodes.

Figure 2.12: Tree For Replicating Portfolio Example

We will show how the replicating portfolio approach can be used to value this part of the project and find the optimal strategy. In general, the idea of finding a replicating portfolio naturally relies on the law of one price: when such a replicating portfolio is found, the law of one price guarantees that the present value of our project is just the current price of the replicating portfolio. In the classical 'replicating portfolio' approach we find a combination of a market-traded asset and a risk-free bond that is perfectly correlated with the asset we want to value. In our problem, however, the overall market is not perfectly correlated with the asset, and conditional on the market return there is still some uncertainty in the behavior of the development. The natural extension is to replicate the conditional expectation of the asset value, given the level of the market return.

E(s) = 0.01 ⋅ (−0.9) + 0.99 ⋅ (−1.9) = −1.89
E(t) = 0.005 ⋅ (−0.9) + 0.995 ⋅ (−1.9) = −1.895
E(u) = 0.01 ⋅ (−0.91) + 0.99 ⋅ 0 = −0.0091
E(v) = 0.005 ⋅ (−0.91) + 0.995 ⋅ 0 = −0.00455

Recall that the risk-free rate rf is 0.02. Now

Replicating portfolio in the up state ∶ −1.89 = 1.25 ⋅ B1 + 1.02 ⋅ m1

Replicating portfolio in the down state ∶ −1.895 = 0.8 ⋅ B1 + 1.02 ⋅ m1

Solving this gives B1 = 0.0111, m1 = −1.8665, hence the value is B1 + m1 = −1.8554466. Now

Replicating portfolio in the up state ∶ −0.0091 = 1.25 ⋅ B2 + 1.02 ⋅ m2

Replicating portfolio in the down state ∶ −0.00455 = 0.8 ⋅ B2 + 1.02 ⋅ m2

Solving this gives B2 = −0.010111, m2 = 0.0034695, hence the value is B2 + m2 = −0.006641. This means that it is better to wait.
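The same computation can be written compactly. The following minimal C++ sketch (using the cash flows and probabilities of the small example) reproduces the conditional expectations and the two replicating portfolios above:

// Value a branch of node E: conditional expectation given the market state,
// then replicate with the market asset (returns 1.25 / 0.8) and the
// risk-free bond (return 1.02).
#include <cstdio>

// (cashDev, cashNoDev): year-end cash flows with and without the development;
// pUp / pDown: conditional probabilities of the development given a good / bad market year.
double branch_value(double cashDev, double cashNoDev, double pUp, double pDown) {
    double condUp   = pUp   * cashDev + (1 - pUp)   * cashNoDev;   // E[FCF | market up]
    double condDown = pDown * cashDev + (1 - pDown) * cashNoDev;   // E[FCF | market down]
    // Replicate: value_up = 1.25*B + 1.02*m, value_down = 0.8*B + 1.02*m.
    double B = (condUp - condDown) / (1.25 - 0.8);
    double m = (condUp - 1.25 * B) / 1.02;
    return B + m;                                                   // present value
}

int main() {
    double vBuild = branch_value(-0.9, -1.9, 0.01, 0.005);   // ~ -1.8554
    double vWait  = branch_value(-0.91, 0.0, 0.01, 0.005);   // ~ -0.0066
    std::printf("build = %.4f  wait = %.4f -> %s\n",
                vBuild, vWait, vWait > vBuild ? "wait" : "build");
    return 0;
}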

2.3.7 Pros and Cons of Real option analysis

The approach works very well for small examples. It gives us the possibility to value the flexibility of a project and allows us to use information about the risk aversion present on the market. However, our original problem is much bigger. Indeed, we want to find the value of a project lasting over 15 years. Storing all the necessary information (decision nodes, uncertainty nodes for the market return, development uncertainty nodes) would not be possible for such a long time period. We will address this issue in the next chapter, after describing our problem in a slightly more formal manner.

Chapter 3

Formal Description of the Problem and the Suggested Approach

As we have seen in the previous chapters, classical tools are not immediately applicable to our problem, so we had to design an algorithm that would

• comply with the aim of valuing the real options emerging in the project in a 'sensible', market-motivated way,

• be able to handle a duration of over 15 years.

In this chapter we will formally describe the problem and suggest an approach to solving it. First we concentrate on modeling the AEX index, which is going to be the main driver of the likelihood of developments in our model. It will be assumed that the likelihood depends on the values of the annual market returns in two consecutive years. This relationship will then be described. Afterwards we will give the formal description of the problem and present how it will be solved.

3.1 Market Return and the AEX Index

In Chapter 2, Section 2.3.3, we already mentioned that our approach will rely on the certainty-equivalent form of the CAPM as the pricing formula. Hence we require the distribution of the 'market rate of return', which can be calculated from the AEX index, namely by taking rM(t) = AEX(t + ∆)/AEX(t) − 1. All together, this gives sufficient motivation to have a closer look at the AEX index and approaches to modeling it. Perhaps the oldest model for that is Geometric Brownian Motion, which in general can be used to model the price of underlying assets, in particular stocks. Let us briefly recall what geometric Brownian motion is. If {X(t)}t≥0 is a standard Brownian motion, then the random process {Y(t)}t≥0 defined by

Y (t) ∶= Y0 exp (aX(t) + bt), t ≥ 0 is called geometric Brownian motion.
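As a side illustration (not part of the thesis), a geometric Brownian motion path can be simulated on a yearly grid as follows; the parameters Y0, a and b below are invented:

// Simulate Y(t) = Y0 * exp(a*X(t) + b*t) on a yearly grid, where X is a
// standard Brownian motion, so annual returns are independent and log-normal.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double y0 = 100.0, a = 0.25, b = 0.05;    // illustrative parameters
    const int years = 10;

    std::mt19937 gen(42);
    std::normal_distribution<double> z(0.0, 1.0);

    double x = 0.0;                                  // Brownian motion X(t), X(0) = 0
    for (int t = 1; t <= years; ++t) {
        x += z(gen);                                 // increment over one year ~ N(0,1)
        double y = y0 * std::exp(a * x + b * t);
        std::printf("year %2d  Y = %8.2f\n", t, y);
    }
    return 0;
}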

If the stock price follows geometric Brownian motion, then the returns over disjoint time intervals are independent. Indeed,

Y(t)/Y(t − ∆) = exp(a(X(t) − X(t − ∆)) + b∆),

Y(t − ∆)/Y(t − 2∆) = exp(a(X(t − ∆) − X(t − 2∆)) + b∆),

and X(t) − X(t − ∆) is independent of X(t − ∆) − X(t − 2∆). Furthermore, it is easy to see that the natural logarithms of the returns are normally distributed; similarly, the natural logarithms of returns over disjoint time intervals are independent normal random variables. It is not within the scope of this thesis to concentrate on modeling the AEX index; we only want to have an approximation of the annual market returns. For that we assumed that the logarithms of the annual returns are independent and identically normally distributed. To test this assumption we looked at the monthly data for the AEX over the past 30 years (see Figure 3.2). The annual returns, obtained at year t as the fraction AEX(t)/AEX(t − 12), can easily be computed from this AEX data and are displayed in Figure 3.3. We have performed a few normality tests; the results are summarized in Table 3.1. It can be seen that all the tests accept the hypothesis that the logarithms of the annual returns are a sample from a normal distribution. The independence hypothesis was tested and not rejected using the ttest2 Matlab routine.

Test                 Result    Implementation
Kolmogorov-Smirnov   Accepts   Matlab, kstest
Shapiro-Wilk         Accepts   Matlab, [10]
t-test               Accepts   Matlab, ttest
Jarque-Bera          Accepts   Matlab, jbtest
Lilliefors           Accepts   Matlab, lillietest
z-test               Accepts   Matlab, ztest

Table 3.1: Summary of normality tests for annual log-returns.

All the implementations are default Matlab implementations, except for the Shapiro-Wilk test, which is taken from [10]. The Q-Q plot of the data is shown in Figure 3.1. Thus, given the data of approximately 30 years of the AEX, we cannot reject the hypothesis that the logarithms of the yearly returns come from an i.i.d. sequence of normal random variables with mean and variance estimated from the data. For our purposes we need a discrete approximation to the distribution of market returns. In order to get a discrete approximation to the distribution of the annual returns it suffices to produce a sufficiently nice discrete approximation to the normal distribution and 'take the exponent'. For the sake of simplicity we assume for now that the annual return R is a binomial (two-point) random variable, taking values p and q with equal probabilities. The values p and q are computed from the expectation and standard deviation of the data displayed in Figure 3.3.

40 Figure 3.1: Q-Q plot for annual log-returns.

The mean of the annual returns obtained from the AEX data equals 1.0939 and their standard deviation is 0.2704. From this data we obtained a binomial random variable R such that ER = 1.0939 and σ(R) = 0.2704; the approximation to the rate of return is thus r := R − 1 and is displayed in Figure 3.4. This is in a sense similar to the so-called 'binomial lattices' that are used for modeling the price of the underlying asset in option valuation (see [13], [2]). Alternatively, we also implemented a Monte Carlo simulation for the market return, given that it follows a log-normal distribution with parameters estimated from the data. More details will be provided in Section 4.1. However, in practice GBM is not the best model for the AEX, particularly when we are interested in modeling more frequent changes of the index (which is important for the stock market). One of the drawbacks is that GBM implies constant volatility of the index, whereas in reality the (historical/implied) volatility is not constant. For more information about the behavior of the AEX and models for it see [12].
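The two values of the binomial approximation follow directly from the sample mean and standard deviation; a minimal C++ sketch (ours) of this computation is:

// Two-point ('binomial') approximation of the annual return R: values p and q
// with equal probability, matching sample mean 1.0939 and std. dev. 0.2704.
#include <cstdio>

int main() {
    const double mean = 1.0939, sd = 0.2704;
    // With equal probabilities, E[R] = (p+q)/2 and sigma(R) = |p-q|/2,
    // so p = mean + sd and q = mean - sd.
    double p = mean + sd;      // 1.3643  (rate of return +0.3643)
    double q = mean - sd;      // 0.8235  (rate of return -0.1765)
    std::printf("R takes %.4f or %.4f with probability 0.5 each\n", p, q);
    std::printf("rates of return: %+.4f / %+.4f\n", p - 1.0, q - 1.0);
    return 0;
}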

3.2 Conditional probabilities of developments given history of AEX

Suppose we’re currently at the moment of time t, we know AEX (t), AEX (t−1), AEX (t− 2) and we want to know what is the conditional probability of a development, given the recent history of AEX index and our current network. Our developments happen in steps so we want to know the conditional probability of ‘step up’ at time t for the development we’re considering.

Figure 3.2: AEX history.

We assume the following:

P(STEP UP | our network at t & step up can occur at t & AEX(s), s ≤ t) = U(AEX(t−1)/AEX(t−2), AEX(t)/AEX(t−1))

Let us explain the formula a bit and make some assumptions about the function U.

• The formula tells us that the conditional probability of a ‘step up’ for the development, given that this ‘step up’ can occur (that is, the development isn’t already 100% complete) AND the history of the AEX until now AND our current network configuration, is a function U(⋅, ⋅) of the market return AEX(t−1)/AEX(t−2) over the time period [t − 2, t − 1] and of the market return AEX(t)/AEX(t−1) over the time period [t − 1, t].

• If the development is already full, then the conditional probability of a ‘step up’ is 0.

• The bivariate function U(⋅, ⋅) becomes an increasing univariate function if we fix either of the variables and let the other one vary (so U(x0, ⋅) and U(⋅, y0) are increasing functions). This comes from the assumption that developments are more likely to occur if the economy is doing well.

• We assume that the worst possible value of AEX(s)/AEX(s−1) is 0.33 and the best possible value is 3.0. These are fairly reasonable bounds if we want to stay on the safe side, since an annual return close to 2 has already been observed.

This means that the conditional transition probability is a function U of the market returns in the two preceding years. The function we choose has to satisfy some reasonable assumptions. We first discuss a one-dimensional function V of the annual market return. It is reasonable to assume that the annual return cannot be bigger than 3 and smaller than 1/3, just to stay on the safe side (recall Figure 3.3 with the historical data of annual returns).

Figure 3.3: Annual returns.

Figure 3.4: Binomial approximation to annual rate of return.

This assumption bounds the domain of definition of this function. Moreover, this function should be non-decreasing. In the case of no growth (x = 1) the function takes a certain value, say b, and if x < 1 (x > 1) it is reasonable to assume that V is convex (concave). One of the possible functions that satisfies the above constraints is:

V_{a,b}(x) = (1 − b) sin((π/2)(log_3 x)^a) + b   for 1 < x < 3
V_{a,b}(x) = b sin(−(π/2)(−log_3 x)^a) + b        for 1/3 < x < 1

Parameter a here controls the shape of the function, and parameter b determines the value at x = 1. Figures 3.5 and 3.6 show the plots of V for different values of the parameters. For our problem we need a bivariate function U whose marginal functions have properties similar to those of V. Moreover, we assume that the value of the market return over the time interval [t−2, t−1] is ‘p times more influential’ for the value of the transition probability than the value of the market return over the time interval [t−1, t]. The following function is the extension

Figure 3.5: V0.6,0.4

Figure 3.6: V0.6,0.5

of V to the bivariate case that has the required properties:

U_{a,b,p}(t, s) = V_{a,b}(√(p t² + (1 − p) s²))

Parameter a here controls the shape of the function, parameter b determines the value at t = 1, s = 1, and parameter p determines ‘how much influence the value of the growth in the first year has as compared to the growth in the second’. Figures 3.7 and 3.8 show the plots of U for different values of the parameters. To find the parameters of the function U_{a,b,p} we asked questions about its values at different points. For each development we asked 3 questions: one about the value of U_{a,b,p} at the point (1, 1) (which corresponds to no growth in both years), one about the value at (1.1, 1.1) (that is, the conditional probability of the development in case of 10% growth in both years), and one about the value at (1, 1.1). The questions were formulated as follows:

• What is the probability that X is developed if there’s no AEX growth in the first year and no AEX growth in the second year?

• What is the probability that X is developed if there’s 10% AEX growth in the first year and 10% AEX growth in the second year?

Figure 3.7: U0.7,0.5,0.1

Figure 3.8: U0.7,0.5,0.5

• What is the probability that X is developed if there’s no AEX growth in the first year and 10% AEX growth in the second year?

Given the answers to these questions, the parameters a, b, p can be found. In Table 3.2 we present the answers to the questions and the corresponding parameters.
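To illustrate how such a fit can be done in practice, here is a minimal sketch (our own code, not the one actually used to produce Table 3.2): b is read off directly from the first answer, since U_{a,b,p}(1, 1) = V_{a,b}(1) = b; a is then found from the second answer by solving V_{a,b}(1.1) = Q2, and p from the third by solving U_{a,b,p}(1, 1.1) = Q3. Both equations are monotone in the unknown, so simple bisection suffices.

    #include <cmath>
    #include <cstdio>

    const double PI = 3.14159265358979323846;

    // V_{a,b}: one-dimensional transition probability as a function of the annual market return x.
    double V(double a, double b, double x)
    {
        if (x >= 1.0)
            return (1.0 - b) * std::sin(PI / 2.0 * std::pow(std::log(x) / std::log(3.0), a)) + b;
        return b * std::sin(-PI / 2.0 * std::pow(-std::log(x) / std::log(3.0), a)) + b;
    }

    // U_{a,b,p}(t, s) = V_{a,b}( sqrt(p*t^2 + (1-p)*s^2) ).
    double U(double a, double b, double p, double t, double s)
    {
        return V(a, b, std::sqrt(p * t * t + (1.0 - p) * s * s));
    }

    // Bisection for a decreasing function f on [lo, hi] with f(lo) >= 0 >= f(hi).
    template <class F>
    double bisect(F f, double lo, double hi)
    {
        for (int i = 0; i < 60; ++i) {
            double mid = 0.5 * (lo + hi);
            if (f(mid) >= 0.0) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    int main()
    {
        double Q1 = 0.67, Q2 = 0.95, Q3 = 0.67;   // answers given for 'Houses'
        double b = Q1;                            // U(1,1) = V(1) = b
        // V_{a,b}(1.1) decreases in a, so solve V(a,b,1.1) = Q2 by bisection.
        double a = bisect([&](double x) { return V(x, b, 1.1) - Q2; }, 1e-6, 10.0);
        // U_{a,b,p}(1,1.1) decreases in p, so solve U(a,b,p,1,1.1) = Q3 by bisection.
        double p = bisect([&](double x) { return U(a, b, x, 1.0, 1.1) - Q3; }, 0.0, 1.0);
        std::printf("a = %f, b = %.2f, p = %f\n", a, b, p);   // roughly 0.178, 0.67, 1 as in Table 3.2
        return 0;
    }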

3.3 Developments

Below we briefly describe the developments that can occur in the area of Haarlemmermeer and formalize our assumptions about the random processes involved. For simplicity we assume that transitions take 0 time, that developments cannot be reversed, and that they cannot move ‘2 steps up’ instantly. The numbers in the state spaces for every process below are precisely the capacity required by the corresponding development. Initially all processes associated with the developments take the value 0, which reflects the fact that there are no developments at the moment. Furthermore, developments can change their state only every 2 years.

• Houses

Random process PH (t) with values in ST1 = {0, 5, 10}.

Development         Q1    Q2    Q3    a             b     p
Houses              0.67  0.95  0.67  0.178234      0.67  1
Offices             0.25  0.99  0.5   0.044954476   0.25  0.999999999999982
Greenhouses         0.85  0.99  0.85  0.108920      0.85  1
Wind farm           0.99  1.0   1.0   0.0000000001  0.99  0.0000000001
Data center         0.85  0.99  0.85  0.108920      0.85  1
Industr. logistics  0.25  0.99  0.5   0.044954476   0.25  0.999999999999982

Table 3.2: Answers and estimated parameters

• Offices

Random process PO(t) with values in ST2 = {0, 3, 6}.

• Data center

Random process PD(t) with values in ST3 = {0, 75}

• Greenhouses

Random process PG(t) with values in ST4 = {0, 60, 120}.

• Wind park

Random process PW (t) with values in ST5 = {0, 15}

• Logistics area

Random process PA(t) with values in ST6 = {0, 15}

Let us define the development process to be

D(t) ∶= (PH (t), PO(t), PD(t), PG(t), PW (t), PA(t)) with values in state space S1 ∶= ST1 × ST2 × ST3 × ST4 × ST5 × ST6.
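A brief sketch of how this state can be held in code (the names below are ours; the actual program stores the state in the dev array of the gnode structure shown in Chapter 4): one array entry per development suffices, and the total capacity demanded by a development state is simply the sum of its entries.

    #include <cstdio>

    const int NDEV = 6;   // houses, offices, data center, greenhouses, wind park, logistics area

    // The state of D(t): one entry per development, holding the capacity (in MVA)
    // currently demanded by that development (0 if it has not started yet).
    struct DevState { int d[NDEV]; };

    // Total capacity demand of a development state; by construction the state-space
    // values listed above are exactly the capacities required by each development.
    int total_demand(const DevState &s)
    {
        int sum = 0;
        for (int i = 0; i < NDEV; ++i)
            sum += s.d[i];
        return sum;
    }

    int main()
    {
        DevState initial = {{0, 0, 0, 0, 0, 0}};        // no developments at the start
        DevState full    = {{10, 6, 75, 120, 15, 15}};  // every development fully built
        std::printf("demand initially: %d MVA, fully developed: %d MVA\n",
                    total_demand(initial), total_demand(full));
        // The state space S1 contains 3*3*2*3*2*2 = 216 possible combinations.
        return 0;
    }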

3.4 Extensions, Investments

As discussed earlier, the development of houses and the investments for these houses are quite independent of the investments in the area of RSH. This allows us to solve the problem for houses separately from the other developments. The state of the network is a random process I(t), with values of the form (v1, v2) for v1 ∈ {S0, S1, S2, S3, S4} and v2 ∈ {H0, H1}. The values of v1 and v2 are presented in Table 3.3 and Table 3.4, respectively. The investments can be built in stages described by the transition diagram given in Figure 3.9. Durations and costs of the transitions are specified in Table 3.5. Note that to simplify the transition diagram we have omitted ‘do not move’ transitions, which have zero cost and duration. The transition to state S0 is a special one - it is not affected by our decisions; once the network ends up in state S4 while the capacity in RSH is insufficient, the

State  Network description                                                     Cap. in RSH, MVA
S1     2x80 MVA transformers, expansion gear from 150kV network; 20kV station  62
       RSH, 24 km 3x20 MVA cables HMM-RSH; 12 km 1x20kV-ring
S2     2x80 MVA transformers, expansion gear from 150kV network; 20kV station  82
       RSH, 24 km 3x20 MVA cables HMM-RSH; 14 km 2x20kV-ring
S3     2x80 MVA transformers, expansion gear from 150kV network; 20kV station  102
       RSH, 32 km 4x20 MVA cables HMM-RSH; 16 km 3x20kV-ring
S4     2x80 MVA transformers, expansion gear from 150kV network; 20kV station  122
       RSH, 40 km 5x20 MVA cables HMM-RSH; 22 km 4x20kV-ring
S0     EMERGENCY PLAN                                                          242

Table 3.3: Description of the first state variable

State  Network description  Cap. in housing area, MVA
H0     -                    0
H1     12 km 20 kV ring     20

Table 3.4: Description of the second state variable.

transition to S0 commences automatically (hence we do not value an option of triggering this emergency mode). Next, let’s consider the second state variable v2 ∈ {H0, H1}. The states are specified in Table 3.4, and the transition diagram between them is given in Figure 3.10. Durations and costs of the transitions are specified in Table 3.6. Note that to simplify the transition diagram we have omitted ‘do not move’ transitions, which have zero cost and duration. So the state space of I(t) is {S0, S1, S2, S3, S4} × {H0, H1}. Furthermore, we can make investments (that is, build extensions) every 2 years or every 4 years. After formalizing the assumptions and the available information, we will now present the way we plan to solve the decision problem of Liander.

Figure 3.9: Transition diagram for 1st state variable.

Figure 3.10: Transition diagram for 2nd state variable.

3.5 Suggested Approach

The suggested approach is itself a dynamic programming approach, relying on the single-period Capital Asset Pricing Model. For the moment, developments are described by the random process {D(t)}t∈N with values in the state space S1. Our investments are described by the random process {I(t)}t∈N with values in the state space S2. At any moment of time t ∈ N we observe the state of the developments D(t−1) together with the market situation and can make some investments in our network. We will always try to maximize our ‘profit’ in a certain sense and benefit from learning; thus we want to value an option, rather than a static investment strategy. At time t, a cash flow F(t) is generated, and its value is determined by the state of the developments D(t) and our investments I(t). More precisely, it is composed of profits from customers and penalties in case we fail to comply with our obligations. Define the random process {Z(t)}t := {(D(t), I(t))}t with values in the state space S1 × S2;

Transition  Cost, M EU  Time, years
T1          0.6         1
T2          2.6         1
T3          4.9         1
T4          2           1
T5          2.3         1
T6          4.3         1
T0          30          0

Table 3.5: Possible transitions for 1st state variable with their costs and time.

Transition  Cost, M EU  Time, years
U0          2           1

Table 3.6: Possible transitions for 2nd state variable with their costs and time.

hence we conclude that F(t) = G(Z(t)) for some function G. We will denote the possible states of the processes {D(t)}t, {I(t)}t, {Z(t)}t at time t by ran(D(t)), ran(I(t)), ran(Z(t)), respectively. We assume that the project lasts T ∈ N time periods. To solve the decision problem we will:

1. Build a decision tree describing all possible developments and our investments. At time slice t ∈ N, the set of nodes is given by all possible states of the process Z(t). Edges connecting nodes of time slice t − 1 to nodes of time slice t correspond to possible state transitions of the process {Z(t)} between ran(Z(t − 1)) and ran(Z(t)).

2. Find the Present Value recursively using CAPM.

(a) It is clear that one step before the end of the project we would simply want to make a decision about I(T) that maximizes the present value of the cash flow F(T) at time T plus F(T − 1), that is

PV_{T−1} = max_{z ∈ ran(I(T))} {F(T − 1) + PV[F(T) | I(T) = z]}

Value-maximization of the investment process now implies that

I(T) = argmax_{z ∈ ran(I(T))} {F(T − 1) + PV[F(T) | I(T) = z]}

Here [F(T) | I(T) = z] is the conditional distribution of the cash flow F(T) at time T given that our investment process I(⋅) is in the state z at time T. PV stands for the present value, derived through CAPM. Observe now that PV_{T−1} is a function of Z(T − 1), hence a random ‘cash flow’ itself.

(b) Similarly,

PV_{T−2} = max_{z ∈ ran(I(T−1))} {F(T − 2) + PV[PV_{T−1} | I(T − 1) = z]}

Value-maximization now implies that

I(T − 1) = argmax_{z ∈ ran(I(T−1))} {F(T − 2) + PV[PV_{T−1} | I(T − 1) = z]}

(c) In general,

PV_{T−k} = max_{z ∈ ran(I(T−k+1))} {F(T − k) + PV[PV_{T−k+1} | I(T − k + 1) = z]}

Value-maximization now implies that

I(T − k + 1) = argmax_{z ∈ ran(I(T−k+1))} {F(T − k) + PV[PV_{T−k+1} | I(T − k + 1) = z]}

3. Note that this optimization problem corresponds to rolling back the tree.

Let us elaborate a bit more on the way we compute present values through the CAPM model. In all our computations we assume that we have a discrete approximation to the distribution of the market rate of return over the time period, namely r_d is a discrete random variable taking values r_1, . . . , r_k with probabilities p_1, . . . , p_k. Now suppose we want to compute PV[F(T) | I(T) = z] with the following form of CAPM, which suggests valuing the future cash flows by

PV(FCF) = (E(FCF) − λ ⋅ Cov(r_d, FCF)) / (1 + r_f)

with λ - the ‘market price of the risk’ - given by

λ = (E(r_d) − r_f) / Var(r_d).

The crucial observation now is that, given our assumptions about the AEX and the way we get the distribution of market rates of return, these rates of market return over disjoint time intervals form an i.i.d. sequence of random variables. Hence, when we want to find covariances and expectations at some node in the tree, we do not need to know the history of the index. We simply plug the conditional distribution [F(T) | I(T) = z] into the pricing formula and get

PV([F(T) | I(T) = z]) = (E([F(T) | I(T) = z]) − λ ⋅ Cov(r_d, [F(T) | I(T) = z])) / (1 + r_f)

Let’s compute E([F (T )SI(T ) = z]) first. We have

k E([F (T )SI(T ) = z]) = Q piE[F (T )SI(T ) = z, rd = ri] = i=1 k lz = Q Q piρi,j;zF (T )(j, z) i=1 j=1

Indeed, since all state spaces in our problem are finite, the conditional distribution [F (T )SI(T ) = z, rd = ri] is a discrete random variable, so it takes values F (T )(1, z), ... ,

F (T )(lz, z) with probabilities ρi,1;z, . . . , ρi,lz;z. In the notation for these probabilities we state explicitly that they do depend on the market return.

50 Similarly, to compute the Cov(rd, [F (T )SI(T ) = z]) we write

k E(rd ⋅ [F (T )SI(T ) = z]) = Q piriE[F (T )SI(T ) = z, rd = ri] = i=1 k lz = Q Q piρi,j;zriF (T )(j, z) i=1 j=1

We can compute the other Present Values in a similar manner. To get a better feeling for the conditional distributions appearing here, one can have a look at Figure 3.11. The square node at the root of the picture corresponds to a decision node; we can make decisions z1 and z2, which are marked with bold black arrows. Dashed lines leaving the green nodes correspond to the uncertain change of the market rate of return: it can take the value r1 with probability p1 and r2 with probability p2 (and of course the behavior of the market return does not depend on our investment decision). Finally, thin black arrows leading to the right-most triangular nodes correspond to changes of the development in the area, conditional on the behavior of the market return this year. The conditional distribution [F(T) | I(T) = z1] in this case is the discrete distribution taking the value F(1, z1) with probability p1 ⋅ ρ(1, 1; z1) + p2 ⋅ ρ(2, 1; z1) and the value F(2, z1) with probability p1 ⋅ ρ(1, 2; z1) + p2 ⋅ ρ(2, 2; z1).

Since the distribution of the market rate of return remains the same given our assumptions, it does not make sense to store it in the tree.

It might also be interesting to realize at this point something we have mentioned before, namely that in the theory so far there is no distinction between a sequence of random future cash flows coming from some dynamic value-maximizing strategy (as the one provided by the optimization procedure above), which we call ‘an option’, and a simple sequence of random future cash flows, typical for a ‘project without options’. We have already seen in the workout of the ‘textbook’ real option example that the correct derivation of the Net Present Value of the project shares many similarities with Real Option Valuation. Now it works ‘the other way round’, namely we use CAPM - a standard tool for finding the Net Present Value with the proper discounting rate - to value cash flows coming from an option. Perhaps one can say that the whole distinction between these two concepts is artificial, even though Net Present Value might seem deceivingly simple, while in Real Option valuation one is forced to think carefully about the proper risk-adjusted discounting rate. This point of view is also discussed with greater care and detail in [3].
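To make the pricing step concrete, the following is a minimal sketch (our own helper, not the thesis program) of the valuation of a single conditional cash-flow distribution: every joint outcome carries a probability p_i ⋅ ρ_{i,j;z}, a market rate of return r_i and a cash-flow value, and the CAPM formula above is applied directly. The numbers in the demonstration are those used for node C of the small example valued in section 3.6 below, so the printed value can be compared with the hand computation there (up to small rounding differences).

    #include <cstdio>
    #include <vector>

    // One joint outcome of the conditional distribution: probability p_i * rho_{i,j;z},
    // market rate of return r_i and the corresponding cash-flow value.
    struct Outcome { double prob, rm, fcf; };

    // One-period CAPM present value of a discrete cash-flow distribution:
    // PV = (E(FCF) - lambda * Cov(r_d, FCF)) / (1 + r_f), lambda = (E(r_d) - r_f) / Var(r_d).
    double capm_pv(const std::vector<Outcome> &o, double rf)
    {
        double Ef = 0.0, Er = 0.0, Er2 = 0.0, Erf = 0.0;
        for (std::vector<Outcome>::const_iterator it = o.begin(); it != o.end(); ++it) {
            Ef  += it->prob * it->fcf;
            Er  += it->prob * it->rm;
            Er2 += it->prob * it->rm * it->rm;
            Erf += it->prob * it->rm * it->fcf;
        }
        double var    = Er2 - Er * Er;
        double lambda = (Er - rf) / var;
        double cov    = Erf - Er * Ef;
        return (Ef - lambda * cov) / (1.0 + rf);
    }

    int main()
    {
        // The four joint outcomes used for node C (strategy 'wait') in section 3.6:
        // market up (r = 0.25) or down (r = -0.2) with probability 0.5 each, and a cash
        // flow of 1 with conditional probability 0.01 (up) or 0.005 (down), 0 otherwise.
        std::vector<Outcome> cw;
        Outcome o1 = {0.5 * 0.01,  0.25, 1.0};  cw.push_back(o1);
        Outcome o2 = {0.5 * 0.99,  0.25, 0.0};  cw.push_back(o2);
        Outcome o3 = {0.5 * 0.005, -0.20, 1.0}; cw.push_back(o3);
        Outcome o4 = {0.5 * 0.995, -0.20, 0.0}; cw.push_back(o4);
        double pv = -1.9 + capm_pv(cw, 0.02);   // immediate cash flow plus the discounted term
        std::printf("PV(Cw) = %.4f\n", pv);     // approximately -1.8927
        return 0;
    }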

3.6 Dynamic Programming Valuation for the small example

Now we’ll show how the dynamic programming approach outlined in the previous section works for the small example. Again, we represent the data in form of a tree, see (Figure 3.12). We stress that there’re no uncertainty nodes for the market return now in the tree. Time step T − 1: (1) Node B.

Figure 3.11: Conditional distributions and market return.

Here we have only one possible strategy since the extension is already built, and the development also cannot change its state. Hence the future cash flow is certain and thus uncorrelated with the market returns, and we can simply value it using the NPV formula:

PV(Bw) = −0.9 + 1/(1 + r_f) = 0.080392

So PV(B) = 0.080392.

(2) Node C.
Here we have only one possible strategy since the extension is already built, but how the development will behave is uncertain. Thus we have to use CAPM to value it:

PV(Cw) = −1.9 + (E_Cw(FCF) − λ ⋅ Cov_Cw(rm, FCF)) / (1 + r_f)

with λ - ‘market price of the risk’ - given by

λ = (E(rm) − r_f) / Var(rm)

Figure 3.12: Tree Structure for Dynamic Programming

Here

E_Cw(FCF) = 0.5 ⋅ 0.01 ⋅ 1 + 0.5 ⋅ 0.99 ⋅ 0 + 0.5 ⋅ 0.005 ⋅ 1 + 0.5 ⋅ 0.995 ⋅ 0 = 0.0075

E(rm) = 0.5 ⋅ 0.25 + 0.5 ⋅ (−0.2) = 0.025,  Var(rm) = 0.0506,  λ = 0.09881

Cov_Cw(rm, FCF) = E(rm ⋅ FCF) − E(rm)E(FCF) = (0.5 ⋅ 0.01 ⋅ 1 ⋅ 0.25 + 0.5 ⋅ 0.99 ⋅ 0.25 ⋅ 0 + 0.5 ⋅ 0.005 ⋅ 1 ⋅ (−0.2) + 0.5 ⋅ 0.995 ⋅ 0 ⋅ (−0.2)) − 0.025 ⋅ 0.0075 = 0.0005625

Hence we conclude that PV(Cw) = −1.9 + (0.0075 − 0.09881 ⋅ 0.0005625)/1.02 = −1.8927, and PV(C) = −1.8927.

(3) Node D
In this case the cash flows from the development are certain for every strategy, hence we can find the present value simply by discounting at the risk-free rate. It is also clear that we should choose strategy Db, that is, to build the extension. Then PV(D) = PV(Db) = −0.9 + (−0.91)/(1 + 0.02) = −1.79235.

(4) Node E
Here we have both the flexibility of the developments and of our decisions. Let’s value strategy

Eb first. Here

E_Eb(FCF) = 0.5 ⋅ 0.01 ⋅ (−0.9) + 0.5 ⋅ 0.99 ⋅ (−1.9) + 0.5 ⋅ 0.005 ⋅ (−0.9) + 0.5 ⋅ 0.995 ⋅ (−1.9) = −1.8925

E(rm) = 0.5 ⋅ 0.25 + 0.5 ⋅ (−0.2) = 0.025,  Var(rm) = 0.0506,  λ = 0.09881

Cov_Eb(rm, FCF) = E(rm ⋅ FCF) − E(rm)E(FCF) = (0.5 ⋅ 0.01 ⋅ (−0.9) ⋅ 0.25 + 0.5 ⋅ 0.99 ⋅ (−1.9) ⋅ 0.25 + 0.5 ⋅ 0.005 ⋅ (−0.9) ⋅ (−0.2) + 0.5 ⋅ 0.995 ⋅ (−1.9) ⋅ (−0.2)) + 0.025 ⋅ 1.8925 = 0.0005625

Hence we conclude that PV(Eb) = 0 + (−1.8925 − 0.09881 ⋅ 0.0005625)/1.02 = −1.85545. Now let’s value strategy Ew. Here

E_Ew(FCF) = 0.5 ⋅ 0.01 ⋅ (−0.91) + 0.5 ⋅ 0.99 ⋅ 0 + 0.5 ⋅ 0.005 ⋅ (−0.91) + 0.5 ⋅ 0.995 ⋅ 0 = −0.006825

Cov_Ew(rm, FCF) = E(rm ⋅ FCF) − E(rm)E(FCF) = (0.5 ⋅ 0.01 ⋅ (−0.91) ⋅ 0.25 + 0.5 ⋅ 0.99 ⋅ 0 ⋅ 0.25 + 0.5 ⋅ 0.005 ⋅ (−0.91) ⋅ (−0.2) + 0.5 ⋅ 0.995 ⋅ 0 ⋅ (−0.2)) + 0.025 ⋅ 0.006825 = −0.00051187

Hence we conclude that PV(Ew) = 0 + (−0.006825 + 0.09881 ⋅ 0.00051187)/1.02 = −0.00664159. It follows that strategy Ew is best, and so PV(E) = PV(Ew) = −0.00664159.

Time step T − 2:
We have computed the (optimal) present values of the cash flows at the end of the first year, which are arranged in Table 3.7.

Node  Optimal PV
B     0.0803922
C     -1.8927
D     -1.79235
E     -0.006641

Table 3.7: Present Values at the end of the 1st year

Now we want to compute PV(A) and find the optimal strategy. First, let’s value Ab. Here

E_Ab(FCF) = 0.5 ⋅ 0.01 ⋅ 0.0803922 + 0.5 ⋅ 0.99 ⋅ (−1.8927) + 0.5 ⋅ 0.005 ⋅ 0.0803922 + 0.5 ⋅ 0.995 ⋅ (−1.8927) = −1.8779

Cov_Ab(rm, FCF) = E(rm ⋅ FCF) − E(rm)E(FCF) = (0.5 ⋅ 0.01 ⋅ 0.0803922 ⋅ 0.25 + 0.5 ⋅ 0.99 ⋅ (−1.8927) ⋅ 0.25 + 0.5 ⋅ 0.005 ⋅ 0.0803922 ⋅ (−0.2) + 0.5 ⋅ 0.995 ⋅ (−1.8927) ⋅ (−0.2)) + 0.025 ⋅ 1.8779 = 0.00110986

Hence we conclude that PV(Ab) = 0 + (−1.8779 − 0.09881 ⋅ 0.00110986)/1.02 = −1.84119.

54 Now let’s value Aw. Here

EAw (FCF ) = 0.5 ⋅ 0.01 ⋅ (−1.79235) + 0.5 ⋅ 0.99 ⋅ (−0.006641) + 0.5 ⋅ 0.005 ⋅ (−1.79235)+ + 0.5 ⋅ 0.995 ⋅ (−0.006641) = −0.020034

CovAw (rm, FCF ) = E(rm ⋅ FCF ) − E(rm)E(FCF ) = (0.5 ⋅ 0.01 ⋅ (−1.79235) ⋅ 0.25+ + 0.5 ⋅ 0.99 ⋅ (−0.006641) ⋅ 0.25 + 0.5 ⋅ 0.005 ⋅ (−1.79235) ⋅ (−0.2)+ + 0.5 ⋅ 0.99 ⋅ (−0.006641) ⋅ (−0.2))+ + 0.025 ⋅ 0.020034 = −0.00100446

−0.020034+0.09881⋅0.00100446 Hence we conclude that PV (Aw ) = 0 + 1.02 = −0.0195443. Thus strategy Aw is optimal and the value of this investment is −0.0195443. We conclude that the present value of the whole project is PV (A) = PV (Ab) = 0.0195443. To summarize the optimal strategy, it sounds as follows:

• Wait until the end of the first year

• If the development has occurred, build the extension

• If not, do not build the extension

This implies that the use of ‘options’ here actually adds some value compared to a ‘static’ strategy, which gives advice about what to do independently of the state of the world. On the practical side, this computation also tells us that, even though everything is quite simple, the amount of computation one has to perform can be quite significant even for the simplest example we could construct. However, the approach works quite elegantly and can be applied to our problem.

Chapter 4

Results

In the previous chapter we presented the approach to solve the decision problem described in chapter 1. The problem is quite big, with many developments and possible investments. The investment project should last about 20 years, which makes the problem computationally challenging. Solving this problem by hand is out of the question. There is also no specialized software able to handle such extensive computations. Hence a program in C/C++ has been developed to handle efficiently all the computations required to obtain the results of this project. We have evaluated the project under two possible settings:

• Developments can occur every 2 years, and our decisions can be made every four years.

• Alternatively, developments can occur every 2 years and our decisions can be made every 2 years as well.

In the next section we give some information about software written for the purpose of this project. Then the results of the program will be discussed.

4.1 Program

We briefly describe the program that was written to solve the problem. The program itself is written in C++, and distinctive features of C++ such as classes and templates, which are not present in C, are genuinely used here. The IDE used is Microsoft Visual C++ Express 2010; however, the code does not seem to have any compiler-specific or platform-specific parts, with the possible exception of the language extensions used for the Monte Carlo simulation of market returns that we promised to discuss in section 3.1. The code was also compiled with g++ and executed on Mac OS X running on a MacBook Air, producing the same results for all the tests. The running time on both platforms does not exceed 40 seconds for 18 years of simulation with a time step for our decisions of 4 years and the binomial model for the market return. First of all, we needed a template tree class to store the data associated to the nodes and to represent the parent/child relationship. For that we used the tree.hh library for C++ written by Kasper Peeters [9] and distributed under the GNU General Public License v3. Internally, a template node class is defined that stores the ‘data’ associated to the tree node and 5 pointers to other nodes, namely

• node parent

• previous sibling

• next sibling

• first child

• last child.

See Figure 4.1 for more details about the tree structure implemented with these pointers (dashed arrows correspond to ‘parent’ pointers). In Figure 4.2 you can see the class hierarchy for the library we have used; the figure is part of the documentation generated with Doxygen, an automated documentation generation tool [11].

Figure 4.1: Node structure in the tree.

This class itself is a template class, so it was possible to ‘plug in’ an arbitrary data type as a node; this allows us to store all the necessary data for every node of the tree. A custom structure gnode was defined that stores this data; the definition can be found below, and the full source code of the program is provided in the Appendix.

struct gnode {

Figure 4.2: Class hierarchy for tree.hh library.

public:
    gnode() { for(int j=0;j

#ifdef INCL_PROB
    float p;              // This variable can be used to store the probability of a node in a tree
#endif
#ifdef INCL_YEAR
    unsigned int y;       // This variable can be used to store the number of the year we're in now
#endif
#ifdef INCL_TAG
    bool opt;             // This variable is true if the node corresponds to the optimal decision
#endif
    bool dec;
#ifdef EMER_IMP
    bool em;              // This variable stores whether we are in an emergency mode
#endif
    float cf;             // This variable stores the cash flow
    float pv;             // This variable stores the present value of the upcoming cash flow sequence
    unsigned char dev[NDEV];  // This variable stores the state of every development in the area
    unsigned char ext;        // This variable stores the state of the extension in the area

    friend ostream& operator << (ostream&, gnode&); // Overloaded << operator for printing a node in classical std::cout fashion
    unsigned int getsupply();      // Function returning total supply
    unsigned int getdemand();      // Function returning total capacity demand
    unsigned int getconsdemand();  // Function returning total consumer capacity demand
    void check_emergency();        // Function updating the 'emergency' state
    void updatecf();               // Function updating cash flows
};

Notice that some of these variables appear in the class only if we force the preprocessor to include them with the appropriate #define command. This was done to see how everything works and how additional data affects the memory performance. For the algorithm to produce correct output it is not essential, for instance, to store the probabilities p.

The core algorithm consists of two ‘swings’: the forward swing and the backward swing.

• During the forward swing the tree is built starting from the root, and the cash flows are computed.

• During the backward swing an iterative rollback is applied, and present values are computed using the corresponding formulas. Optimal decision nodes are tagged as ‘optimal’.

Originally we tried to put all types of nodes in the tree, including AEX nodes, but this led to insufficient performance. The key drawback was that we could only handle at most 6 years of the simulation, as opposed to the 15+ years we were originally aiming at. With our suggested approach, however, we can handle 18 years of simulation more or less easily on a laptop, and even more is possible! A particularly good feature of the algorithm is that it can be effectively parallelized for the MPI communication protocol, which is the dominant one in the area of parallel computing and ‘natural’ for cluster-type supercomputer architectures. Indeed, in Figure 4.3 it is easy to see that the computations of the present value of tree sections id1, ..., id6 are independent of each other and hence can be done in parallel on different computers (these different computers are called computing nodes of the cluster). Thus the ‘parallelized’ version of the core algorithm would still consist of two ‘swings’, the forward swing and the backward swing; the difference is that the whole tree is not stored on any single computing node, and for the algorithm to work and produce results these computing nodes communicate while growing the tree and rolling back the results.

• During the forward swing the tree is built starting from the root. The part of the tree coming directly out of the root is stored on the root computing node, which sends commands to other computing nodes to build their part of the tree once it is ready. This procedure is then recursively applied on these nodes, etc.

• During backward swing iterative rollback is applied, present values are computed for corresponding parts of the tree and sent back to ‘parent’ computing nodes, that is, we only send back present values - and not whole subtrees. Optimal decision nodes are tagged as ‘optimal’.

Thus in our example, during the forward swing computing node id0 would store the part labeled id0 in the picture and then send commands to nodes id1, ..., id6 to grow ‘their’ part of the tree. Then, during the backward swing, computing nodes id1, ..., id6 would compute the Present Values and find the optimal strategies, and then send these Present Values back to node id0, which would complete the rollback and determine the optimal strategy for ‘now’. However, we did not implement this parallel version of the algorithm; it is still good to know that there is a natural way to overcome the hardware limitations without changing the core algorithm or ‘adjusting’ the theory. The parallel version would be most desirable if we wanted to simulate a lengthy time period or take into account more developments and extensions. Finally, let’s describe the way we get the distribution of market returns in the program. Two approaches are implemented, namely

Figure 4.3: Possible data splitting for parallel algorithm

• Binomial model. This is the approach discussed in section 3.1. The key advantage of this approach is speed; however, we lack the opportunity to get finer approximations to the distribution of the market return when this is desired.

• Monte Carlo approach. This is an alternative code path, triggered by defining an appropriate macro at the beginning of the main file. The Monte Carlo simulation we perform uses the TR1 extensions to the C++ standard. At the core of the sampling routine implemented in these extensions is the Mersenne Twister algorithm, which generates pseudo-random uniformly distributed numbers that are later used to obtain samples from the distribution we are interested in; a small sketch of this sampling step is given after this list. The key advantage of this approach is the opportunity to get a very fine approximation to the distribution by gathering many samples; the key disadvantage is speed. However, there is no additional memory impact.
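As a rough illustration of this code path (our own sketch, using the standard <random> header instead of the TR1 extensions; the log-return parameters 0.0586 and 0.2591 are the ones appearing in the source listing in the Appendix), sampling an annual market return amounts to drawing a normal log-return and exponentiating it:

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main()
    {
        std::mt19937 eng(12345);                                 // Mersenne Twister engine, fixed seed
        std::normal_distribution<double> dist(0.0586, 0.2591);   // annual log-return parameters from the listing
        const int NSAMP = 100;                                   // samples per valuation, as in the program
        double mean_return = 0.0;
        for (int i = 0; i < NSAMP; ++i)
            mean_return += std::exp(dist(eng)) / NSAMP;          // R = exp(log-return); rate of return is R - 1
        std::printf("sample mean of the annual return: %.4f\n", mean_return);
        return 0;
    }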

4.2 Program Output and Discussion of the Results

We will first compute the value of the project if all the developments occur right now and we build the extensions immediately. This ‘present value’ corresponds to a very static scenario and can be compared to the results obtained using Real Option valuation.

4.2.1 Area of Rijsenhout

Let us concentrate on the developments and extensions in the area of Rijsenhout. We suppose that we build everything right now (including the ‘emergency’ extension) and that the developments occur afterwards. Then in the area of RSH we have a total consumer capacity of 92 MVA, which amounts to 6.9 M EU profit per year. Thus we can compute

NPV_18 = −17 − 30 + Σ_{i=1}^{18} 6.9/(1 + 0.02)^i = 56.4

Similarly, if we restrict the time horizon to 16 years we get

NPV_16 = −17 − 30 + Σ_{i=1}^{16} 6.9/(1 + 0.02)^i = 46.68
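These two figures can be reproduced with a few lines of code (a sketch with our own function name; the outlay of 17 + 30 M EU and the yearly profit of 6.9 M EU are the numbers used above):

    #include <cstdio>

    // NPV of an immediate outlay plus a constant yearly profit received for n years,
    // discounted at the risk-free rate rf.
    double npv(double outlay, double yearly_profit, int n, double rf)
    {
        double v = -outlay, disc = 1.0;
        for (int i = 1; i <= n; ++i) {
            disc /= (1.0 + rf);
            v += yearly_profit * disc;
        }
        return v;
    }

    int main()
    {
        std::printf("NPV_18 = %.2f\n", npv(17.0 + 30.0, 6.9, 18, 0.02));   // about 56.4
        std::printf("NPV_16 = %.2f\n", npv(17.0 + 30.0, 6.9, 16, 0.02));   // about 46.7
        return 0;
    }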

The computations that the program has to perform are quite significant, as can be seen in Table 4.1, where the total size of the tree corresponding to the developments in RSH is presented.

Year  Time step for ext. = 4 years  Time step for ext. = 2 years
1     1          1
2     133        133
3     133        133
4     1105       2883
5     1105       2883
6     13775      28223
7     13775      28223
8     45025      173438
9     45025      173438
10    263045     783894
11    263045     783894
12    599185     2848866
13    599185     2848866
14    2334310    8797866
15    2334310    8797866
16    4401025    23947671
17    4401025    -
18    13307769   -

Table 4.1: Total number of nodes in the RSH tree.

We can see that the data structure needed for the computations grows exponentially, which justifies our decision to write a specially designed, memory-efficient program instead of using general-purpose decision tree analysis software. One can also see that the tree with the time step for our decisions equal to 2 years grows much faster; in particular, we cannot calculate more than 16 years. Let’s concentrate on the binomial model for the market return first.

Strategy              PV
Build all extensions  50.23
Build extensions 1,2  -121.6
Build extension 1     -134.6

Table 4.2: 18 years, ext. step 4 years

In Table 4.2 we present the main results for our problem in the RSH area, where the binomial model is used for the distribution of market returns. The notation used in this table is as follows:

• ‘Build all extensions’ corresponds to the strategy where we build all available extensions immediately.

• ‘Build extensions 1,2’ corresponds to the strategy where we build extensions 1 and 2 immediately and postpone the 3rd one.

• ‘Build extension 1’ corresponds to the strategy where we build extension 1 immediately and postpone the 2nd and 3rd ones.

It is clear that out of these three strategies the best one is building everything right now. This is quite expected, as the developments in the area are quite likely to occur even when the economy is not too healthy (it is not growing). High penalties for insufficient capacity make it much more desirable to build all extensions immediately. Notice that the present value for this strategy given by the program (including the uncertainty of the developments) is quite close to the one given by the simple computation where the developments occur immediately (56.4 compared to 50.23). Again, the reason for that is that the developments will occur with high probability. The alternative strategies do not perform as well. One of the reasons is that the ‘emergency mode’, which fixes a significant lack of capacity in the area, can only be triggered when we already have the 3rd network extension and cannot be initiated otherwise. Previous versions of the software (without the ‘emergency mode’) also showed large negative present values of the project due to the lack of capacity in RSH and the high probabilities of the developments being constructed. In Table 4.3 we present the main results of the project in the RSH area with a time horizon equal to 16 years (as opposed to 18 in the previous case), where the binomial model is used for market returns. First of all, observe that the optimal strategy is still ‘build everything right now’, but now the present value is smaller: the smaller the number of years, the less profit from consumers we receive. In fact, 50.23 − 40.83 ≈ 56.4 − 46.68, so we lose precisely the 2 years of profit coming from consumers. The answer given by ROV is again more or less the same as the simple NPV calculation.

Strategy              PV
Build all extensions  40.83
Build extensions 1,2  -130.8
Build extension 1     -143.74

Table 4.3: 16 years, ext. step 4 years

Strategy              PV
Build all extensions  40.83
Build extensions 1,2  -21.4
Build extension 1     -26.51

Table 4.4: 16 years, ext. step 2 years

We can now see (Table 4.4) the results of the analysis for the RSH area with a time horizon equal to 16 years and a time step for our decisions equal to 2 years. It is interesting to compare them to the results in Table 4.3. Indeed, the optimal strategy is the same and has the same present value (recall that the ‘emergency mode’ is triggered automatically). However, the present values of the suboptimal strategies have changed, namely they have increased. The obvious reason is that with smaller time steps for our decisions we have ‘more time’ to fix an originally bad decision. It is also interesting to check how the answer would change if the conditional probabilities of the developments were smaller.

Strategy                 PV
Build all extensions     -4.02
Build extensions 1,2     -3.36
Build extension 1        -2.89
Do not build extensions  -11.4

Table 4.5: 18 years, ext. step 4 years, reduced probabilities.

Table 4.5 presents the results for the RSH area for a time horizon of 18 years and a time step for our decisions of 4 years, where all probabilities of developments ‘going up’ are multiplied by 0.005. We see that it is no longer the best idea to build all extensions immediately, nor is it optimal to build nothing. The reason for the first is that we do not get much profit; the reason for the second is that we may still face significant penalties. The advised strategy here is building the 1st extension only. Perhaps another interesting test is to see how the computed present values change when we change the parameters of the distribution of the market return. We recall that initially we assessed the expected annual market return to be 1.09 with a standard deviation of 0.27. In Table 4.6 below we present the results obtained when taking the expected annual market return to be 1.02 and 1.16, respectively, while the standard deviation remains the same (0.27). It is clear from the table that the advised strategy is still building all the extensions.

Strategy              PV for rm = 0.02  PV for rm = 0.16
Build all extensions  57.19             41.1
Build extensions 1,2  -157.1            -94.3
Build extension 1     -173.8            -103.8

Table 4.6: 18 years, ext. step 4 years, Alternative parameters for rm

One can see that the present value of this optimal strategy has increased when we took a smaller rm, as compared to the results in Table 4.2. A possible explanation is that by reducing the expected rate of market return we have increased the present values of future cash flows, since the probabilities of the developments occurring are still high. Similarly, the present value of this optimal strategy has decreased when we took a larger rm, as compared to the results in Table 4.2. A possible explanation is that by increasing the expected rate of market return we have decreased the present values of future cash flows, since the probabilities of the developments occurring are still high. Now we can discuss the results produced by the Monte Carlo model for the distribution of the market return. The results for the Rijsenhout area are provided in Table 4.7. The number of samples we took in each Monte Carlo simulation equals 100, and a couple of additional runs of the whole simulation produced exactly the same numbers (thus suggesting that the number of samples is sufficient).

Strategy              PV
Build all extensions  47.36
Build extensions 1,2  -166.5
Build extension 1     -188.8

Table 4.7: 16 years, ext. step 4 years, Monte Carlo approach

Again, the advised strategy is building the extensions now. The present values remain quite close to the original ones and are perhaps a better proxy. The main drawback is that this simulation took several hours, as opposed to less than 30 seconds for the simple binomial model.

4.2.2 Houses

Finally, we come back to the houses. They require a total consumer capacity of 6 MVA, which amounts to 0.45 M EU profit per year. Thus we can compute

NPV_H = −2 + Σ_{i=1}^{18} 0.45/(1 + 0.02)^i = 4.74

Quite similarly to this result, the program gives a PV of 3.48 and advises building the extension immediately.

Chapter 5

Conclusion

The goal of the thesis project was to see how Real Option valuation techniques can be applied to a very practical problem Liander is dealing with, and how these new tools compare to more classical tools from decision analysis. A few approaches were tried and rejected while working on the project and learning more about Real Options; in the end we designed the suggested dynamic programming approach that solves the problem. There are a few concluding observations that we can make. First of all, we have seen that classical decision tree analysis with utility functions can be used to value this kind of project. However, if we are looking for the market value of the project, the conventional approach with subjective utilities is perhaps not the best one. Secondly, we observed that real options present an elegant way of dealing with future cash flows coming from a dynamic value-maximizing strategy, which is often the case for big projects lasting many years. Thus finding the value of the option is simply the natural thing to do if our project has some managerial flexibility in the future, where we can adjust our decisions to newly acquired data. Furthermore, this approach may reveal some hidden value of managerial flexibility in the project, as we have seen in the small example we worked out when illustrating the Dynamic Programming Approach in section 3.6. However, this is not always the case. As it happens with our ‘main’ investment, the penalties and the high probabilities of the developments together force us to build the extensions as soon as possible, and the present value given in this case becomes quite close to the naive deterministic assessment. Finally, on the technical side, we have seen that real option valuation of a large and complicated investment like the one we are considering is a challenging task. Not only does one have to look for appropriate valuation tools from the financial world, but one also has to work hard to implement them. Concerning possible improvements that we can imagine: the first is the implementation of a parallel version of the Dynamic Programming valuation algorithm, as we have already stated in section 4.1. This can extend the time horizon of the simulation, allow for more developments and extensions, and even speed up the computations. The second possible improvement is incorporating more adequate models for the AEX with non-constant volatility, perhaps through a certain combination of a Monte Carlo

67 simulation for the AEX index with variable volatility and the Dynamic Programming approach we’ve suggested.

Appendix

Here we present the source code of the ‘main’ program, computing the value of the project in the area of RSH. The program consists of 3 files: main.cpp, gnode.h and gnode.cpp.

main.cpp:

1 //#define TEST_MC 2 //#define BINAPP 3 //#define MCSTORE 4 #include 5 #include 6 #include 7 #include 8 #ifdef TEST_MC 9 #include 10 #define NSAMP 100 11 std::tr1::mt19937 eng; //a core engine class 12 std::tr1::normal_distribution ⤦ Ç dist(0.0586,0.2591); 13 #endif 14 //#ifdef TEST_MC 15 16 //#endif 17 #include "tree.hh" 18 #include "gnode.h" 19 #include "memory.h" 20 21 #define DO_AEX false 22 #define DO_DEV true 23 #define DO_EXT true 24 #define DO_PRINT 1 25 #define DO_RLBCK_PRINT 0 26 #define DO_PROB_PRINT 1 27 #define DO_OPT 0 28 #define DO_ROLLBACK 29 30 #define NYEARS 18 31 #define EXT_YSTEP 4 32 //#define NYEARS 18 33 //#define EXT_YSTEP4 34 35 // 10 years is already too much for AEX+DEV only

69 36 // 24 years for AEX-only 37 38 #define PDEPTH 2 39 40 #define PFACTOR 1.0 41 // 0.008 check 42 43 double aex_u, aex_d; 44 double aex_val[4]; 45 double aex_1[2]; 46 double aex_val_m[2][2]; 47 double aex_p[4]; 48 double lambda; 49 double exp_return; 50 double var_return; 51 double r_f; 52 53 // devmask: stores all possible combinations of ⤦ Ç area developments;1- to develop,0- not ⤦ Ç now. 54 short devmask[NDEVEXP][NDEV]; 55 56 // devincr: development increment- stores ⤦ Ç increment of percentage of developments, 1.0 ⤦ Ç by default 57 unsigned char devincr[NDEV]; 58 59 60 void getpq(double e1,double std1, double &p, ⤦ Ç double &q) 61 { 62 p = e1+std1; 63 q = e1-std1; 64 } 65 66 void econinit() 67 { 68 devincr[0]=127; 69 #ifndef HOUSES 70 devincr[1]=255; 71 devincr[2]=127; 72 devincr[3]=255; 73 devincr[4]=255; 74 #endif 75 // We initialize increments here of the ⤦ Ç development steps 76 // The true parameters are 1.0939,0.2704 77 getpq(1.02,0.2704,aex_u,aex_d); 78 aex_1[0]=aex_u;

70 79 aex_1[1]=aex_d; 80 aex_val[0] = aex_u*aex_u-1.0; 81 aex_val[1] = aex_u*aex_d-1.0; 82 aex_val[2] = aex_u*aex_d-1.0; 83 aex_val[3] = aex_d*aex_d-1.0; 84 85 aex_p[0] = 0.25; 86 aex_p[1] = 0.25; 87 aex_p[2] = 0.25; 88 aex_p[3] = 0.25; 89 90 r_f = 0.02; 91 92 exp_return = var_return = 0.0; 93 double tv1,tv2; 94 #ifndef TEST_MC 95 for(int i=0;i<4;i++) 96 exp_return+=aex_p[i]*aex_val[i]; 97 for(int i=0;i<4;i++) 98 var_return+=aex_p[i]*aex_val[i]*aex_val[i]; 99 var_return-=(exp_return*exp_return); 100 #endif 101 #ifdef TEST_MC 102 for(int i=0;i &t, int dep) 277 { 278 tree::iterator sib2=t.begin(); 279 tree::iterator end2=t.end(); 280 while(sib2!=end2) { 281 if(t.depth(sib2)<=dep) { 282 for(int i=0; i

75 310 cout< tr; 317 tree::iterator top,root,z; 318 319 top=tr.begin(); 320 root = tr.insert(top,gnode()); 321 322 unsigned char narr[NDEV]; 323 // tlist stores double-linked list of leaf nodes ⤦ Ç of the tree 324 deque::iterator> tlist; 325 root->cf = -12.5f; 326 tlist.push_front(root); 327 int sz=1; 328 for(int y=1;y<=NYEARS;y++) 329 { 330 cout<<"Year"<::iteratort= tlist.front(); 338 tlist.pop_front(); 339 gnode tn=( *t); 340 //tn.aex*=aex_u; 341 tree::iterator fc= tr.append _child(t,tn); 342 tlist.push_back(fc); 343 tn=( *t); 344 //tn.aex*=aex_d; 345 fc= tr.append _child(t,tn); 346 tlist.push_back(fc); 347 } 348 } 349 */ 350 351 /* 352 PERFORM DECISION NODE SPLITTING,?EVERY YEAR? 353 */ 354 if(DO_EXT && ((y-1)%EXT_YSTEP) == 0) { 355 bool expos=false; 356 sz = tlist.size(); 357 for(int i=0;i

76 358 bool devoc = false; 359 tree::iterator t = tlist.front(); 360 //for(intj=0;j<8;j++){ 361 for(int j=0;j<4;j++) { 362 unsigned char tchr; 363 float tcf = 0.0f; 364 expos=extendpossible((*t).ext, (unsigned ⤦ Ç char)j, &tchr, tcf); 365 if(expos) { 366 devoc = true; 367 gnode tn = (*t); 368 //memcpy(tn.dev,narr,sizeof(char)*NDEV); 369 tn.ext = tchr; 370 #ifdef IMPL_LEN 371 if(abs(tcf)>0.001) 372 tn.ib = 1; 373 #endif 374 375 tree::iterator fc = ⤦ Ç tr.append_child(t,tn); 376 fc->dec=true; 377 fc->cf = tcf; 378 fc->check_emergency(); 379 fc->pv = tcf; 380 #ifdef INCL_YEAR 381 fc->y = y; 382 #endif 383 #ifdef INCL_TAG 384 fc->opt = false; 385 #endif 386 tlist.push_back(fc); 387 } 388 } 389 if(devoc) 390 tlist.pop_front(); 391 } 392 } 393 394 395 /* 396 PERFORM DEVELOPMENTS SPLITTING;?EVERY YEAR? 397 */ 398 399 if(DO_DEV && ((y-1)%2) == 0) { 400 sz = tlist.size(); 401 for(int i=0;i::iterator t = tlist.front(); 404 for(int j=0;j

77 405 bool devpos=buildpossible((*t).dev, ⤦ Ç &devmask[j][0], narr); 406 if(devpos) { 407 double tp = getprob((*t).dev,narr,1.0,1.0); 408 devoc = true; 409 gnode tn = (*t); 410 // tn.updatecf(); 411 memcpy(tn.dev,narr,sizeof(char)*NDEV); 412 tn.updatecf(); 413 tree::iterator fc = ⤦ Ç tr.append_child(t,tn); 414 // fc->updatecf(); 415 fc->pv = fc->cf; 416 #ifdef INCL_PROB 417 fc->p *= tp; 418 #endif 419 #ifdef INCL_YEAR 420 fc->y = y; 421 #endif 422 fc->dec=false; 423 tlist.push_back(fc); 424 } 425 } 426 if(devoc) 427 tlist.pop_front(); 428 } 429 } 430 431 } 432 433 434 435 #ifdef DO_ROLLBACK 436 sz = tlist.size(); 437 tree::iterator t = tlist.front(); 438 tree::iterator ts = t; 439 double eexp,ecov; 440 vector::iterator> svec; 441 int nz = 0; 442 cout<<"Size="<1) { 444 if(DO_RLBCK_PRINT) 445 cout<parent); 450 for(int j=0;j<=sz;j++) 451 {

78 452 t=tlist.front(); 453 if(j!=sz) { 454 tlist.pop_front(); 455 } 456 if(((t.node->parent) != (ts.node->parent)) || ⤦ Ç ((j==sz) && (t != root))) 457 { 458 if(DO_RLBCK_PRINT) 459 cout<<"t="<<(*t)<<",ts="<<(*ts)<parent->data.dec == false){ 463 if(svec[0].node->data.dec == false){ 464 // First, handle rolling back uncertainty 465 // CALCULATE EXPECTATION 466 #ifndef TEST_MC 467 for(int k=0;k<2;k++) 468 for(int l=0;l<2;l++) 469 for(int i=0;iparent->data.dev, ⤦ Ç (*svec[i]).dev, aex_1[k], aex_1[l]); 471 } 472 #else 473 for(int k=0;kparent->data.dev, ⤦ Ç (*svec[i]).dev, exp(dist(eng)), ⤦ Ç exp(dist(eng)))/(NSAMP*NSAMP); 477 } 478 #endif 479 480 481 // CALCULATE COVARIABCE 482 #ifndef TEST_MC 483 for(int k=0;k<2;k++) 484 for(int l=0;l<2;l++) 485 for(int i=0;iparent->data.dev, ⤦ Ç (*svec[i]).dev, aex_1[k], ⤦ Ç aex_1[l])*(aex_1[k]*aex_1[l]-1.0); 487 } 488 #else 489 double tv1,tv2; 490 for(int k=0;k

79 492 for(int i=0;iparent->data.dev, ⤦ Ç (*svec[i]).dev, tv1, ⤦ Ç tv2)*(tv1*tv2-1.0)/(NSAMP*NSAMP); 496 } 497 #endif 498 ecov -= eexp*exp_return; 499 500 if(DO_RLBCK_PRINT) 501 cout<<"Rolling back to node ⤦ Ç "<<(ts.node->parent->data)<parent->data.pv += ((eexp - ⤦ Ç lambda*ecov)/(1.0+2.0*r_f)); 503 if(DO_RLBCK_PRINT) { 504 cout<<"... updated node is ⤦ Ç "<<(ts.node->parent->data)<parent->data)<data.pv>valmax) { 515 valmax = svec[i].node->data.pv; 516 optstrat=i; 517 } 518 } 519 svec[optstrat].node->data.opt = true; 520 svec[0].node->parent->data.pv += valmax; 521 } 522 if(DO_RLBCK_PRINT) { 523 cout<<"Sibling list:"<

85 43 //cout_ << "[" << obj_.aex << "," << obj_.getsupply() << "," << ⤦ Ç obj_.getdemand()<<","; 44 cout_ <<"["; 45 #ifdef PRINT_CF 46 cout_<< obj_.cf<<","; 47 #endif 48 #ifdef PRINT_PV 49 cout_<< obj_.pv<<","; 50 #endif 51 #ifdef PRINT_SUPDEM 52 cout_<< obj_.getsupply() <<"," << obj _.getdemand()<<","; 53 #endif 54 #ifdef INCL_YEAR 55 cout_<

86 91 } 92 93 unsigned int gnode::getconsdemand() 94 { 95 double dm = 0.0; 96 for(int i=0;i

87 140 //#endif 141 142 if(COUNT_PENALTY && td>ts) 143 cf-=PENALY*YINCR; 144 145 if((ts>getconsdemand()) && COUNT_PROFIT){ 146 cf+=getconsdemand()*PROF*YINCR; 147 } 148 149 }

Bibliography

[1] David G. Luenberger, Investment Science, Oxford University Press, 1998.

[2] Tom Copeland, Vladimir Antikarov, Real Options: A Practitioner's Guide, TEXERE Publishing, 2001.

[3] Bert De Reyck, Zeger Degraeve, Roger Vandenborre, Project options valuation with net present value and decision tree analysis, European Journal of Operational Research, 184 (2008), 341-355.

[4] Mark Grinblatt, Sheridan Titman, Financial Markets and Corporate Strategy, 2nd edition, The McGraw-Hill Companies, 2002.

[5] Adam Borison, Real Options Analysis: Where are the Emperor's Clothes?, draft 05/17/03, presented at the Real Options Conference, Washington, DC, July 2003.

[6] Tom Arnold, Richard L. Shockley, Jr., Real Options Analysis and the Assumptions of the NPV Rule, preliminary draft, 2002.

[7] John Lintner, The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets, The Review of Economics and Statistics, Vol. 47, No. 1 (Feb. 1965), pp. 13-37.

[8] William F. Sharpe, Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk, The Journal of Finance, Vol. 19, No. 3 (Sep. 1964), pp. 425-442.

[9] Kasper Peeters, tree.hh library for C++, http://tree.phi-sci.com/

[10] Ahmed Ben Saida, Shapiro-Wilk and Shapiro-Francia normality tests, http://www.mathworks.com/matlabcentral/fileexchange/13964-shapiro-wilk-and-shapiro-francia-normality-tests/content/swtest.m

[11] Dimitri van Heesch, doxygen v.1.8.1.2, A documentation system for C++, C, Java, Objective-C, Python, IDL, Fortran, VHDL, PHP, C#, and to some extent D, http://www.stacks.nl/~dimitri/doxygen/index.html

[12] R. T. Peters, Financial time and volatility, UvA, Faculty of Science, Dissertation, 2004.

[13] Prasad Kodukula, Chandra Papudesu, Project Valuation Using Real Options: A Practitioner's Guide, J. Ross Publishing, 2006.
