Higher Moment Coherent Risk Measures∗
Pavlo A. Krokhmal
Department of Mechanical and Industrial Engineering
The University of Iowa, 2403 Seamans Center, Iowa City, IA 52242
E-mail: [email protected]

April 2007

Abstract

The paper considers modeling of risk-averse preferences in stochastic programming problems using risk measures. We utilize the axiomatic foundation of coherent risk measures and deviation measures in order to develop simple representations that express risk measures via specially constructed stochastic programming problems. Using the developed representations, we introduce a new family of higher-moment coherent risk measures (HMCR), which includes, as a special case, the Conditional Value-at-Risk measure. It is demonstrated that the HMCR measures are compatible with second-order stochastic dominance and utility theory, can be efficiently implemented in stochastic optimization models, and perform well in portfolio optimization case studies.

Keywords: Risk measures, stochastic programming, stochastic dominance, portfolio optimization

1 Introduction

Research and practice of portfolio management and optimization are driven to a large extent by tailoring the measures of reward (satisfaction) and risk (dissatisfaction/regret) of the investment venture to the specific preferences of an investor. While there is broad consensus that an investment's reward may be adequately associated with its expected return, the methods for proper modeling and measurement of an investment's risk are subject to much more pondering and debate. In fact, risk-reward or mean-risk models constitute an important part of investment science and, more generally, of the field of decision making under uncertainty. The cornerstone of modern portfolio analysis was laid by Markowitz (1952, 1959), who advocated identification of the portfolio's risk with the volatility (variance) of its returns.
On the other hand, Markowitz's work led to formalization of the fundamental view that any decision under uncertainty may be evaluated in terms of its risk and reward. Markowitz's seminal ideas are still widely used today in many areas of decision making, and the entire paradigm of bi-criteria "risk-reward" optimization has received extensive development in both directions of increasing computational efficiency and enhancing the models for risk measurement and estimation. At the same time, it has been recognized that the symmetric attitude of the classical Mean-Variance (MV) approach, where both the "positive" and "negative" deviations from the expected level are penalized equally, does not always yield an adequate estimation of the risks induced by the uncertainties. Hence, significant effort has been devoted to the development of downside risk measures and models. Replacing the variance by the lower standard semi-deviation as a measure of investment risk, so as to take into account only "negative" deviations from the expected level, was proposed as early as Markowitz (1959); see also the more recent works by Ogryczak and Ruszczyński (1999, 2001, 2002).

∗Supported in part by NSF grant DMI 0457473.

Among the popular downside risk models we mention the Lower Partial Moment and its special case, the Expected Regret, which is also known as the Integrated Chance Constraint in stochastic programming (Bawa, 1975; Fishburn, 1977; Dembo and Rosen, 1999; Testuri and Uryasev, 2003; van der Vlerk, 2003). Widely known in the finance and banking industry is the Value-at-Risk measure (JP Morgan, 1994; Jorion, 1997; Duffie and Pan, 1997). Being simply a quantile of the loss distribution, the Value-at-Risk (VaR) concept has its counterparts in stochastic optimization (probabilistic, or chance-constrained programming, see Prékopa, 1995), reliability theory, etc.
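To make the downside quantities just mentioned concrete, the following minimal sketch computes the lower standard semi-deviation and the first-order Lower Partial Moment (the Expected Regret) for a sample of returns; the return data and the zero target are purely illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative sample of portfolio returns (made-up data)
returns = np.array([0.04, -0.02, 0.01, 0.07, -0.05, 0.03])

mean = returns.mean()

# Lower standard semi-deviation: penalizes only shortfalls below the mean
semi_dev = np.sqrt(np.mean(np.maximum(mean - returns, 0.0) ** 2))

# First-order Lower Partial Moment around target t (Expected Regret):
# E[max(t - X, 0)]
t = 0.0
expected_regret = np.mean(np.maximum(t - returns, 0.0))

print("lower semi-deviation:", semi_dev)
print("expected regret:", expected_regret)
```

Unlike the variance, both quantities are unaffected by how far the returns rise above the reference level; only the shortfall side of the distribution contributes.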
Yet, minimization or control of risk using the VaR measure has proved to be technically and methodologically difficult, mainly due to VaR's notorious non-convexity as a function of the decision variables. A downside risk measure that circumvents the shortcomings of VaR while offering a similar quantile approach to the estimation of risk is the Conditional Value-at-Risk (CVaR) measure (Rockafellar and Uryasev, 2000, 2002; Krokhmal et al., 2002a). Risk measures that are similar to CVaR and/or may coincide with it are Expected Shortfall and Tail VaR (Acerbi and Tasche, 2002); see also Conditional Drawdown-at-Risk (Chekhlov et al., 2005; Krokhmal et al., 2002b). A simple yet effective risk measure closely related to CVaR is the so-called Maximum Loss, or Worst-Case Risk (Young, 1998; Krokhmal et al., 2002b), whose use in problems with uncertainties is also known as the robust optimization approach (see, e.g., Kouvelis and Yu, 1997).

In the last few years, the formal theory of risk measures has received a major impetus from the works of Artzner et al. (1999) and Delbaen (2002), who introduced an axiomatic approach to the definition and construction of risk measures by developing the concept of coherent risk measures. Among the risk measures satisfying the coherency properties are Conditional Value-at-Risk and Maximum Loss (Pflug, 2000; Acerbi and Tasche, 2002), coherent risk measures based on one-sided moments (Fischer, 2003), etc. Recently, Rockafellar et al. (2006) have extended the theory of risk measures to the case of deviation measures and demonstrated a close relationship between coherent risk measures and deviation measures; spectral measures of risk have been proposed by Acerbi (2002).
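For reference, the convexity-friendly representation of CVaR due to Rockafellar and Uryasev (2000) cited above can be stated as follows (here X denotes a loss variable and α ∈ (0, 1) the confidence level; the notation follows common usage rather than any formula in this excerpt):

```latex
\operatorname{CVaR}_{\alpha}(X)
\;=\;
\min_{\eta \in \mathbb{R}}
\Big\{ \eta + \tfrac{1}{1-\alpha}\, \mathsf{E}\big[(X - \eta)_{+}\big] \Big\},
\qquad (x)_{+} := \max\{x,\,0\},
```

where, under mild conditions on the loss distribution, the minimum is attained at η = VaR_α(X). The objective is convex in both η and the decision variables on which X depends, which is precisely what makes CVaR tractable where VaR is not.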
An approach to decision making under uncertainty that differs from the risk-reward paradigm is embodied by the von Neumann and Morgenstern (vNM) utility theory, which offers a mathematically sound axiomatic description of preferences and construction of the corresponding decision strategies. Along with its numerous modifications and extensions, the vNM utility theory is widely adopted as a basic model of rational choice, especially in economics and the social sciences (see, among others, Fishburn, 1970, 1988; Karni and Schmeidler, 1991). Thus, substantial attention has been paid in the literature to the development of risk-reward optimization models and risk measures that are consistent with expected utility maximization. In particular, it has been shown that under certain conditions the Markowitz MV framework is consistent with the vNM theory (Kroll et al., 1984). Ogryczak and Ruszczyński (1999, 2001, 2002) developed mean-semideviation models that are consistent with stochastic dominance concepts (Fishburn, 1964; Rothschild and Stiglitz, 1970; Levy, 1998); a class of risk-reward models with SSD-consistent coherent risk measures was discussed in De Giorgi (2005). Optimization with stochastic dominance constraints was recently considered by Dentcheva and Ruszczyński (2003); stochastic dominance-based portfolio construction was discussed in Roman et al. (2006).

In this paper we aim to offer additional insight into the properties of axiomatically defined measures of risk by developing a number of representations that express risk measures via solutions of stochastic programming problems (Section 2.1); using the developed representations, we construct a new family of higher-moment coherent risk (HMCR) measures. In Section 2.2 it is demonstrated that the suggested representations are amenable to seamless incorporation into stochastic programming problems.
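As a small illustration of such incorporation, the sketch below solves a scenario-based portfolio problem minimizing CVaR of loss as a linear program; CVaR is the member of the family that yields a pure LP, and the scenario data, seed, and problem sizes are entirely synthetic:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)

# Synthetic scenario returns: S equally probable scenarios for n assets
S, n, alpha = 200, 4, 0.90
r = rng.normal(loc=0.05, scale=0.15, size=(S, n))

# Decision vector x = (w, eta, z): portfolio weights, the auxiliary
# VaR-level variable, and per-scenario shortfalls z_s >= loss_s - eta.
# Objective: eta + 1/((1 - alpha) * S) * sum(z_s)
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])

# Shortfall constraints:  -r_s . w - eta - z_s <= 0   (loss_s = -r_s . w)
A_ub = np.hstack([-r, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)

# Budget constraint: weights sum to one
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]
b_eq = [1.0]

# Long-only weights; eta is free; shortfalls are nonnegative
bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights = res.x[:n]
print("optimal weights:", np.round(weights, 3))
print("minimal CVaR of loss:", round(res.fun, 4))
```

The auxiliary variables z_s linearize the positive-part terms in the CVaR representation, so the formulation adds one variable and one constraint per scenario and remains a standard LP at any scenario count.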
In particular, implementation of the HMCR measures reduces to p-order conic programming and can be approximated via linear programming. Section 2.3 shows that the developed results are applicable to deviation measures, while Section 2.4 illustrates that the HMCR measures are compatible with second-order stochastic dominance and utility theory. The conducted case study (Section 3) indicates that the family of HMCR measures has a strong potential for practical application in portfolio selection problems. Finally, the Appendix contains the proofs of the theorems introduced in the paper.

2 Modeling of risk measures as stochastic programs

The discussion in the Introduction has illustrated the variety of approaches to the definition and estimation of risk. Arguably, the recent advances in risk theory are associated with the axiomatic approach to construction of risk measures pioneered by Artzner et al. (1999). The present endeavor essentially exploits this axiomatic approach in order to devise simple computational recipes for dealing with several types of risk measures by representing them in the form of stochastic programming problems. These representations can be used to create new risk measures tailored to specific risk preferences, as well as to incorporate these preferences into stochastic programming problems. In particular, we present a new family of Higher Moment Coherent Risk (HMCR) measures. It will be shown that the HMCR measures are well-behaved in terms of theoretical properties, and demonstrate very promising performance in test applications.

Within the axiomatic framework of risk analysis, a risk measure R(X) of a random outcome X from some probability space (Ω, F, µ) may be defined as a mapping R : X → ℝ, where X is a linear space of F-measurable functions X : Ω → ℝ.
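To anticipate the construction developed in this section, the HMCR family replaces the expectation of the shortfall in the CVaR representation by its p-norm; the following is a sketch in commonly used notation (with X a loss variable, and p = 1 recovering CVaR), not a formula quoted from this excerpt:

```latex
\operatorname{HMCR}_{p,\alpha}(X)
\;=\;
\min_{\eta \in \mathbb{R}}
\Big\{ \eta + \tfrac{1}{1-\alpha}\, \big\| (X - \eta)_{+} \big\|_{p} \Big\},
\qquad
\|Y\|_{p} = \big(\mathsf{E}\,|Y|^{p}\big)^{1/p}, \quad p \ge 1.
```

Taking p > 1 makes the measure sensitive to higher moments of the shortfall distribution, which is the source of both the name of the family and its reduction to p-order conic programming noted above.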
In a more general setting one may assume X to be a separated locally convex space; for our purposes it suffices to consider X = L_p(Ω, F, P), 1 ≤ p ≤ ∞, where the particular value of p shall be clear from the context. Following the tradition of convex analysis, we call a function f : X → ℝ ∪ {±∞} proper if f(X) > −∞ for all X ∈ X and dom f ≠ ∅, i.e., there exists X ∈ X such that f(X) < +∞ (see, e.g.,