
The Journal of Risk Model Validation (65–81) Volume 2/Number 4, Winter 2008/09

Measuring model risk

Philipp Sibbertsen
Department of Economics, Leibniz University Hannover, Königsworther Platz 1, 30167 Hannover, Germany; email: [email protected]

Gerhard Stahl
Talanx AG, Riethorst 2, 30659 Hannover, Germany, and Ulm University, Germany; email: [email protected]

Corinna Luedtke
Department of Economics, Leibniz University Hannover, Königsworther Platz 1, 30167 Hannover, Germany; email: [email protected]

Model risk, as part of operational risk, is a serious problem for financial institutions. Since the pricing of derivatives as well as the computation of an institution's market or credit risk depend on statistical models, applying a wrong model can lead to a serious over- or underestimation of the institution's risk. Because the underlying data-generating process is unknown in practice, evaluating model risk is a challenge. So far, definitions of model risk have been either application-oriented, including risk induced by the statistician rather than by the statistical model, or so research-oriented as to prove too abstract for practical use; in particular, they tend not to be data driven. We introduce a data-driven notion of model risk that retains the features of the research-oriented approach, extends it by a statistical model-building procedure and thus strikes a compromise between the two definitions at hand. We further suggest the application of robust estimates to reduce model risk, and advocate the application of stress tests with respect to the valuation of the portfolio.

1 INTRODUCTION

Model risk emerged as a new risk category with the advent of elaborate and complex mathematical and statistical models during the late 1970s, when Black, Scholes and Merton laid the path for a new field of science: mathematical finance.
Model risk is under the scrutiny of the various stakeholders of so-called internal models: regulators, rating agencies, investors, shareholders, bondholders, chartered accountants and the firm's board. Its importance can therefore hardly be overestimated. A sector report of the UniCreditGroup (2008) states that some investors "tend to apply an across-the-board discount of about 20% to the published numbers"; this tendency is attributed to investor skepticism and a lack of transparency, and it gives the reader a sense of the economic relevance of model risk.

The urge for better models stemmed from a riskier environment for financial institutions after the breakdown of the Bretton Woods system and the steadily growing interdependence of the world economy. Financial institutions were gradually exposed to higher risks and faced new risk categories. At the same time, the huge increase in computer capacity and capabilities made it feasible to compute these models. An overview of the technical progress and its implications for financial markets is given in Marshall (2000). Alexander (2003) and Padoa-Schioppa (2004) show the rationale of regulation, especially of so-called internal models for banks. In 1998, regulatory authorities acknowledged these models for regulatory purposes. Since then they have been widespread in practice.

However, a model is only capable of capturing some features of reality and is subject to the modeler's preferences and abilities. Even highly sophisticated models capturing stylized facts such as volatility clustering or long memory are confronted with mapping errors when implemented in practice. The role of mapping errors is depicted in Figure 1, where X denotes the data set, M is the estimator of the loss distribution, D_ρ is the domain of the risk measure and ρ is the risk measure itself.

[FIGURE 1 Risk measurement procedure (Cont et al (2007)): the estimator M maps the data X into the domain D_ρ, and the risk measure ρ maps D_ρ into R, so that ρ̂ = ρ ∘ M.]
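Read off from the diagram, ρ̂ = ρ ∘ M first estimates a loss distribution from the data and then applies the risk measure to that estimate. A minimal sketch (with hypothetical data; M taken here as the empirical loss distribution and ρ as value-at-risk, purely for illustration) could look as follows:

```python
import numpy as np

def M(returns):
    """Estimator M: map the data X (a return history) to an
    empirical loss distribution (losses = negative returns)."""
    return np.sort(-np.asarray(returns))

def rho(losses, alpha=0.99):
    """Risk measure rho: map a loss distribution into R
    (here the alpha-quantile of losses, ie, value-at-risk)."""
    return float(np.quantile(losses, alpha))

# rho_hat = rho o M : X -> R, computed in one step from the data
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.01, size=1000)  # hypothetical daily returns
rho_hat = rho(M(X))
```

Here ρ̂ is obtained directly from the data, exactly as the composition in the diagram suggests; any model risk in M (the estimated loss distribution) propagates straight into the risk measure estimate.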
The combination ρ̂ = ρ ∘ M : X → R allows for the direct computation of a risk measure estimate. The final output of an internal model is a realization x_{t+h} associated with a forecast distribution:

L(π_{t+h} | I_t)  and  L(π_{t+h} | I_t, Z_t)

respectively, where π_{t+h} denotes the variable of interest, typically a portfolio that is held over a time span h. The forecast is drawn on two different types of information set. The first, I_t, comprises a history of risk factors, eg, a time series of bond prices. The second, Z_t, which is typically used by insurers under Solvency II, is of managerial type, eg, management rules for splitting losses between shareholders and policyholders.

But this is only half of the output. The other half is related to the valuation of π_{t+h}:

v(π_{t+h})

Figure 2, taken from Baumgartner et al (2004), shows the interplay between the model and the various valuations. From this representation one concludes that in the insurance industry mark-to-model valuation is the general approach to identifying the valuation v(π_{t+h}). Compared with the methods applied in the banking industry, an additional accounting step is applied.

[FIGURE 2 Interplay between the model and the various valuations (Baumgartner et al (2004)): reality is mapped into a model of the insurance policy, whose valuation, together with accounting, yields the price/value.]

Note that for both outputs, the forecast and the valuation, models and hence model risk are involved. Bear in mind that forecasting and valuation are two different problems: whereas for the valuation the underlying probability distribution of the model is of interest, in the forecast situation a complete forecast distribution is derived. Derman (1996) was the first to refer to model risk in the field of financial engineering, stating that models give at least a rough approximation of reality.
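The forecast distribution L(π_{t+h} | I_t) can be approximated numerically, for instance by bootstrapping h-step paths from the risk-factor history in I_t. The following sketch uses hypothetical data and a deliberately simple resampling scheme (not a method from the paper):

```python
import numpy as np

def forecast_distribution(history, h, n_sims=10000, seed=1):
    """Approximate L(pi_{t+h} | I_t): resample h past returns per
    path and compound them into an h-period portfolio value."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(history, size=(n_sims, h), replace=True)
    return np.prod(1.0 + draws, axis=1)

rng = np.random.default_rng(0)
I_t = rng.normal(0.0002, 0.01, size=500)  # hypothetical risk-factor history
pi = forecast_distribution(I_t, h=10)     # simulated realizations of pi_{t+h}
```

Conditioning additionally on the managerial information Z_t would amount to transforming each simulated path by the relevant management rules (eg, loss splitting) before compounding.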
Given the relevance of the topic, it is surprising that the number of publications devoted to model risk is rather small. Rebonato (2001) defined model risk as

... the risk of occurrence at a given point in time (today or in the future) of a significant difference between the mark-to-model value of a complex and/or illiquid instrument held on or off the balance sheet of a financial institution and the price at which the same instrument is revealed to have traded in the market – by brokers' quotes or reliable intelligence of third-party market transactions – after the appropriate provisions have been taken into account.

This definition refers only to a single financial instrument and neglects errors made when forecasting the value of the institution's portfolio. Note also that value-at-risk (VaR) models are large-scale models, whereas pricing models are comparatively small. We assume that model risk stems from the discrepancy between the implemented data-generating process and the data that is at hand or that may be gathered in the future. In the following we focus on a definition that comprises estimation errors and misspecification risks; we consider implementation risk only insofar as this type of error might contaminate our data.

The quantification of model risk is also still in its infancy. The problem is most obvious in the context of financial risk management and portfolio valuation, especially of derivative instruments. This paper aims at developing a working definition of model risk and giving an overview of the current practice of model risk quantification. Section 2 focuses on the main types of risk that the term "model risk" comprises. Sections 3 and 4 review the current practice of model risk quantification in the contexts of market risk measurement and derivatives valuation.
Section 5 discusses robust methods for avoiding the model risk caused by estimates that are biased by data contamination, and sketches the application of stress tests. Section 6 concludes.

2 TYPES OF MODEL RISK

Although the importance of model risk is increasingly in the focus of practitioners as well as researchers, there is so far no clear notion of how model risk should be defined. Basically, there are two different, extreme notions of model risk.

A more practice-oriented viewpoint is that everything that might be related in one way or another to the statistical model used is part of the model risk. This includes data contamination as well as a wrong implementation or a wrong choice of starting values. A wrong application of the statistical model is also seen as a source of model risk. In this construct, even the behavior, preferences and abilities of the statistician are seen as part of the model risk. For an overview of this notion of model risk, see Crouhy et al (1998). Although a wrong implementation of the model is definitely a problem and poses a risk for the financial institution, in our opinion it should not be part of the model risk. Therefore, in our setup these sources of risk are treated as part of the operational risk of the financial institution, but not as part of the model risk.

The other line of definition is rather research-oriented and strictly mathematical; see Kerkhof et al (2002), who define a statistical model as a probability space. Although this approach allows for a high degree of generality, it does not seem handy for practical purposes. The probability space uniquely defines a model and thus contains all the necessary information. However, it is a very abstract approach that gives no insight into the actual statistical modeling procedure, mainly because a new parameter space is needed for every parameter constellation and model specification in order to avoid identification problems.
Therefore, the set of valid probability spaces to be evaluated would have to be massive, and it is impossible to confine it to sensible limits in most practical situations. Another drawback is that the approach focuses on distributional properties, whereas in the forecasting situation the forecast distribution is not the relevant information, as we will discuss later.
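One pragmatic way to keep the set of candidate probability spaces within sensible limits is to fix a small, explicit family of parametric models, fit each to the data and report the spread of the resulting risk measure across the family. The sketch below uses hypothetical data; the candidate set and the spread indicator are illustrative choices in the spirit of, but not taken from, Kerkhof et al (2002):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = 0.01 * rng.standard_t(df=5, size=2000)  # hypothetical daily returns

alpha = 0.99
# small, explicit candidate set: each entry is one fitted "probability space"
candidates = {
    "normal":    stats.norm(*stats.norm.fit(data)),
    "student-t": stats.t(*stats.t.fit(data)),
}
# 99% VaR under each fitted candidate model (loss = -return)
var_by_model = {name: float(-d.ppf(1 - alpha)) for name, d in candidates.items()}
# spread across the candidate set as a crude model-risk indicator
model_risk = max(var_by_model.values()) - min(var_by_model.values())
```

Restricting the model set this way trades generality for tractability: the indicator only captures misspecification risk within the chosen family, which is precisely the limitation the abstract definition avoids.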