Credit Risk Evaluation Modeling – Analysis – Management


Inaugural dissertation submitted for the degree of Doctor of Economics (Doktor der Wirtschaftswissenschaften) at the Faculty of Economics of the Ruprecht-Karls-Universität Heidelberg. Presented by Uwe Wehrspohn of Eppingen. Heidelberg, July 2002.

This monograph is available in e-book format at http://www.risk-and-evaluation.com. It was accepted as a doctoral thesis at the Faculty of Economics of Heidelberg University, Germany.

© 2002 Center for Risk & Evaluation GmbH & Co. KG, Berwanger Straße 4, D-75031 Eppingen, www.risk-and-evaluation.com

Acknowledgements

My thanks are due to Prof. Dr. Hans Gersbach for lively discussions and many ideas that contributed essentially to the success of this thesis. Among the many people who provided valuable feedback, I would particularly like to thank Prof. Dr. Eva Terberger, Dr. Jürgen Prahl, Philipp Schenk, Stefan Lange, Bernard de Wit, Jean-Michel Bouhours, Frank Romeike, Jörg Düsterhaus, and many colleagues at banks and consulting companies for countless suggestions and remarks. They helped me develop an awareness of the technical, mathematical, and economic problems, and thereby to formulate and realize the standards that render credit risk models valuable and efficient in banks and financial institutions.

Further, I gratefully acknowledge the profound support from Gertrud Lieblein and Computer Sciences Corporation – CSC Ploenzke AG that made this research project possible. My heartfelt thanks also go to my wife Petra for her steady encouragement to pursue this extensive scientific work.

Uwe Wehrspohn

Introduction

In the 1990s, credit risk became the major concern of risk managers in financial institutions and of regulators. This has various reasons:

• Although market risk is much better researched, the larger part of banks' economic capital is generally used for credit risk.
The sophistication of traditional standard methods of measurement, analysis, and management of credit risk might, therefore, not be in line with its significance.

• Triggered by the liberalization and integration of the European market, new channels of distribution through e-banking, financial disintermediation, and the entrance of insurance companies and investment funds into the market, competitive pressure upon financial institutions has increased and led to decreasing credit margins¹. At the same time, the number of corporate bankruptcies stagnated or increased² in most European countries, leading to a post-war record of insolvencies in Germany in 2001³.

• A great number of insolvencies and restructuring activities of banks were influenced by prior bankruptcies of their debtors. In the German market, prominent examples are the Bankgesellschaft Berlin (2001), the Gontard-MetallBank (2002), the Schmidtbank (2001), and many mergers among regional banks⁴ to avoid insolvency or a shutdown by regulatory authorities.

The thesis contributes to the evaluation and development of credit risk management methods. First, it offers an in-depth analysis of the well-known credit risk models Credit Metrics (JP Morgan), Credit Risk+ (Credit Suisse First Boston), Credit Portfolio View (McKinsey & Company), and the Vasicek-Kealhofer model⁵ (KMV Corporation). Second, we develop the Credit Risk Evaluation model⁶ as an alternative risk model that overcomes a variety of deficiencies of the existing approaches. Third, we provide a series of new results about homogeneous portfolios in Credit Metrics, the KMV model, and the CRE model that allow one to better understand and compare the models and to see the impact of modeling assumptions on the reported portfolio risk. Fourth, the thesis covers all methodological steps that are necessary to quantify, analyze, and improve the credit risk and the risk-adjusted return of a bank portfolio.

Conceptually, the work follows the risk management process, which comprises three major aspects: the modeling of credit risk from the individual client to the portfolio (the qualitative aspect); the quantification of portfolio risk and of risk contributions to portfolio risk, as well as the analysis of portfolio risk structures (the quantitative aspect); and, finally, methods to improve portfolio risk and its risk-adjusted profitability (the management aspect).

The modeling process

The modeling process includes the identification, mathematical description, and estimation of influence factors on credit risk. On the level of the single client, these are the definitions of default⁷ and other credit events⁸, the estimation of default probabilities⁹, the calculation of credit exposures¹⁰, and the estimation of losses given default¹¹. On the portfolio level, dependencies and interactions between clients need to be modeled¹².

The assessment of the risk models is predominantly an analysis of the modeling decisions taken and of the estimation techniques applied.

Footnotes:
1 Bundesbank (2001).
2 Creditreform (2002), p. 4.
3 Creditreform (2002), p. 16.
4 Between 1993 and 2000, 1,000 out of 2,800 Volks- und Raiffeisenbanken and 142 out of 717 savings banks ceased to exist in Germany (Bundesbank 2001, p. 59). All of them merged with other banks, so that factual insolvency was avoided in all cases. Note that a shortage of regulatory capital in consequence of credit losses was not the reason for all of these mergers; many were motivated by cost reduction or carried out for other reasons.
5 We refer to the Vasicek-Kealhofer model also as the KMV model.
6 Credit Risk Evaluation model is a trademark of the Center for Risk & Evaluation GmbH & Co. KG, Heidelberg. We refer to the Credit Risk Evaluation model also as the CRE model.
We show that all four models have considerable conceptual problems that may lead to an invalid estimation, analysis, and pricing of portfolio risk. In particular, we identify that the techniques applied for the estimation of default probabilities and related inputs cause systematic errors in Credit Risk+¹³ and Credit Portfolio View¹⁴ if certain very strict requirements on the amount of available data are not met, even if the model assumptions themselves hold. If data is sparse, both models are prone to underestimate default probabilities and, in turn, portfolio risk.

For Credit Metrics and the KMV model, it is shown that both models lead to correct results if they are correctly specified. The concept of dependence that is common to both models – called the normal correlation model – can easily be generalized by choosing a non-normal distribution for joint asset returns. As one of the main results, we prove for homogeneous portfolios that the normal correlation model is precisely the risk-minimal one among all possible generalizations of this concept of dependence. This implies that even if the basic concept of dependence is correctly specified, Credit Metrics and the Vasicek-Kealhofer model systematically underestimate portfolio risk if there is any deviation from the normal distribution of asset returns.

Credit Risk+ has one special problem regarding the aggregation of portfolio risk¹⁵. It is the only model whose authors intend to avoid computer simulations to calculate portfolio risk and to attain an analytical solution for the portfolio loss distribution. For this reason, the authors choose a Poisson approximation of the distribution of the number of defaulting credits in a portfolio segment. As a consequence, each segment implicitly contains an infinite number of credits.

Footnotes:
7 See section I.A.
8 I.e., of rating transitions; see sections I.B.4, I.B.6.c)(4), I.B.7.
9 See section I.B.
10 Section I.C.
11 Section I.D.
12 See section II.A.
13 Section I.B.5.
14 Section I.B.6.
This hidden assumption may lead to a significant overestimation of risk in small segments, e.g. when the segment of very large exposures in a bank portfolio is considered, which is usually quite small. Thus, Credit Risk+ is particularly suited for very large and homogeneous portfolios. However, since the Poisson distribution has unbounded support, at sufficiently high percentiles the reported portfolio losses always exceed the total portfolio exposure.

With the Credit Risk Evaluation model, we present a risk model that avoids these pitfalls and integrates a comprehensive set of influence factors on an individual client's risk and on the portfolio risk. In particular, the CRE model captures influences on default probabilities and dependencies such as the level of country risk, business cycle effects, sector correlations, and individual dependencies between clients. This leads to an unbiased and more realistic estimation of portfolio risk¹⁶.

The CRE model also differs from the other models with respect to its architecture, which is modular in contrast to the monolithic design of the other models. This means that the cornerstones of credit risk modeling – the description of clients' default probabilities, exposures, losses given default, and dependencies – are designed as building blocks that interact in certain ways, but the methods in each module can be exchanged and adjusted separately. This architecture has the advantage that, by choosing appropriate methods in each component, the overall model may be flexibly adapted to the type and quality of the available data and to the structure of the portfolio to be analyzed.

For instance, if the portfolio is large and sufficiently long histories of default data are available, business cycle effects on default probabilities can be assessed in the CRE model. Otherwise, simpler methods of estimating default probabilities can be applied, such as long-term averages of default frequencies.

Footnotes:
15 Section II.A.3.a).
16 Sections I.B.7 and II.A.2.
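The overestimation caused by the Poisson approximation in Credit Risk+ can be checked numerically. The sketch below is not from the thesis – the segment size, default probability, and quantile level are hypothetical – and compares the exact binomial distribution of the number of defaults in a small segment of identical credits with its Poisson approximation. It also verifies that the approximation puts positive probability on more defaults than the segment has credits, i.e. on losses beyond the total segment exposure.

```python
from math import comb, exp, factorial

# Hypothetical small segment: 10 identical credits, each with PD = 2%.
n, p = 10, 0.02
lam = n * p  # intensity of the Poisson approximation (same expected number of defaults)

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

def quantile(pmf, q):
    """Smallest k whose cumulative probability reaches q."""
    k, cum = 0, 0.0
    while True:
        cum += pmf(k)
        if cum >= q:
            return k
        k += 1

q = 0.999
k_binom = quantile(binom_pmf, q)   # exact 99.9% quantile of the number of defaults
k_pois = quantile(poisson_pmf, q)  # Poisson-approximated 99.9% quantile
print(k_binom, k_pois)  # → 2 3: the approximation overstates tail risk in this small segment

# The Poisson approximation also assigns positive probability to more than n defaults,
# i.e. to losses that exceed the total exposure of the segment.
tail_beyond_n = sum(poisson_pmf(k) for k in range(n + 1, 5 * n))
print(tail_beyond_n > 0.0)  # → True
```

With these hypothetical inputs, the exact 99.9% quantile is 2 defaults while the Poisson approximation reports 3; for very large segments the two distributions converge, which is why the approximation works well for large homogeneous portfolios.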
Similarly, country risk typically is one of the major drivers of portfolio risk for internationally operating banks. These banks should therefore use a risk model that can capture its effect¹⁷. Regional banks, on the other hand, might not have any exposures on an international scale and may therefore safely ignore country risk. Moreover, an object-oriented software implementation of the model can directly follow its conceptual layout.
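The building-block idea can be made concrete in code. The following sketch is not from the thesis – the class names and interfaces are hypothetical illustrations – but it shows how default probabilities, exposures, and losses given default can sit behind separate interfaces, so that one estimation method (e.g. a long-term average PD) can be swapped for another (e.g. a business-cycle-adjusted PD) without touching the rest of the model:

```python
from abc import ABC, abstractmethod

# Hypothetical building-block interface: every PD module answers one question.
class DefaultProbabilityModel(ABC):
    @abstractmethod
    def pd(self, client_id: str) -> float: ...

class LongTermAveragePD(DefaultProbabilityModel):
    """Simple method: long-term average default frequency per rating grade."""
    def __init__(self, grade_of: dict, avg_freq: dict):
        self.grade_of, self.avg_freq = grade_of, avg_freq
    def pd(self, client_id):
        return self.avg_freq[self.grade_of[client_id]]

class CycleAdjustedPD(DefaultProbabilityModel):
    """Richer method: scales a base PD by a business-cycle factor."""
    def __init__(self, base: DefaultProbabilityModel, cycle_factor: float):
        self.base, self.cycle_factor = base, cycle_factor
    def pd(self, client_id):
        return min(1.0, self.base.pd(client_id) * self.cycle_factor)

class PortfolioModel:
    """Composes the modules; expected loss = sum over clients of PD * EAD * LGD."""
    def __init__(self, pd_model, exposure_of, lgd_of):
        self.pd_model, self.exposure_of, self.lgd_of = pd_model, exposure_of, lgd_of
    def expected_loss(self, clients):
        return sum(self.pd_model.pd(c) * self.exposure_of[c] * self.lgd_of[c]
                   for c in clients)

# Illustrative two-client portfolio.
grades = {"A": "good", "B": "weak"}
freqs = {"good": 0.01, "weak": 0.05}
exposures = {"A": 100.0, "B": 50.0}
lgds = {"A": 0.4, "B": 0.6}

base_pd = LongTermAveragePD(grades, freqs)
model = PortfolioModel(base_pd, exposures, lgds)
print(model.expected_loss(["A", "B"]))  # 0.01*100*0.4 + 0.05*50*0.6 ≈ 1.9

# Swap in a cycle-adjusted PD module; the portfolio computation is untouched.
downturn = PortfolioModel(CycleAdjustedPD(base_pd, 1.5), exposures, lgds)
print(downturn.expected_loss(["A", "B"]))  # 1.5 * 1.9 ≈ 2.85
```

Exchanging the PD module changes the estimation method while the portfolio-level computation stays the same, which is the practical payoff of the modular design described above.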