
Rethinking XVA sensitivities: making them universally achievable

With derivatives pricing becoming increasingly complex, a host of new trade valuation adjustments – collectively known as XVAs – have emerged, and regulatory developments have driven demand for the calculation of XVA sensitivities. IBM discusses XVA calculation techniques that can accelerate performance and give banks an advantage over competitors, and the benefits of calculating XVAs using adjoint automatic differentiation rather than the ‘bump-and-run’ technique.

Since the financial crisis, the complexity of derivatives pricing has increased. Banks now need to take into account credit risk, the cost of funding initial margin (IM) and variation margin, and the regulatory capital associated with a trade. This has resulted in the birth of a range of new derivatives valuation adjustments – collectively known as XVAs. To properly value and hedge these XVAs, banks need to be able to calculate thousands of XVA sensitivities in a timely manner. In addition, under the Basel Committee on Banking Supervision’s finalised Basel III capital framework, the approach that allows banks to calculate credit valuation adjustment (CVA) capital while avoiding a more punitive regime is based on the calculation of CVA sensitivities. All of this makes calculating XVA sensitivities accurately and swiftly one of the main challenges banks face.

The long list of adjustments continues to grow, with some more prominent than others. CVA is the difference between the value of a risk-free portfolio and the value of that portfolio taking into account the likelihood that the counterparty will default, while debt valuation adjustment (DVA) reflects the credit risk of the bank writing the contract.
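As a rough illustration – using one common unilateral formulation rather than any particular bank’s convention – CVA can be written as the discounted expected loss arising from the counterparty’s default:

$$ \mathrm{CVA} \;=\; (1-R)\int_0^T \mathbb{E}\!\left[D(t)\,V^+(t)\right]\mathrm{d}PD(t) \;\approx\; (1-R)\sum_{i=1}^{n}\mathbb{E}\!\left[D(t_i)\,V^+(t_i)\right]\bigl(PD(t_i)-PD(t_{i-1})\bigr), $$

where $R$ is the recovery rate, $D(t)$ the discount factor, $V^+(t)$ the positive part of the portfolio value and $PD(t)$ the counterparty’s cumulative default probability. An XVA sensitivity is then a derivative such as $\partial\,\mathrm{CVA}/\partial\theta$, where $\theta$ is an interest rate, foreign exchange rate or credit spread input to the calculation.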
Funding valuation adjustment (FVA) captures the funding cost of uncollateralised trades. It reflects the cost of entering into a deal with a client that is not posting collateral and then hedging that trade in the interbank market, where collateral is typically exchanged between counterparties.

With the advent of mandatory clearing of derivatives and the posting of IM came margin valuation adjustment (MVA). IM is posted on a portfolio basis and on a gross basis by both sides, and is held in a segregated account; MVA reflects the funding cost of that IM.

Practices for calculating XVAs vary across banks. MVA requires the calculation of dynamic IM – that is, the ability to project future IM requirements over the life of the trade. Not even capital valuation adjustment (KVA) – which reflects the cost of regulatory capital throughout the life of a trade – is as straightforward as it sounds. In addition to divergent measures of KVA, banks differ on whether they take into account rules that are in the pipeline but not yet in force, and that will affect the bank over the lifetime of long-dated trades written today.

XVAs can be sensitive both to market risk factors and to the counterparty’s creditworthiness, as represented by the counterparty’s credit spreads. If the market moves, the amount of XVA that needs to be taken as a ‘writedown’ on the books will move as well, and banks are required to set aside capital as a result of this risk. Many larger banks run XVA trading desks that examine the profit-and-loss changes in aggregate XVA on a daily basis. The XVA desk will also attempt to actively manage risk by putting on hedge trades that reduce the magnitude of XVA fluctuations due to market changes. Before putting on any hedge trades, the XVA desk needs to determine the risk factors to which the XVA is sensitive; to hedge its risk effectively, an XVA desk may require many thousands of sensitivities.

Regulatory frameworks

Demand for the calculation of CVA sensitivities is also being driven by the latest developments in the regulatory capital framework. The Fundamental Review of the Trading Book – intended to tweak trading book capital rules and improve the consistency of risk-weighted assets across jurisdictions – was finalised in December 2017, when the Basel Committee released its final revision of the post-crisis regulatory capital rules. The revised framework also includes a new approach for calculating CVA capital: banks can calculate their requirements using either the basic approach (BA-CVA) or the less punitive, sensitivities-based standardised approach (SA-CVA). The inputs to the SA-CVA are the regulatory CVA sensitivities to market risk factors and to counterparty credit spreads. The number of regulatory CVA sensitivities a bank needs to calculate depends on the types of instrument in its portfolio, but “typically, for the clients we deal with, we are looking at the order of 500 sensitivities or so,” says Matthew Dear, risk software consultant at IBM.

Finally, the demand for sensitivity calculations has also been boosted by new rules requiring banks to post IM on non-centrally cleared derivative trades, which came into force last year for the largest internationally active banks. To determine IM requirements, banks can use a regulatory-prescribed schedule-based calculation or an approved model-based calculation. The industry’s standard initial margin model (Simm) is based on the calculation of weighted risk sensitivities.

‘Bump and run’ versus the adjoint automatic differentiation approach

Traditionally, banks have calculated XVA sensitivities using the so-called bump-and-run technique – sometimes referred to as the finite-differences method. Under this approach, an input risk factor – such as an interest rate or a foreign exchange rate – is shifted, and the entire batch process is rerun to determine the effect on the XVA.
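To make the mechanics concrete, the sketch below shows the bump-and-run loop in Python. The ‘batch’ function is a deliberately toy stand-in for a Monte Carlo XVA engine – the function name, risk factors and numbers are illustrative assumptions, not part of any production system – but the structure of the loop is the point.

```python
import numpy as np

def xva_batch(risk_factors, n_paths=10_000, seed=42):
    """Toy stand-in for an XVA batch run: simulate exposure paths driven by the
    input risk factors and return a single XVA-like number. A real engine would
    revalue every trade over thousands of scenarios and hundreds of time steps."""
    rng = np.random.default_rng(seed)          # fixed seed -> common random numbers
    rate, fx = risk_factors["rate"], risk_factors["fx"]
    paths = fx * np.exp((rate - 0.5 * 0.2 ** 2) + 0.2 * rng.standard_normal(n_paths))
    exposure = np.maximum(paths - 1.0, 0.0)    # positive exposure per path
    return exposure.mean() * 0.02              # weight by a flat credit spread

base = {"rate": 0.01, "fx": 1.10}
base_xva = xva_batch(base)

# Bump and run (finite differences): shift one input at a time and rerun the
# entire batch to measure the effect on the XVA.
bump = 1e-4
sensitivities = {}
for name, value in base.items():
    bumped = dict(base, **{name: value + bump})
    sensitivities[name] = (xva_batch(bumped) - base_xva) / bump

print(sensitivities)   # one full batch revaluation per risk factor
```

Each additional sensitivity means another full pass through that loop, which is the scaling problem the article turns to next.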
An XVA desk within a bank may require many thousands of these sensitivity calculations. This means thousands of batch runs, with each run typically requiring the trades to be revalued under thousands of Monte Carlo scenarios and hundreds of time steps.

The computational requirement is further increased by the need to determine forward IM for the trades. For trades facing a central counterparty, the IM is determined using a historical value-at-risk calculation nested within the main XVA calculation; for non-centrally cleared trades, the forward IM can be determined using an International Swaps and Derivatives Association Simm-type approach.

The problem with the bump-and-run approach is that, with thousands of sensitivities – and the requirement to calculate forward IM for the trades – the whole process can be costly and time-consuming.

“We work with clients who are still using the original bump-and-run approach, where they will generate a handful of sensitivities overnight because that’s all the hardware can give them,” says Leo Armer, head of financial risk pre-sales at IBM. “What is going on in the market could be assessed as a race to see how many sensitivities can be generated overnight in a batch run. The Basel Committee is saying around 500, and the traders we work with are talking about thousands, but the current position with the bump-and-run approach is that you might be able to do a couple of dozen and that’s it. So the quicker you can get there, the more your business will benefit from it,” he adds.

One way to calculate XVA sensitivities effectively is to adopt a methodology called automatic – or algorithmic – differentiation, a mathematical technique that can improve speed and accuracy compared with the bump-and-run approach. Automatic differentiation is a set of techniques for computing the derivatives of a function implemented as a computer program. Run in adjoint – or reverse – mode, it delivers the sensitivities to all of the inputs in a single sweep, at a cost that scales with the number of outputs rather than the number of inputs.

“It can be theoretically shown that the cost of running in reverse mode should be smaller than five times the computational cost of a regular run. Thus, by using AAD, it is possible to calculate all of the possible sensitivities for the XVA measure at a cost of five times the normal batch run,” continues Dear.
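As an illustrative sketch of what the adjoint approach looks like in code – using the open-source JAX library for reverse-mode automatic differentiation, purely as an assumed stand-in and not IBM’s implementation – the same toy measure from the earlier example can be differentiated with respect to every input in one backward sweep:

```python
import jax
import jax.numpy as jnp

def xva_batch(risk_factors, normals):
    """The same toy XVA-like measure, written with jax.numpy so it can be
    differentiated; `normals` are pre-drawn standard normal variates."""
    rate, fx = risk_factors["rate"], risk_factors["fx"]
    paths = fx * jnp.exp((rate - 0.5 * 0.2 ** 2) + 0.2 * normals)
    exposure = jnp.maximum(paths - 1.0, 0.0)
    return exposure.mean() * 0.02

normals = jax.random.normal(jax.random.PRNGKey(0), (10_000,))
base = {"rate": 0.01, "fx": 1.10}

# value_and_grad runs the batch once forward and once backward; the backward
# (adjoint) sweep returns the sensitivity to every input in a single pass,
# rather than one full rerun per bumped risk factor.
base_xva, sensitivities = jax.value_and_grad(xva_batch)(base, normals)

print(base_xva, sensitivities)   # sensitivities == {'rate': ..., 'fx': ...}
```

However many inputs the risk-factor dictionary contains, the cost stays within a small constant multiple of a single batch run – the ‘smaller than five times’ bound Dear refers to – whereas the bump-and-run loop grows linearly with the number of risk factors.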
A new solution

Banks have tried to solve the problems related to the bump-and-run approach by using a larger number of grids and more hardware, with Tier 1 banks investing resources in alternative techniques and new technologies in an attempt to solve what is essentially a computational and performance problem. Similarly, many vendors have chosen to address the challenge by shifting the focus to graphics processing units (GPUs) and specialised hardware. However, this has caused banks and other financial institutions to invest in new hardware that is not part of their commodity hardware stack, resulting in additional expenditure.

IBM has taken the approach of looking at a combination of optimisation techniques to gain the required performance acceleration without adding extra hardware. “This was run only on one machine but, obviously, if you had a reasonable number of machines, you can calculate a very high number of sensitivities in a very short time period,” says Dear. “That gives you a huge competitive advantage because if there’s a major market move and everybody is trying to re-hedge, if you can recalculate all of your sensitivities in less than an hour, you have a distinctive advantage over your competitors,” he adds.

IBM has also written a language called Boxy, which allows clients to implement the pricing models for a trade before they are converted into LLVM, a compiler infrastructure framework designed for compile-time, link-time and run-time optimisation of programs. “The pricing model is then compiled on the fly to the particular chip that you are running on. What we are planning at the moment is to implement the pricing models in Boxy and eventually make it user-extensible so that, with appropriate training, clients will be able to develop their own models and put their own pricing models into this framework,” says Dear.

IBM is planning to release the new solution to the market in the second quarter of 2018, and is currently working to increase the product coverage in order to have a standalone product available.