XVA

Rethinking XVA sensitivities: Making them universally achievable

With derivatives pricing becoming increasingly complex, a host of new trade valuation adjustments – collectively known as XVAs – have emerged, and regulatory developments have driven demand for the calculation of XVA sensitivities. IBM discusses XVA calculation techniques that can accelerate performance and give an advantage over competitors, and the benefits of calculating XVA sensitivities using adjoint automatic differentiation rather than the 'bump-and-run' technique.

Since the financial crisis, the complexity of derivatives pricing has increased. Banks now need to take into account the creditworthiness of their counterparties, the cost of funding initial margin (IM) and variation margin, and the regulatory capital associated with a trade. This has resulted in the birth of a range of new derivatives valuation adjustments – collectively known as XVAs. To properly value and hedge these XVAs, banks need to be able to calculate thousands of XVA sensitivities in a timely manner. In addition, the standardised approach to credit valuation adjustment (CVA) capital in the Basel Committee on Banking Supervision's finalised Basel III framework – which allows banks to avoid a more punitive regime – is based on the calculation of CVA sensitivities. This makes calculating XVA sensitivities accurately and swiftly one of the main challenges banks face.

The long list of adjustments continues to grow, with some more prominent than others. CVA is the difference between the value of a risk-free portfolio and the value of that portfolio taking into account the likelihood that the counterparty will default. Debt valuation adjustment (DVA) reflects the credit risk of the party writing the contract. Funding valuation adjustment (FVA) captures the funding cost of uncollateralised trades. It reflects the costs of entering into a deal with a client that is not posting collateral and then hedging that trade in the interbank market, where collateral is typically exchanged between counterparties.

With the advent of mandatory clearing of derivatives and the posting of IM came margin valuation adjustment (MVA). IM is posted on a portfolio basis and a gross basis by both sides, and is held in a segregated account. MVA reflects the funding cost of that IM. Practices for calculating XVAs vary across banks. MVA requires the calculation of dynamic IM – that is, the ability to project future IM requirements over the life of the trade. Not even the capital valuation adjustment (KVA) – which reflects the cost of regulatory capital throughout the life of a trade – is as straightforward as it sounds. In addition to divergent measures of KVA, banks differ on whether they take into account rules coming down the pipeline that are not yet online but will impact the bank during the lifetime of long-dated trades written today.

XVAs can be sensitive to both market risk factors and the counterparty's creditworthiness, as represented by credit spreads for the counterparty. If the market moves, the amount of XVA that needs to be taken as a 'writedown' on the books will move as well. Banks are required to set aside capital as a result of this risk. Many larger banks run XVA trading desks that examine the profit-and-loss changes in aggregate XVA on a daily basis. The XVA desk will also attempt to actively manage risk by putting on hedge trades that reduce the magnitude of XVA fluctuations due to market changes. Before putting on any hedge trades, the XVA desk needs to determine the risk factors to which the XVA is sensitive. To effectively hedge its risk, an XVA desk may require many thousands of sensitivities.

Regulatory frameworks
Demand for the calculation of CVA sensitivities is also being driven by the latest developments in the regulatory capital framework. The Fundamental Review of the Trading Book – intended to tweak trading book capital rules and reduce the variability of risk-weighted assets across jurisdictions – was finalised in December 2017, when the Basel Committee released its final revision of the post-crisis regulatory capital rules. The revised framework also includes a new approach for calculating CVA capital. Under it, banks can calculate their requirements using either the basic approach (BA-CVA) or the less punitive sensitivities-based standardised approach (SA-CVA). Inputs to the SA-CVA are the regulatory CVA sensitivities to the market risk factors and the counterparty credit spreads.

The number of regulatory CVA sensitivities a bank needs to calculate depends on the types of instruments in its portfolios, but "typically, for the clients we deal with, we are looking at the order of 500 sensitivities or so," says Matthew Dear, risk software consultant at IBM.

Finally, the demand for sensitivity calculations was also boosted by new rules that require banks to post IM on non-centrally cleared trades, which went online last year for the largest internationally active banks. To determine IM requirements, banks can use a regulatory-prescribed schedule-based or an approved model-based calculation. The industry's standard initial margin model (Simm) is based on the calculation of weighted risk sensitivities.

'Bump and run' versus the adjoint automatic differentiation approach
Traditionally, banks have calculated XVA sensitivities using the so-called bump-and-run technique – sometimes referred to as the finite differences method. Under this approach, an input risk factor – such as an interest rate or a foreign exchange rate – is shifted before the entire batch process is rerun to determine the effect on the XVA.

An XVA desk within a bank may require many thousands of these sensitivity calculations. This means thousands of batch runs, with each run typically requiring the trades to be revalued under thousands of Monte Carlo scenarios and hundreds of time steps. The computational requirement is further increased by the need to determine forward IM for the trades. For trades with a central counterparty, the IM is determined using a historical value-at-risk calculation nested within the main XVA calculation. For non-centrally cleared trades, the forward IM can be determined using an International Swaps and Derivatives Association Simm-type approach.
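As a minimal sketch of the bump-and-run technique described above, the snippet below uses a toy function in place of the full batch – in practice each call would revalue every trade under thousands of Monte Carlo scenarios. The function and names are illustrative, not IBM's implementation:

```python
import numpy as np

def toy_xva(risk_factors: np.ndarray) -> float:
    """Toy stand-in for the full XVA batch: in practice this would
    revalue every trade across thousands of scenarios and time steps."""
    return float(np.sum(np.exp(-0.1 * risk_factors) * risk_factors ** 2))

def bump_and_run(xva_fn, risk_factors: np.ndarray, bump: float = 1e-5) -> np.ndarray:
    """Central finite differences: two complete batch reruns per
    input risk factor, which is what makes the approach so costly."""
    sens = np.empty_like(risk_factors)
    for i in range(risk_factors.size):
        up, down = risk_factors.copy(), risk_factors.copy()
        up[i] += bump
        down[i] -= bump
        sens[i] = (xva_fn(up) - xva_fn(down)) / (2.0 * bump)
    return sens

factors = np.array([1.0, 2.0, 0.5])  # e.g. rate and FX levels
print(bump_and_run(toy_xva, factors))
```

For n risk factors this costs 2n full revaluations – the scaling that the adjoint approach discussed later avoids.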
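The Simm idea of aggregating weighted risk sensitivities can be sketched as follows. The risk weights and correlation below are placeholders, not the actual Isda Simm calibration, which is published separately and updated regularly:

```python
import numpy as np

# Placeholder parameters -- NOT the actual Isda Simm calibration.
RISK_WEIGHTS = np.array([50.0, 70.0, 60.0])  # one weight per risk factor
RHO = 0.5                                    # single illustrative correlation

def simm_style_margin(sensitivities: np.ndarray) -> float:
    """Aggregate weighted sensitivities Simm-style:
    K = sqrt(sum_i WS_i^2 + sum_{i != j} rho * WS_i * WS_j),
    where WS_i = risk_weight_i * sensitivity_i."""
    ws = RISK_WEIGHTS * sensitivities
    # sum over i != j of WS_i * WS_j equals (sum WS)^2 - sum(WS^2)
    cross = RHO * (np.sum(ws) ** 2 - np.sum(ws ** 2))
    return float(np.sqrt(np.sum(ws ** 2) + cross))

print(simm_style_margin(np.array([0.02, -0.01, 0.03])))
```

The point is that margin is driven entirely by the sensitivities – which is why the IM rules feed directly into the demand for fast sensitivity calculation.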

risk.net, April 2018

The problem with the bump-and-run approach is that, with thousands of sensitivities – and the requirement of calculating forward IM for the trades – the whole process can be costly and time-consuming.

"We work with clients who are still using the original bump-and-run approach, where they will generate a handful of sensitivities overnight because that's all the hardware can give them," says Leo Armer, head of financial risk pre-sales at IBM.

"What is going on in the market could be assessed as a race to see how many sensitivities can be generated overnight in a batch run. The Basel Committee is saying around 500, and the traders we work with are talking about thousands, but the current position with the bump-and-run approach is that you might be able to do a couple of dozen and that's it. So the quicker you can get there, the more your business will benefit from it," he adds.

One way to effectively calculate XVA sensitivities is to adopt a methodology called automatic or algorithmic differentiation – a mathematical technique that can improve speed and accuracy compared with the bump-and-run approach. Automatic differentiation is a set of techniques that can be used to numerically evaluate the derivative of a function specified by a computer program. It can be run in two modes – forward and reverse.

"One way of thinking of the XVA sensitivity is that it is the differential of the XVA with respect to the input risk factor. In forward mode, the partial derivatives of all variables with respect to the input risk factor are determined through the simulation and aggregation parts of the batch. These can then be combined using the chain rule to give the overall XVA sensitivity," explains Dear.

Reverse mode is sometimes referred to as adjoint automatic differentiation (AAD). In this mode, the dependent variable to be differentiated – the CVA, for example – is fixed, and the partial derivatives are accumulated by reversing through the code. Reverse mode requires a forward pass through the code, during which all intermediate variables – and the instructions that produced them – are stored. This method has the potential to reduce computational costs by several orders of magnitude: once the reverse pass has been completed, the differentials of the XVA with respect to all possible input risk factors will have been determined. AAD therefore works best where there are a large number of input risk factors and a small number of outputs.

"It can be theoretically shown that the cost of running in reverse mode should be smaller than five times the computational cost of a regular run. Thus, by using AAD, it is possible to calculate all of the possible sensitivities for the XVA measure at a cost of five times the normal batch run," continues Dear.

A new solution
Banks have tried to solve the problems related to the bump-and-run approach by using larger grids and more hardware, with Tier 1 banks investing resources in alternative techniques and new technologies in an attempt to solve what is essentially a computational and performance problem. Similarly, many vendors have chosen to address the challenge by shifting the focus to graphics processing units (GPUs) and specialised hardware. However, this has forced banks and other financial institutions to invest in new hardware that is not part of their commodity hardware stack, resulting in additional expenditure.

IBM has instead looked at a combination of optimisation techniques to gain the required performance acceleration without adding extra hardware. Different techniques are being applied to the simulation and aggregation engines to achieve the necessary acceleration, including automatic differentiation, dynamic compilation, vectorisation and extreme cache locality.

"While other vendors rely on GPUs to get some of the performance acceleration they need, we have dynamically compiled the code using an open-source framework called LLVM that can compile down to any hardware platform that supports it," says Dear. "We could compile and run on GPU architecture if needed, but we are not restricted to this technology – we can compile down to any available hardware the client has and run on that," he adds.

IBM has developed a prototype – first presented at its annual Smarter Risk Summit in London in November 2017 – that offers an alternative approach to solving the problems of performance acceleration and sensitivity calculation. The prototype provides a tangible solution using automatic differentiation and dynamic compilation without being tied to a particular hardware platform or having to rely on GPUs. It was able to calculate 600 XVA sensitivities for 50,000 trades within five to 10 minutes, running on a single laptop.

"This was run only on one machine but, obviously, if you had a reasonable number of machines, you can calculate a very high number of sensitivities in a very short time period," says Dear. "That gives you a huge competitive advantage because if there's a major market move and everybody is trying to re-hedge, if you can recalculate all of your sensitivities in less than an hour, you have a distinctive advantage over your competitors," he adds.

IBM has also written a language called Boxy, which allows clients to implement pricing models for their trades before these are converted into LLVM – a compiler infrastructure framework designed for compile-time, link-time and run-time optimisation of programs.

"The pricing model is then compiled on the fly to the particular chip that you are running on. What we are planning at the moment is to implement the pricing models in Boxy and eventually make it user-extensible so that, with appropriate training, clients will be able to develop their own models and put their own pricing models into this framework," says Dear.

IBM is planning to release the new solution to the market in the second quarter of 2018, and is currently working to increase product coverage so that a standalone product is available later this year, which will be introduced into its existing IBM Algorithmics Integrated Market and Credit Risk framework for new and existing clients.

Thanks to this new solution, the size of the grid that banks currently need to calculate XVA sensitivities will be substantially reduced. From an IT perspective, this translates into a lower total cost of ownership for banks and easier management of their hardware. Moreover, front-office users will be able to re-hedge quickly and efficiently, bringing considerable business benefit as well.

"Banks may use a smaller grid, or they may just do more sensitivities and understand exactly which risk factors their portfolios are sensitive to," says Armer. "Front offices want as much information as they can about their sensitivities, and not just what the Basel Committee told them to do. They want more and more insight into where that portfolio could be hedged, and that will ultimately make the bank money," he adds.

Primarily, this new approach will benefit banks trying to calculate what is driving their P&L. In times of market stress, banks will want to carry out a number of stress tests, and the performance acceleration resulting from the techniques described in this paper will give them an advantage over their competitors.
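The two automatic differentiation modes discussed above can be illustrated on a toy stand-in for the XVA batch. The function and names are hypothetical, not IBM's code; the point is that forward mode needs one sweep per input risk factor, while the adjoint (reverse) sweep recovers all input sensitivities in a single pass:

```python
import numpy as np

def xva_batch(x: np.ndarray) -> float:
    """Toy stand-in for the batch: a = exp(-0.1*x), b = x^2, out = sum(a*b)."""
    return float(np.sum(np.exp(-0.1 * x) * x ** 2))

def forward_mode(x: np.ndarray) -> np.ndarray:
    """Forward mode: seed one input at a time and push tangents through
    the calculation with the chain rule -- one sweep per input."""
    grad = np.empty_like(x)
    for i in range(x.size):
        x_dot = np.zeros_like(x)
        x_dot[i] = 1.0                       # seed the i-th input
        a = np.exp(-0.1 * x); a_dot = -0.1 * a * x_dot
        b = x ** 2;           b_dot = 2 * x * x_dot
        grad[i] = np.sum(a_dot * b + a * b_dot)   # product rule, then sum
    return grad

def reverse_mode(x: np.ndarray) -> np.ndarray:
    """Adjoint (reverse) mode: one forward pass storing intermediates,
    then one backward pass yielding ALL sensitivities at once."""
    a = np.exp(-0.1 * x)                 # forward pass, intermediates stored
    b = x ** 2
    # out = sum(a * b); backward pass, seeding d(out)/d(out) = 1
    ab_bar = np.ones_like(x)             # adjoint of each a[i]*b[i] term
    a_bar = ab_bar * b                   # d(out)/da
    b_bar = ab_bar * a                   # d(out)/db
    return a_bar * (-0.1 * a) + b_bar * (2 * x)   # accumulate d(out)/dx

x = np.array([1.0, 2.0, 0.5])
print(reverse_mode(x))
```

Forward mode scales with the number of inputs, whereas the reverse sweep costs a fixed small multiple of the primal run regardless of how many risk factors there are – the property behind the 'less than five times' bound Dear cites.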

Produced in collaboration with IBM