Deep Learning Exotic Derivatives
Gunnlaugur Geirsson
UPTEC F 21002, Examensarbete 30 hp, Januari 2021
Teknisk-naturvetenskaplig fakultet, UTH-enheten, Uppsala universitet

Abstract

Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating a model's partial derivatives with respect to its inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation using a proprietary model of Svenska Handelsbanken AB. The models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated interday using transfer learning. Automatic differentiation approximates sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%. Overall error when predicting sensitivity to implied volatility is found to lie within 10%-40%. Near-identical results are obtained by finite differences and automatic differentiation in both cases. Automatic differentiation does not successfully capture sensitivity to the interday change in contract value, though errors of 8%-25% are achieved by finite differences. Model recalibration by transfer learning proves to converge over 15 times faster, and with up to 14% lower relative error, than training from random initialisation.
The results show that deep learning models can efficiently learn Monte Carlo valuation, and that these can be quickly recalibrated by transfer learning. The deep learning model gradient computed by automatic differentiation proves a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.

Handledare (supervisor): Fredrik Lindman
Ämnesgranskare (subject reader): Thomas Schön
Examinator (examiner): Tomas Nyberg
ISSN: 1401-5757, UPTEC F 21002

Popular science summary (Populärvetenskaplig sammanfattning)

Financial derivatives are contracts whose value depends on some other financial product, for example the price of a share. Stock options are a very common form of derivative: they give the holder of the option the right, but not the obligation, to buy or sell a share at a predetermined price and time. Derivatives such as options are traded heavily on exchanges and other marketplaces around the world, both to speculate on rises and falls in the prices of various assets and to insure against the associated risks.

The value, and hence the price, of a derivative depends on an uncertain future outcome. Under certain assumptions this value can be modelled mathematically to very good effect, and the application of stochastic calculus here has been highly successful. The resulting valuation models can rarely be solved analytically, however, and so-called Monte Carlo methods are a common alternative. These solve for the derivative's value by simulating possible future scenarios and computing the expected value over all outcomes, preferably using several million simulations, which demands both considerable computing power and time. Beyond the value itself, banks and other investors also want to know how it responds to changes in the underlying product: if the price of a share rises, how much does the value of the stock option change?
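The Monte Carlo principle described above (simulate many future scenarios, then average the discounted payoffs) can be sketched for the simplest possible case: a European call under Black-Scholes dynamics, rather than the Phoenix Autocall studied in the thesis. The function names and parameter values below are purely illustrative, and only the Python standard library is used:

```python
import math
import random

def bs_call_price(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call, for reference."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * n(d1) - k * math.exp(-r * t) * n(d2)

def mc_call_price(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo estimate: simulate terminal prices under geometric
    Brownian motion and average the discounted payoffs."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_t - k, 0.0)  # call payoff at maturity
    return math.exp(-r * t) * total / n_paths
```

With 10^5 paths the estimate typically agrees with the closed form to within a few hundredths: the statistical error shrinks only as the square root of the number of simulations, which is why high-accuracy valuation of path-dependent contracts becomes so expensive.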
Computing these sensitivities requires additional simulations, leading either to very slow computations or to large approximation errors. One solution to this problem is to approximate the Monte Carlo valuation with deep learning.

Deep learning models can learn the behaviour of other models by training on existing data. By replicating how the neurons of a brain propagate information, they can learn very complex relationships. These relationships take time to learn, but once trained, deep learning models can perform the same valuation as Monte Carlo in a fraction of the time. Deep learning libraries also provide automatic differentiation, which computes model sensitivities automatically during valuation and thus speeds up the computations further. It is also possible to retrain a deep learning model from one relationship to another faster than training it from scratch, provided the relationships share similar properties.

This work investigates how well the Monte Carlo valuation of a very complicated derivative, the so-called Phoenix Autocall, can be approximated with deep learning. The deep learning approximation of the valuation is evaluated by comparison with the Monte Carlo model, as are the sensitivities computed with automatic differentiation. To make the training process more efficient, a specific parametrisation is also proposed in which most contract-specific parameters and the valuation dates are fixed. Deep learning models are thus trained for specific contracts and market situations, and only for two dates at a time, "today" and "tomorrow".

The results show that deep learning can approximate the Monte Carlo valuation of Phoenix Autocalls very well, with an average error of around 0.1%. It also gives very good approximations of the model sensitivities to the prices of the underlying products, with errors of only about 1.6%. Sensitivity to the variance of the price, however, gives rise to larger errors of 10-40% that require further study.
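The automatic differentiation mentioned above propagates exact derivatives alongside the values of a computation, with no finite-difference step size to tune. The libraries used in the thesis rely on reverse-mode automatic differentiation (backpropagation); as a hedged illustration of the underlying idea only, here is a minimal forward-mode variant using dual numbers, where every name is illustrative and the toy model stands in for a real valuation function:

```python
class Dual:
    """A value paired with its derivative w.r.t. one chosen input.
    Arithmetic rules propagate both parts exactly (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        other = self._wrap(other)  # sum rule: derivatives add
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)  # product rule: f'g + fg'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def toy_model(s):
    # stand-in for a valuation model: f(s) = s^2 + 3s, so f'(s) = 2s + 3
    return s * s + 3 * s

x = Dual(2.0, 1.0)   # seed: d(input)/d(input) = 1
y = toy_model(x)
print(y.val, y.dot)  # value 10.0 and exact sensitivity 7.0, in one pass
```

The derivative falls out of the same pass that computes the value, which is why sensitivities of a trained network are nearly free compared with rerunning a Monte Carlo simulation per bumped input.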
How the value of a Phoenix Autocall depends on the change from one day to the next also proves difficult to learn with deep learning, with errors of about 8-25% in the best case. The proposed date parametrisation is therefore not considered worthwhile. Training specific deep learning models per contract and market situation and then recalibrating them as needed, however, proves very effective: previously trained models learn new dates both faster and to a lower approximation error from less training data than fresh models, with considerable scope for further improvement.

Acknowledgements

I would like to thank my supervisor, Fredrik Lindman, for his valuable guidance and feedback throughout the project, as well as for his help with any difficulties that I encountered. My sincere appreciation also goes to Martin Almqvist and Marcus Silfver for their expertise on derivative valuation models and for help with using Front Arena PRIME. Finally, I would like to thank all the members of the Model Development team at Handelsbanken Capital Markets for their support and for giving me the opportunity to do this project.

Contents

1 Introduction
  1.1 Background
  1.2 Machine learning for derivative pricing
  1.3 Phoenix Autocalls
  1.4 Research questions
  1.5 Delimitations
2 Derivative pricing and valuation theory
  2.1 The Black-Scholes model
  2.2 Monte-Carlo pricing path-dependent derivatives
  2.3 Pricing Phoenix Autocalls
    2.3.1 Contract parameters
    2.3.2 Sensitivities
3 Machine learning
  3.1 Machine learning algorithms
  3.2 Training, validating and testing
  3.3 Artificial neural networks
  3.4 Deep learning
    3.4.1 Backpropagation
    3.4.2 Model evaluation
    3.4.3 Hyperparameter optimisation
    3.4.4 Transfer learning
  3.5 Development libraries
4 Related work
5 Methodology
  5.1 Model and feature assumptions
    5.1.1 Markets and assets
    5.1.2 Phoenix Autocall
  5.2 Final contract parameters
  5.3 Generating training and validation data
    5.3.1 Low accuracy training data
    5.3.2 High accuracy test and validation data
    5.3.3 Data preprocessing
  5.4 Model training and validation
    5.4.1 Activation function
    5.4.2 Volatility intervals
    5.4.3 Retraining for new dates
  5.5 Model evaluation
6 Results
  6.1 Activation function selection
  6.2 Results of training on different volatility subsets
    6.2.1 Price prediction error and volatility
    6.2.2 Sensitivity results
    6.2.3 Sweeping volatility
    6.2.4 Restricting input to reduce boundary effects
  6.3 Results of retraining original models
  6.4 Time performance
7 Analysis
  7.1 Evaluating activation functions
  7.2 Comparing volatility intervals
  7.3 Automatic differentiation for sensitivities
  7.4 Retraining by transfer learning
  7.5 Deep learning speedup
8 Conclusions
  8.1 Research summary
  8.2 Suggestions for future work
  8.3 Concluding remarks
9 Bibliography
A Appendices: Tables of all trained model losses
  A.1 Activation functions
  A.2 Volatility subsets
  A.3 Retrained models
  A.4 Software used

1 Introduction

This section introduces quantitative finance, derivative pricing, and the practical issues arising from Monte Carlo-based valuation models. Machine learning methods are proposed as a possible solution to these issues, and a specific case of Phoenix Autocall valuation is presented, forming the basis for the project goals and research questions.

The other sections of this report are organised as follows: theoretical groundwork for derivative pricing and machine learning is laid in section 2 and section 3 respectively. Relevant previous work applying machine learning methods to derivative pricing is discussed in section 4. Section 5 details the research methodology, including all steps taken and assumptions made.
The results are presented in section 6 and subsequently reflected on in section 7. Finally, section 8 summarises the results, draws conclusions to answer the research questions, and presents suggestions for future work.

1.1 Background

Quantitative finance makes great demands on both speed and accuracy. Tasked with mathematically evaluating the dynamics of financial assets, it requires modelling sources of inherent value, a problem which in general proves exceptionally difficult [1][2][3]. Fortunately, some assets define these sources more explicitly.