Deep Representation Learning using Stacked Autoencoder for General Insurance Loss Reserving
Insurance Data Science Conference, ETH Zurich, 14th June 2019
Phani Krishna Kandala, Pricing Actuary, Swiss Re ([email protected], +91-9182472136)
Satya Sai Mudigonda, Senior Tech Actuarial Consultant ([email protected], +91-9603573032)
Agenda
• Introduction
• Key Concepts: Reserving & Neural Networks
• Motivation
• Experimental Setup
• Results
• Next Steps
• Q & A
Introduction
A novel approach to loss reserving based on blended unsupervised and supervised deep neural networks.

[Diagram: INPUT (Feature 1 … Feature N) → STEP 1, UNSUPERVISED LEARNING: autoencoders perform non-linear latent feature (deep representation) extraction → STEP 2, SUPERVISED LEARNING: the extracted latent features from Step 1 are used as the starting point of a multilayer perceptron → OUTPUT: estimate of loss reserves]
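The two-step flow above can be sketched end to end in toy numpy code. Everything below (the synthetic data, network sizes, learning rate, and the linear read-out that stands in for the multilayer perceptron) is an illustrative assumption, not the implementation behind these slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for engineered claims features (50 cells, 6 features) and
# their loss amounts; NOT the paper's data.
X = rng.normal(size=(50, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=50)

# STEP 1 (unsupervised): a one-hidden-layer autoencoder trained by
# gradient descent on the reconstruction error.
n_latent = 3
W_enc = rng.normal(scale=0.1, size=(6, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, 6))
lr = 0.01
for _ in range(300):
    H = np.tanh(X @ W_enc)        # latent representation h = f(x)
    err = H @ W_dec - X           # reconstruction error x_hat - x
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ ((err @ W_dec.T) * (1 - H ** 2)) / len(X)

# STEP 2 (supervised): the extracted latent features become the input of
# the supervised model (a linear read-out here, in place of the MLP).
H = np.tanh(X @ W_enc)
design = np.c_[H, np.ones(len(H))]
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
y_pred = design @ beta            # estimate of the loss amounts
```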
Reserving – Key Concepts
• To determine liabilities to be shown in accounts
• To check adequacy of reserves
• To evaluate business performance
• To measure profitability of business currently being written
• To calculate loss ratios and combined ratios
• To perform insurance valuation

[Figure: claims run-off triangle with accident (occurring) years 2005–2012 as rows and development years 0–7 as columns; the observed upper-left triangle is the TRAIN DATA, the unobserved lower-right triangle is the PREDICTION target]
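The train/prediction split of the run-off triangle shown on the slide can be expressed in a few lines of Python (the years and the observation cut-off follow the slide; the cell values themselves are omitted):

```python
# Accident years 2005-2012, development years 0-7: a cell (ay, dy) is
# observed only if its calendar year ay + dy is at or before the latest
# observed calendar year. The upper triangle is TRAIN DATA, the lower
# triangle is what loss reserving must PREDICT.
accident_years = range(2005, 2013)
dev_years = range(8)
latest = 2012  # last observed calendar year

train, predict = [], []
for ay in accident_years:
    for dy in dev_years:
        if ay + dy <= latest:
            train.append((ay, dy))    # observed claim development
        else:
            predict.append((ay, dy))  # future development to be estimated
```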
Neural Network – Key Concepts
[Diagram: INPUT LAYER → HIDDEN LAYER 1 → HIDDEN LAYER 2 → OUTPUT LAYER]

INPUT LAYER
• Inputs information for the neural network to process; each circle represents one feature
• Contains all the information needed for the program

OUTPUT LAYER
• Brings together information from the hidden layers for output
HIDDEN LAYERS
• These layers do all the processing for the neural network
• More hidden layers can give higher accuracy of the neural network
• Each layer consists of nodes that mimic our brains' neurons
• Nodes receive information from the previous layer's nodes; the inputs are multiplied by weights and a bias is added

Motivation of the paper
Using current applications of:
• Auto Encoders
• Stacked Auto Encoders

the approach offers the ability to calculate loss reserves:
• Without choosing a specific reserving method
• For any class of business
• Independent of the length of the tail
Auto Encoders – Overview
[Diagram: autoencoder with INPUT LAYER (x1 … x4), HIDDEN LAYER (h1, h2) and OUTPUT LAYER (x̂1 … x̂4); the ENCODING computes h = f(x), the DECODING reconstructs x̂ = g(h)]

Stacked Auto Encoders – Overview
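A minimal sketch of the stacking idea: each autoencoder layer is trained greedily on the features produced by the layer below. The toy data, layer sizes, and training details below are assumptions for illustration, not the configuration used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_autoencoder(X, n_hidden, lr=0.01, steps=300):
    """Fit one autoencoder layer on X; return its encoder weights."""
    W_enc = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(steps):
        H = np.tanh(X @ W_enc)      # encode h = f(x)
        err = H @ W_dec - X         # reconstruction error g(h) - x
        W_dec -= lr * H.T @ err / len(X)
        W_enc -= lr * X.T @ ((err @ W_dec.T) * (1 - H ** 2)) / len(X)
    return W_enc

# Greedy layer-wise pretraining: the second autoencoder is trained on the
# hidden features produced by the first (Features I -> h1 -> h2).
X = rng.normal(size=(40, 8))        # toy input (Features I)
W1 = train_autoencoder(X, 4)
H1 = np.tanh(X @ W1)                # first latent layer (Features II)
W2 = train_autoencoder(H1, 2)
H2 = np.tanh(H1 @ W2)               # second latent layer, which would feed
                                    # the softmax/output stage on the slide
```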
[Diagram: stacked autoencoder. The input features x1 … x4 (Features I, plus a +1 bias unit) feed a first hidden layer h¹; its activations (Features II) serve as input to a second autoencoder producing h², which feeds a SOFTMAX CLASSIFIER whose outputs are P(y=0|x), P(y=1|x), P(y=2|x). Each autoencoder stage reconstructs its own input x̂]

Key Research Contribution from the Past
• Neural network embedding of the ODP reserving model – Mario Wüthrich¹
• Insights from Inside Neural Networks – Switzerland Actuaries Association²
1. Gabrielli, A., Richman, R., Wüthrich, M.V. (2018). Neural network embedding of the overdispersed Poisson reserving model. SSRN Preprint.
2. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3226852

Experimental Setup
Data: the data has been taken from the Bjorn Weindorfer¹ paper.
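The cited source is a chain-ladder teaching note. As background on that classical benchmark, a minimal chain-ladder projection can be sketched as follows (toy cumulative triangle, not the data used in these slides):

```python
# Toy 3x3 cumulative claims triangle; None marks unobserved cells.
triangle = [
    [100, 150, 160],   # oldest accident year: fully developed
    [110, 170, None],
    [120, None, None],
]

n = len(triangle)
# Age-to-age development factors: column sums over the accident years
# observed in both development columns.
factors = []
for j in range(n - 1):
    num = sum(row[j + 1] for row in triangle if row[j + 1] is not None)
    den = sum(row[j] for row in triangle if row[j + 1] is not None)
    factors.append(num / den)

# Roll each accident year forward to ultimate and read off the reserve
# (ultimate minus latest observed cumulative claims).
reserve = 0.0
for row in triangle:
    latest = max(j for j, v in enumerate(row) if v is not None)
    ultimate = row[latest]
    for j in range(latest, n - 1):
        ultimate *= factors[j]
    reserve += ultimate - row[latest]
```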
1 - https://www.uio.no/studier/emner/matnat/math/STK4540/h18/course-material/chainladder.pdf

Loss Reserve Predictions
Three different approaches are compared with respect to loss reserve predictions.

INPUT (accident year, development year):
• Neural Network → Result 1
• Auto Encoders + Neural Network → Result 2
• Stacked Auto Encoders + Neural Network → Result 3
Result 1: Reserve Prediction with Neural Network
True Reserve   Predicted Reserve   Bias       Bias Percentage
17352          27365.64            10013.64   57.71

[Chart: reserve value by accident year, true vs predicted]
Result 2: Reserve Prediction with Auto-Encoder + Neural Network
True Reserve   Predicted Reserve   Bias       Bias Percentage
17352          13750.39            -3601.61   20.75

[Chart: reserve value by accident year, true vs predicted]
Result 3: Reserve Prediction with Stacked Auto-Encoder + Neural Network
True Reserve   Predicted Reserve   Bias       Bias Percentage
17352          14737.26            -2614.74   15.06

[Chart: reserve value by accident year, true vs predicted]
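The bias columns in the three result tables follow directly from the true and predicted reserves, as sketched below (our reading of the tables; the slides appear to truncate rather than round the percentage in places):

```python
def reserve_bias(true_reserve, predicted_reserve):
    """Bias and bias percentage as reported in the result tables.

    bias  = predicted - true
    bias% = |bias| / true * 100
    """
    bias = predicted_reserve - true_reserve
    bias_pct = abs(bias) / true_reserve * 100
    return round(bias, 2), round(bias_pct, 2)

# Result 1 row: true reserve 17352, predicted 27365.64
r1 = reserve_bias(17352, 27365.64)   # bias 10013.64, 57.71%
```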
Potential Benefits to Insurer
• Speeds up the prediction process by identifying the most important features
• Useful for learning the two-way and three-way interactions in the claims data, for example between claim type and cause of claim
• Highlights the main reasons for increases or decreases in claims reserves
Next Steps
• Reduce bias by optimising the neural network and autoencoders
• Work on different LoBs without any embedding function(s)
• Work on individual claims data to automatically highlight the predominant features increasing or decreasing the reserves, supporting:
  • Adequate capital allocation
  • Suitable reinsurance product(s) selection
  • Efficient risk management on the insurance risk front
Q & A
• What is novel about this idea?
• Which functions of an insurance company will be able to use this idea?
• What are the benefits to insurance companies?
• Are you using any proprietary software/packages?
• Is this approach used in any other industry?
• How can this approach be improved further?
• What are the limitations of this approach?
• What reserve prediction methods does this idea support?
• How is this approach superior to using neural networks on a standalone basis?
Acknowledgements
• SSSIHL for providing Technology Labs and Actuarial Support
• Dr. Pallav Kumar Baruah, Department of Mathematics and Computer Science
• Arun Kumar K
• Nikhil Rai, M.Tech
• Rohan Yashraj Gupta, Actuarial Research Scholar