<<

pwc.co.uk/xai

Explainable AI
Driving business value through greater understanding

#IntelligentDigital

Artificial intelligence (AI) is a transformational $15 trillion opportunity. Yet, as AI becomes more sophisticated, more and more decision making is being performed by an algorithmic ‘black box’. To have confidence in the outcomes, cement stakeholder trust and ultimately capitalise on the opportunities, it may be necessary to know the rationale of how the algorithm arrived at its recommendation or decision – ‘Explainable AI’. Yet opening up the black box is difficult and may not always be essential. So, when should you lift the lid, and how?

Contents

Introduction: The $15 trillion question: Can you trust your AI?

Use case criticality: Gauging the need for explanation

Key considerations for explainability: How to embark on explainability

Business benefits: Turning explainability into a competitive differentiator

Conclusion: XAI is only going to get more important

Appendix

Contacts

Introduction
The $15 trillion question: Can you trust your AI?

AI is growing in sophistication, complexity and autonomy. This opens up transformational opportunities for business and society. At the same time, it makes explainability ever more critical.

AI has entered the business mainstream, opening up opportunities to boost productivity, innovation and fundamentally transform operating models. As AI grows in sophistication, complexity and autonomy, it opens up transformational opportunities for business and society. More than 70% of the executives taking part in a 2017 PwC study believe that AI will impact every facet of business. Overall, PwC estimates that AI will drive global gross domestic product (GDP) gains of $15.7 trillion by 2030.

As businesses’ adoption of AI becomes mainstream, stakeholders are increasingly asking: what does AI mean for me, how can we harness the potential and what are the risks? Cutting across these considerations is the question of trust and how to earn trust from a diverse group of stakeholders – customers, employees, regulators and wider society. There have been a number of AI winters over the last 30 years which have predominantly been caused by an inability of technology to deliver against the hype. However, with technology now living up to the promise, the question may be whether we face another AI winter due to technologists’ focus on building ever more powerful tools without thinking about how to earn the trust of our wider society.

This leads to an interesting question – does AI need to be explainable (or at least understandable) before it can become truly mainstream, and if it does, what does explainability mean?

In this whitepaper we look at explainability for the fastest growing branch of real-world AI, that of machine learning. What becomes clear is that the criticality of the use case drives the desire, and therefore the need, for explainability. For example, the majority of users of recommender systems will trust the outcome without feeling the need to lift the lid of the black box. This is because the underlying approach to producing recommendations is easy to understand – ‘you might like this if you watched that’ – and the impact of a wrong recommendation is low (a few £ spent on a bad film or a wasted 30 minutes watching a programme on catch up). However, as the complexity and impact increase, that implicit trust quickly diminishes. How many people would trust an AI algorithm giving a diagnosis rather than a doctor without having some form of clarity over how the algorithm came up with the conclusion? Although the AI diagnosis may be more accurate, a lack of explainability may lead to a lack of trust. Over time, this acceptance may come from general adoption of such technology leading to a pool of evidence that the technology was better than a human, but until that is the case, algorithmic explainability is more than likely required.

The executive view of AI on trust: 67% of the business leaders taking part in PwC’s 2017 Global CEO Survey believe that AI and automation will impact negatively on stakeholder trust levels in their industry in the next five years. Source: PwC 20th Annual CEO Survey, 2017

Emerging frontier
The emerging frontier of AI is machine learning (ML). For the purposes of this paper, we define ‘Machine Learning’ as a class of learning algorithms exemplified by Artificial Neural Networks, Decision Trees, Support Vector Machines, etc.: algorithms that can learn from examples (instances) and can improve their performance with more data over time. Through machine learning, a variety of ‘unstructured’ data forms including images, spoken language, and the internet (human and corporate ‘digital exhaust’) are being used to inform medical diagnoses, create recommender systems, make investment decisions and help driverless cars see stop signs.

Exhibit 1 | Classifying AI algorithms: AI divides into rule based and non rule based approaches; non rule based ML spans supervised, unsupervised and reinforcement learning. Source: PwC

We primarily focus on machine learning, a particular class of AI algorithm, because:
i) ML is responsible for the majority of recent advances and renewed interest in AI, and
ii) ML is a statistical approach to AI that by its very nature can be difficult to interpret and validate.

Operating in the dark
The central challenge is that many of the AI applications using ML operate within black boxes, offering little if any discernible insight into how they reach their outcomes. For relatively benign, high volume, decision making applications such as an online retail recommender system, an opaque, yet accurate algorithm is the commercially optimal approach. This is echoed across the majority of current enterprise AI, which is primarily concerned with showing adverts, products, social media posts and search results to the right people at the right time. The ‘why’ doesn’t matter, as long as revenue is optimised.

This has driven an approach where accuracy, above all else, has been the main objective in machine learning applications. The dominant users and researchers (often in different parts of the same large technology firms) have been concerned with the development of ever more powerful models to optimise current profits, and pave the way for future revenue streams such as self-driving cars.

In conversations with clients, we often refer to this approach (perhaps unfairly!) as ‘machine learning as a competition’, referencing the popular website1 where teams compete to build the most accurate machine learning models. In our view, this is a one dimensional vision of machine learning applications, where the biggest, latest, most complex methods vie for supremacy on the basis of a simple mathematical metric.

But what if the computer says ‘No’? The absurdity of inexplicable black box decision making is lampooned in the famous (in the UK at least) ‘Computer says No’ sketch2. It is funny for a number of reasons, not least that a computer should hold such sway over such an important decision and not in any way be held to account. There is no way of knowing if it’s an error or a reasonable decision. Whilst we have become accustomed to (non-AI) algorithmic decisions being made about us, despite the potential for unfairness, the use of AI for ‘big ticket’ risk decisions in the finance sector, diagnostic decisions in healthcare and safety critical systems in autonomous vehicles has brought this issue into sharp relief. With so much at stake, decision taking AI needs to be able to explain itself.

1 https://www.kaggle.com/
2 https://en.wikipedia.org/wiki/Little_Britain

Building trust
If capitalising on the $15 trillion AI opportunity depends on understanding and trust, what are the key priorities?

Explainable AI (or ‘XAI’) is a machine learning application that is interpretable enough that it affords humans a degree of qualitative, functional understanding, or what has been called ‘human style interpretations’. This understanding can be global, allowing the user to understand how the input features (the term used in the ML community for ‘variables’) affect the model’s output with regard to the whole population of training examples. Or it can be local, in which case it explains a specific decision.

Explainable AI looks at why a decision was made so AI models can be more interpretable for human users and enable them to understand why the system arrived at a specific decision or performed a specific action. XAI helps bring transparency to AI, potentially making it possible to open up the black box and reveal the full decision making process in a way that is easily comprehensible to humans.

Different groups have varying perspectives and demands on the level of interpretability required for AI. Executives are responsible for deciding the minimum set of assurances that need to be in place to establish best practices and will demand an appropriate ‘shield’ against unintended consequences and reputational damage. Management require interpretability to gain comfort and build confidence that they should deploy the system. Developers will therefore need AI systems to be explainable to get approval to move into production. Users (staff and consumers) want confidence that the AI system is accurately making (or informing) the right decisions. Society wants to know that the system is operating in line with basic ethical principles in areas such as the avoidance of manipulation and bias. Organisations are facing growing pressure from customers and regulators to make sure their AI technology aligns with ethical norms, and operates within publicly acceptable boundaries.

A particular source of concern is the use of models that exhibit unintentional demographic bias. The use of explainable models is one way of checking for bias and decision making that doesn’t violate ethical norms or business strategy.

Organisations have a duty to ensure they design AI that works and is robust. Adapting AI systems to fall in line with a responsible technology approach will be an ongoing challenge. PwC is helping organisations consider the ethics, morality, and societal implications of AI through Responsible AI (PwC 2017).

Benefits of interpretability
There are significant business benefits of building interpretability into AI systems. As well as helping address pressures such as regulation, and adopt good practices around accountability and ethics, there are significant benefits to be gained from being on the front foot and investing in explainability today. These include building trust – using explainable AI systems provides greater visibility over unknown vulnerabilities and flaws and can assure stakeholders that the system is operating as desired. XAI can also help to improve performance – understanding why and how your model works enables you to fine tune and optimise the model. How could better insights into business drivers such as revenue, cost, customer behaviour and employee turnover, extracted from decision making AI, improve your strategy? Further benefits include enhanced control – understanding more about system behaviour provides greater visibility over unknown vulnerabilities and flaws. The ability to rapidly identify and correct mistakes in low criticality situations adds up if applied across all intelligent automation.

Use case criticality
How far does your business need to go? When AI is used to target consumers through advertising, make investment decisions or drive cars, the required levels of interpretability will clearly vary. We believe that there are three key factors to consider when determining where interpretability is required and to what level (see Exhibit 2).

Exhibit 2 | Gauging the required level of interpretability

The type of AI
The type of AI used is the major determining factor in what level of interpretability is feasible and which techniques can be deployed. The major distinction is between rules and non-rules based systems. Non-rules based models include ML algorithms which fall into three broad categories (see Exhibit 1).

The type of interpretability
There are two dimensions to interpretability. Transparency helps shed light on black box models, whilst explainability helps organisations to rationalise and understand AI decision making:
• Explainability – ‘why did it do that?’
• Transparency – ‘how does it work?’

The type of use
The degree to which an organisation is required to be able to explain how the system works depends on the nature of the use case. There are six key domains for evaluating the criticality of use cases, determined by three factors:
• Use case considerations;
• Enterprise considerations;
• Environmental considerations.

Source: PwC

The way forward
In this paper, we outline why interpretability is now vital to capitalising on AI, the key considerations for judging how explainable your AI model must be, and the business benefits from making explainability a priority. PwC’s use case criticality framework helps address risks associated with a given use case and our assessment recommends optimal outcomes for interpretability, validation and verification, rigour and risk management (including bespoke controls and governance structure) tailored to your organisation. As with Responsible AI, the objective of XAI isn’t to stifle or slow down innovation, but rather to accelerate it by giving your business the assurance and platform for execution you need to capitalise on the full potential of AI.

The more critical the use case, the more interpretability will be required. However, the need to get inside the black box may limit the scope of the AI system – how can you balance trade-offs in areas such as increasing accuracy and performance while improving interpretability?

A thorough assessment of the utility and risk of a use case informs a set of recommendations around interpretability and risk management, which helps drive executive decision making to optimise performance and return on investment.

Use case criticality: Gauging the need for explanation

The importance of explainability doesn’t just depend on the degree of functional opacity caused by the complexity of your machine learning models, but also on the impact of the decisions they make.

Not all AI is built equal, so it’s important to think about why and when explanations are useful. Requiring every AI system to explain every decision could result in less efficient systems, forced design choices, and a bias towards explainable, but less capable and versatile outcomes.

For each AI use case, the verification and validation process may require different elements of interpretability, depending on the level of rigour required. Beyond the risk, it’s important to consider commercial sensitivities. Opening the AI black box would make it comprehensible for users, but it could also give away valuable intellectual property.

Exhibit 3 | The need for Explainability

Working as intended?
A primary driver for model interpretability is the need to understand how a given model makes predictions, while ensuring that it does so according to the desired specifications and requirements demanded of it.

How sensitive is the impact?
The need for interpretability also depends on the impact. For an AI system for targeted advertising, for example, a relatively low level of interpretability could suffice, as the consequences of it going wrong are negligible. On the other hand, the interpretability required for an AI based diagnosis system would be significantly higher. Any errors could not only harm the patient, but also deter adoption of such systems.

Are you comfortable with the level of control?
The other key piece in the jigsaw is the level of autonomy. Does the AI system make decisions and act on them directly, or do its outputs function as mere recommendations to human users who can then decide whether or not to follow them? Explainable factors can be used for a level of rules based control or to flag to humans automatically. It’s important to be able to fully understand a system before allowing it to make business decisions without human intervention.

Use case criticality
All these various considerations come together to determine whether explainability is necessary and the level of rigour that would need to be applied.

Source: PwC

Exhibit 4 outlines the main use case criticality evaluation criteria across six domains. In practice, the criticality of use case explainability is driven predominantly by three economic factors:

1. The potential economic impact of a single prediction;

2. The economic utility of understanding why the prediction was made with respect to the choice of actions that may be taken as the result of the prediction;

3. The economic utility of the information gleaned from understanding trends and patterns across multiple predictions (management information).

However, organisations must place a higher value on the importance of factors beyond the economic and technical drivers, such as executive risk, reputation, and rigour.

Exhibit 4 | Use case criticality components

Revenue – The total of the economic impact of a single prediction, the economic utility of understanding why a single prediction was made, and the intelligence derived from a global understanding of the process being modelled.

Rate – The number of decisions that an AI application has to make, e.g. two billion per day versus three per month.

Rigour – The robustness of the application: its accuracy and ability to generalise well to unseen data.

Regulation – The regulation determining the acceptable use and level of functional validation needed for a given AI application.

Reputation – How the AI application interacts with the business, stakeholders, and society and the extent to which a given use case could impact business reputation.

Risk – The potential harm due to an adverse outcome resulting from the use of the algorithm that goes beyond the immediate consequences and includes the organisational environment: executive, operational, technology, societal (including customers), ethical, and workforce.

Source: PwC

Exhibit 5 outlines PwC’s approach to assessing the criticality of an AI use case. Explainable AI works in conjunction with PwC’s overarching framework for the best practice of AI: Responsible AI, which helps organisations deliver on AI in a responsible manner. Upon evaluation of the six main criteria of use case criticality, the framework will recommend for new use cases the optimal set of recommendations at each step of the Responsible AI journey to inform Explainable AI best practice for a given use case. For existing AI implementations, the outcome of the assessment is a gap analysis showing an organisation’s ability to explain model predictions with the required level of detail compared to PwC Explainable AI leading practice, and the readiness of an organisation to deliver on AI (PwC Responsible AI, 2017).

The gap analysis provides a view that informs the trade-off between prediction accuracy and explainability. If an organisation is ahead of the required level for the ability to explain, this would suggest the organisation has room to trade off some level of additional explainability for increased model accuracy. In the same context, a scenario where the ability to explain falls below the required level would result in a need to reduce the model prediction accuracy in order to achieve greater explainability.

Exhibit 5 | Use case criticality: across the Strategy, Design, Develop and Operate stages, the six use case criticality criteria (Revenue, Rate, Rigour, Regulation, Reputation, Risk) are assessed to determine the interpretability feasibility and optimal techniques, the required level of validation and verification, the level of ongoing risk management (including appropriate controls and governance), and the ways you can reduce your risk profile. The outcomes are expressed as gaps between the required and current levels of the ability to explain and of organisational readiness.

Source: PwC

Key considerations for explainability: How to embark on explainability

Given a compelling rationale for incorporating explainability into an application, how do you choose the appropriate machine learning algorithm, explanation technique, and method for presenting the explanation to a human?

Explainable AI makes it possible to open up the black box and reveal the aspects of the decision making process that provide a meaningful explanation to humans. This, however, comes with the need for additional software components and application design considerations.

Explainable by design
As with most engineering processes, you must consider the capabilities your system requires at the early stages of the design phase. Explainability is no different – it needs to be considered up front and embedded into the design of the AI application. It affects the choice of machine learning algorithm and may impact the way you choose to pre-process data. Often this comes down to a series of design trade-offs.

Trade-off between performance and explainability
Explainable systems usually incorporate some sort of model interpreter. We can think of interpretation as a method for mapping a concept (e.g. ‘cat’) to input features that a human can make sense of (e.g. a group of pixels representing whiskers). The explanation is the collection of interpretable features that contribute to the decision (e.g. whiskers + tail + collar => cat). Thus, explainability is linked directly to model interpretability – the degree to which the interpreter can assign interpretable features to a model prediction.

Interpretability is a characteristic of a model that is generally considered to come at a cost. As a rule of thumb, the more complex the model, the more accurate it is, but the less interpretable it is (see Exhibit 6: Relative explainability of learning algorithms). Complexity is primarily driven by the class of machine learning algorithm used to generate a model (e.g. Deep Neural Network vs Decision Tree) as well as its size (e.g. the number of hidden layers in a neural network).
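This rule of thumb is easy to test empirically. The sketch below is illustrative only and not part of PwC’s framework: it compares a shallow, human-readable decision tree with a larger random forest on a public scikit-learn dataset, where the dataset choice, model sizes and resulting scores are assumptions made purely for demonstration.

```python
# Illustrative sketch: a small, interpretable decision tree versus a larger
# ensemble on the same public dataset (assumes scikit-learn is installed).
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow tree: every decision path can be read by a human, typically less accurate.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Forest of 200 trees: usually more accurate, far harder to inspect directly.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```

The size of the accuracy gap varies from problem to problem; the point is simply that the more transparent model is the one whose logic can be read end to end.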

Some models, however, such as decision trees, are highly amenable to explanation. It is possible to build commercially useful models where the entire decision process can be diagrammatically illustrated. If the model becomes too large for useful graphical representation, the tree structure of the model means that interpreter software can trace clear decision paths through the model and extract the key determinants of a prediction. Neural networks, on the other hand, whilst amenable to graph analysis, contain many more connections and have more subtle properties with respect to node interactions that are inherently difficult to interpret.

It’s worth noting that some researchers are challenging this long held view (such as Montavon et al. 2017), arguing that recent advances in interpreting neural networks allow users deeper insights, unavailable from simple models, precisely because of their complexity. This makes intuitive sense when, as we shall see later, with the features that comprise Deep Neural Nets (DNNs), explanations can include representations of complex concepts as images that can’t be meaningfully represented by a simple linear model.

Exhibit 6 | Relative explainability of learning algorithms. Source: DARPA

Exhibit 7 | Feature importances in investment product suitability

Suitability metre: likely to be unusable → likely to be usable

Top contributing factors (factor – contribution):
1. Age – 15.0
2. Risk profile – 13.6
3. Annual income – 10.5
4. Expenditure – 9.0
5. Assets – 3.4
6. Liabilities – 3.0
7. Expected retirement age – 3.0
8. Amount invested – 2.1
9. Product sold – 2.1
10. Channel – 1.9

Source: PwC
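To make the tree-interpretation point above concrete, here is a minimal sketch (an illustration, not PwC’s tooling) of how interpreter software can expose a decision tree’s full logic and the path behind a single prediction, assuming scikit-learn and a public dataset:

```python
# Minimal sketch: reading a decision tree's full logic and one decision path.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Every split threshold and leaf decision is visible, so the whole decision
# process can be read (or diagrammed) directly.
print(export_text(model, feature_names=list(data.feature_names)))

# For a single prediction, decision_path() returns exactly the nodes traversed,
# i.e. the key determinants of that one decision.
sample = data.data[:1]
node_indicator = model.decision_path(sample)
print("nodes visited for this sample:", node_indicator.indices)
```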

Exhibit 8 | Pixel importance in explaining image recognition

Source: Tulio Ribeiro et al (2016)

Exhibit 9 | Text importance to explain article saliency

Source: PwC, Original article: Bloomberg

Interpretability is a necessary but not sufficient condition for explanation. A model interpreter may generate a representation of a decision process, but turning this into a useful explanation can be challenging for the following reasons:

The limits of explainability
The type of learning algorithm that generates a model is a key factor in determining explainability. But the type of explanation required, and the type of input data used in the model, can be equally important.

Feature importances
Most explanations are limited to a list or graphical representation of the main features that influenced a decision and their relative importance. Exhibits 7, 8 and 9 show typical approaches to explanation presentation.

Presenting feature importances in these ways largely ignores the details of interactions between features, so even the richest explanations based on this approach are limited to relatively simple statements. Domain specific logic can be applied to explanations and presented as text with a natural language generation (NLG) approach. PwC has successfully implemented this method in domains such as corporate credit rating, in which rich textual explanations are automatically generated. Here is an excerpt from an automatically generated credit report: ‘Given the company’s financials, media references linking it with acquisitions is associated with credit strength’. As you can see, being automatically generated, the grammar is not perfect; however, statements such as these can be aggregated and more domain knowledge applied to build ever more complex explanations.

Problem domain
Certain types of problem can’t be readily understood by quantifying a handful of factors. For example, in the field of genomics, the prediction is usually almost entirely driven by combinations of nucleotides. Here, simple feature importance conveys little by way of an explanation, although richer explanatory representations can be informative (see Vidovic et al 2016).

Data preprocessing
Many ML applications employ significant data pre-processing to improve accuracy. For example, dimensionality reduction through principal component analysis (PCA), or using word vector models on textual data, can obscure the original human meanings of the data, making explanations less informative.

Correlated input features
When highly correlated training data is available, correlates are routinely dropped during the feature selection process or simply swamped by other correlates with more predictive power. Hence ambiguity concerning the latent factor behind the correlates is hidden from the explanation, potentially leading the user to an incorrect conclusion.

Type of prediction
Certain types of model prediction are easier to explain than others. Binary classifiers are the easiest (i.e. ‘Is it X or is it Y?’). We can talk about factors that push the decision one way or the other. This can be extended to ordinal multilabel classifiers (e.g. predicting credit ratings) and regressors (e.g. predicting store sales) that have a natural ‘direction’ in the output (magnitude of risk or revenue). This allows straightforward application of domain specific knowledge to enrich explanations. In contrast, multi-label classifiers in domains where there is little or no inherent order (as in many image classification problems) are generally limited to ‘one vs all’ type explanations.
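The template-driven NLG idea described under ‘Feature importances’ above can be sketched in a few lines. The snippet below is purely illustrative and not PwC’s credit-rating system: the feature names, weights, templates and wording are invented to show how ranked feature importances might be rendered as readable statements.

```python
# Hypothetical sketch: turning feature importances into templated sentences.
TEMPLATES = {
    "media_acquisition_mentions": "media references linking it with acquisitions is associated with credit {direction}",
    "leverage_ratio": "its leverage ratio is associated with credit {direction}",
}

def explain(feature_importances, top_n=2):
    """Render the top contributing features as simple English statements."""
    ranked = sorted(feature_importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    sentences = []
    for name, weight in ranked[:top_n]:
        direction = "strength" if weight > 0 else "weakness"
        sentences.append("Given the company's financials, " + TEMPLATES[name].format(direction=direction) + ".")
    return " ".join(sentences)

# Example with invented importances for a single company.
print(explain({"media_acquisition_mentions": 0.42, "leverage_ratio": -0.17}))
```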

Human explanations are often flawed
It’s worth noting that, to some degree, human explanations suffer from the limits described above. We are prone to oversimplification and cognitive biases, and have difficulty explaining abstract concepts unless specific language already exists. Whilst we can easily recognise the difference between many common items, we can have difficulty defining the differences (e.g. we can see a pair of twins are different, but can’t quite explain why). More worryingly, research by psychologists such as Gazzaniga3 (2012) shows that human post-hoc explanations of their own behaviour can be grossly inaccurate, with subconscious processes creating consistent narratives to support a positive sense of self rather than reporting the ‘facts’. Thus, even the best intentioned human explanations can be completely unreliable, unbeknownst to the explainer.

What explanation techniques are available?
A number of methods for generating explanations exist, ranging from classic black box analysis approaches that have been used in science and engineering for generations, to the latest methods designed for Deep Neural Networks (DNNs). We cannot list these exhaustively, but provide a summary of the more popular approaches we see used by the machine learning community, as well as some promising recent developments.

We can broadly group explanation techniques into model-agnostic approaches and learning algorithm specific approaches. Model-agnostic approaches are essentially ‘black box’ explainers and can in principle be applied to any ML model. They do not need to see what is going on under the hood, just tweak the inputs and observe the effects. This can come at the cost of less explainability than algorithm specific techniques that more directly probe a model’s inner workings. Nonetheless, these approaches are often very effective and straightforward to implement.

Sensitivity analysis
This is an approach that is applied in many domains to understand the behaviour of not just models, but any opaque, complex system, such as electrical circuits. Whilst there are a number of formal approaches designed to give particular statistical insights, the simplest approach is to marginally alter (perturb) a single input feature and measure the change in model output. This gives a local, feature specific, linear approximation of the model’s response. By repeating this process for many values, a more extensive picture of model behaviour can be built up. This approach is often extended to Partial Dependence Plots (PDP) or Individual Conditional Expectation (ICE) plots to give a global graphical representation of single feature importances. Extending basic sensitivity analysis into higher dimensions is trivial: multiple rather than single features are perturbed to build a composite picture of feature importances.
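As a concrete illustration of the perturbation idea just described, the sketch below (not taken from the paper; the model, dataset and step sizes are arbitrary choices) shifts a single feature of one instance and records how a fitted classifier’s predicted probability moves:

```python
# Illustrative sketch: basic single-feature sensitivity analysis.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def sensitivity(model, x, feature, deltas):
    """Local, feature-specific picture of the model's response around instance x."""
    baseline = model.predict_proba([x])[0, 1]
    responses = []
    for d in deltas:
        x_perturbed = x.copy()
        x_perturbed[feature] += d          # perturb one input feature only
        responses.append(model.predict_proba([x_perturbed])[0, 1] - baseline)
    return responses

x = X[0]
deltas = np.linspace(-1.0, 1.0, 5) * X[:, 3].std()   # perturbation sizes scaled to the feature
print(sensitivity(model, x, feature=3, deltas=deltas))
```

Repeating this over many instances and averaging the responses is, in essence, how a partial dependence plot is built.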

3 Who’s in Charge?: Free Will and the Science of the Brain

PwC | Explainable AI | 13 The benefits of this method are its Shapley Additive applied to individual trees in the forest simplicity and ease of implementation. Explanations – SHAP6 and the feature importances aggregated. In many problem domains, the results Thus, random forests are highly Similarly to LIME, SHAP is a local are very intuitive. It works particularly interpretable models with high accuracy surrogate model approach to well for simple models with smoothly that can be understood both globally establishing feature importance. varying behaviour and well separated and locally. This makes random forests It uses the game theoretic concept of features. However, its simplicity means the ‘go to’ algorithm in many Shapley values to optimally assign that it can be applied to complex models commercial applications. A widely used feature importances. The Shapley such as DNNs where it is effective for open source tree interpreter package value of a feature’s importance is its extracting pixel importance in image ‘treeinterpreter’ is available that makes average expected marginal recognition explanations. this approach relatively straightforward contribution after all possible feature to implement7, although significant combinations have been considered. It’s important to note that sensitivity domain expertise may be required for analysis gives explanations for the The Shapley value guarantees to multi-label classification problems. variation in the model output rather perfectly distribute the marginal effect than the absolute value. Usually this of a given feature across the feature provides sufficient explanation: the Neural Network values of the instance. Thus SHAP question ‘what makes this image more Interpreters currently produces the best possible cat-like?’ is essentially indistinguishable Neural networks, particularly DNNs, are feature importance type explanation from ‘what makes this image a cat?’. characterised by their complexity and possible with a model agnostic consequently are widely considered This approach has several drawbacks: it approach. This however comes at a cost: difficult to interpret. In our opinion, the doesn’t directly capture interactions the computational requirements of issue is more one of a lack of amongst features, and simple sensitivity exploring all possible feature generalisability. There is no inherent measures can be too approximate. This combinations grow exponentially with barrier to gaining insight into the inner can be potentially problematic for the number of input features. For the workings of neural networks, but it often discontinuous features such as vast majority of problems, this makes needs to be tackled on a case by case categorical information and one hot complete implementation of this basis requiring significantly more encoding frequently used in natural approach impractical and expertise and effort than say the language processing. approximations must suffice. relatively trivial exercise of extracting The methods outlined above can be global feature importances from a Local Interpretable Model applied to any class of model, however, random forest. – Agnostic Explanations – richer and more accurate explanations This effort is often rewarded with 4 are often available with learning LIME sometimes surprising and informative algorithm specific interpreters. LIME addresses the main shortcomings insights into how the DNN decomposes a of basic sensitivity analysis. Like problem into hierarchies (e.g. 
fine texture sensitivity analysis, it can be applied to Tree interpreters vs. gross structure in image classifiers), any model, but unlike sensitivity analysis As discussed earlier, decision trees are a manifolds (simplified representations of it captures feature interactions. It does so highly interpretable class of model, more complex concepts) and class label by performing various multi-feature albeit one of the least accurate. The ‘prototypes’ (idealised representations of perturbations around a particular Random Forest algorithm is an target classes). prediction and measuring the results. extension of the basic decision tree It then fits a surrogate (linear) model to algorithm which can achieve high these results from which it gets feature accuracy. It is an ensemble method that importances, capturing local feature trains many similar variations of interactions. It can also handle non- decision trees and makes decisions continuous input features frequently based on the majority vote of individual found in machine learning applications. trees. A decision tree interpreter can be An open source implementation of this approach is available5 which currently makes this the ‘go to’ interpreter for many practitioners.
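To show the shape of a LIME explanation in practice, here is a minimal sketch using the open source package referenced above (pip install lime). The classifier and dataset are arbitrary stand-ins; LIME itself is model-agnostic.

```python
# Minimal sketch: a local LIME explanation for one tabular prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Perturb features around one instance, fit a local linear surrogate, and report
# the features that pushed this particular prediction one way or the other.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```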

Activation maximisation (AM)
AM is a method that can be used to find a DNN’s prototypical representation of a particular concept. For example, in an image recognition DNN, AM could be used to produce a visual representation of the DNN’s concept of a cat. By searching for input patterns that maximise a particular output (e.g. the probability that the image is a cat), a prototypical cat image can be extracted. Thus, AM can afford direct interpretable insight into the internal representations of DNNs.

Whilst sensitivity analysis is commonly used to explain feature importance in DNNs, a neural network specific approach called Relevance Propagation can produce more accurate, robust explanations. It can be thought of as the inverse of sensitivity analysis in that the algorithm starts at the model output and works back through the network, assigning relevance to inputs from the preceding layers (when fed forward) until it reaches the input layer. Whilst there are many variations on this approach, they tend to be implemented on a bespoke basis. However, an open source implementation called DeepLIFT8 is available for those who would rather not build an interpreter from scratch.

In the last section, the trade-offs, limits and methods for explainable AI were discussed. In the next section we extend the discussion to the implications for model evaluation – this is key to the safe and effective use of AI. In our opinion, even slight improvements in the model evaluation process can pay dividends in future model performance and in reducing the risk of adverse model behaviour.

Model evaluation: going beyond statistical measures
Machine learning model evaluation is critical to validate that systems meet the intended purpose and functional requirements. The de facto approach amongst ML practitioners has been to test models on a held-out portion of the training data and report the error. Graphical analysis of confusion matrices, ROC curves and learning curves can further enhance the tester’s understanding of the model’s behaviour.

Beyond these quantitative approaches, a functional understanding of ML model behaviour using XAI can give critical insight not available through quantitative validation approaches.
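By way of illustration, here is a minimal sketch of those standard quantitative checks (held-out accuracy, a confusion matrix and ROC AUC); the dataset and classifier are placeholders, and these metrics complement rather than replace the functional XAI checks discussed in the text.

```python
# Illustrative sketch: held-out evaluation with a confusion matrix and ROC AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

print("held-out accuracy:", model.score(X_test, y_test))
print("confusion matrix:\n", confusion_matrix(y_test, model.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, probs))
```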

8 Shrikumar et al (2017). See also: https://github.com/kundajelab/deeplift

An example is a study carried out in the nineties using rules based learning and neural networks to decide which pneumonia cases should be admitted to hospital or treated at home. The models were trained on patients’ recovery in historical cases9. Both models predicted patient recovery with high accuracy, with the neural network found to be the most accurate. Both models predicted that pneumonia patients with asthma shouldn’t be admitted because they had a lower risk of dying.

In fact, these patients were at such high risk that they were routinely admitted directly to the intensive care unit, treated aggressively, and as a consequence had a high survival rate. Because the rules based model was interpretable, it was possible to see that the model had learnt ‘if the patient has asthma, they are at lower risk’. Despite ostensibly good quantitative performance metrics, the counterintuitive explanation invalidated the model from a clinical point of view. Had this model been deployed in production, it could have led to unnecessary patient risk and possibly death.

Thus a functional explanation, even if a gross approximation of the underlying model complexity, can catch the type of potentially dangerous informational shortcuts machine learning algorithms are good at finding in contravention of the developers’ intentions.

Whilst the previous example is unlikely to occur these days, there are many examples of badly trained algorithms using data such as people’s names to infer demographic characteristics, using URLs in mined text to classify documents and using copyright tags to classify images. These models may perform well in testing, but are liable to catastrophic failure or unintended consequences in production. This type of oversight can be avoided with even a fairly rudimentary deployment of XAI.

Continuous evaluation
Unlike traditional software and hand crafted top down models, machine learning models may be periodically retrained or continuously updated (online learning) as they learn from new instances. Factoring explainable AI outputs into automated controls means robust, qualitative, rules based safeguards against unexpected, unwanted, and known weaknesses in model behaviour can be applied. For example, if a single pixel in an image is identified as the most important feature in a decision, the model could be the target of an adversarial attack by criminals. Applying the rule ‘if a single pixel is the primary explanatory feature by a large margin, then raise an alert’ could safeguard against such attacks. Whether the reason is an adversarial attack or not, XAI has alerted us to a problem, whatever its nature, through the application of common sense rules.

In cases where engineers are charged with making someone else’s model explainable, as may be the case with a commercially available off the shelf model, the model’s users should take into account the degree of transparency that comes with the model. Selecting an optimal approach to XAI is significantly more straightforward when the details of the model are well known, rather than having to deal with a mystery black box.

Model transparency
Model transparency is concerned with conveying to a user the structural details of the model, statistical and other descriptive properties of the training data, and evaluation metrics from which likely behaviour can be inferred. Irrespective of the need for explainable AI or not, this is basic information without which it is impossible to make informed use of a machine learning algorithm.
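The single-pixel rule described under ‘Continuous evaluation’ above can be encoded as a very simple automated check. The sketch below is hypothetical: the dominance threshold, feature names and alerting mechanism are assumptions for illustration, not part of any PwC framework.

```python
# Hypothetical sketch: flag decisions whose explanation is dominated by one feature.
def dominant_feature_alert(feature_importances, dominance_ratio=5.0):
    """Return True if the top feature outweighs the runner-up by a large margin."""
    ranked = sorted((abs(v) for v in feature_importances.values()), reverse=True)
    if len(ranked) < 2 or ranked[1] == 0:
        return True
    return ranked[0] / ranked[1] > dominance_ratio

# Example: one pixel carries almost all of the weight, so route for human review.
explanation = {"pixel_1047": 0.91, "pixel_2210": 0.03, "pixel_0012": 0.02}
if dominant_feature_alert(explanation):
    print("ALERT: decision driven by a single feature - route to human review")
```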

9 Caruana, et al. (2015)

Business benefits: Turning explainability into a competitive differentiator

How XAI can not only strengthen stakeholder confidence, but also improve performance, make better use of AI and stimulate further development.

The greater the confidence in the AI, the faster and more widely it can be deployed. Your business will also be in a stronger position to foster innovation and move ahead of your competitors in developing and adopting next generation capabilities.

Exhibit 10 | Eight business benefits of Explainable AI

Optimise: Model performance; Decision making
Retain: Control; Safety
Maintain: Trust; Ethics
Comply: Accountability; Regulation

Source: PwC

Optimise

Model performance
One of the keys to maximising performance is understanding the potential weaknesses. The better the understanding of what the models are doing and why they sometimes fail, the easier it is to improve them. DeepMind was able to improve and optimise AlphaGo (the machine that famously beat the world’s best Go player) through being able to see and understand how and why the system was making particular decisions. It would not be possible to fully optimise AlphaGo and create the success we know today if the system was functioning as a black box.

Explainability is a powerful tool for detecting flaws in the model and biases in the data, which builds trust for all users. It can help for verifying predictions, for improving models, and for gaining new insights into the problem at hand. Detecting biases in the model or the dataset is easier when you understand what the model is doing and why it arrives at its predictions.

Decision making
As discussed earlier, the primary use of machine learning applications in business is automated decision making. However, often we want to use models primarily for analytical insights. For example, you could train a model to predict store sales across a large retail chain using data on location, opening hours, weather, time of year, products carried, outlet size etc. The model would allow you to predict sales across your stores on any given day of the year in a variety of weather conditions. However, by building an explainable model, it’s possible to see what the main drivers of sales are and use this information to boost revenues.

A popular use of machine learning is predicting customer churn. A 90% accurate prediction that Joe Bloggs (your customer) will switch to a competitor in the next month is interesting. But so what? Perhaps you could offer a discount if you want to retain the customer, but what if they are switching because they had a bad customer service experience and aren’t particularly price sensitive in this instance? You may unnecessarily offer them (and many others) ineffective and expensive incentives without realising you are losing customers for a reason that has nothing to do with price. If the churn model could make a 90% accurate prediction with a reason – ‘the customer will switch to a competitor because they spent 47 minutes waiting for customer service to answer the phone over the past 12 months’ – then fixing the root cause (customer service) is by far the better strategy.

Exhibit 11 | Reinforcement learning: AlphaGo Zero
When using explainable AI systems, we can try to extract this distilled knowledge from the AI system in order to acquire new insights. One example of such knowledge transfer from AI system to human arose when DeepMind’s AI model AlphaGo identified new strategies to play Go, which have now also been adopted by professional human players.

What is remarkable about AlphaGo is that the model didn’t require any human input at all. It wasn’t given any rules; instead it mastered the game of Go by playing alone and against itself.

So how does it work? There are three main components:

1. The Policy Network (trained on high level games to imitate players);

2. The Value Network (evaluates the board position and calculates the probability of winning in a given position);

3. The Tree Search (looks through different variations of the game to try and figure out what will happen in the future).

First the Policy Network scans the position, comes up with the interesting spots to play and builds up a tree of variations. It then deploys the Value Network to tell how promising the outcome of a particular variation is, with the goal of maximising the probability of winning.

The extraordinary progress of this form of reinforcement learning offers immense potential to support human decision making. Instead of playing Go or Chess, an AI model in the right environment can ‘play’ at corporate strategy, consumer retention, or designing a new product.

Source: Mastering the Game of Go without Human Knowledge, Silver et al. 2017

Retain

Control
To move from proof of concept to fully-fledged implementation, you need to be confident that your system satisfies certain intended requirements, and that it does not have any unwanted behaviours. If the system makes a mistake, organisations need to be able to identify that something is going wrong in order to take corrective action or even to shut down the AI system.

XAI can help your organisation retain control over AI by monitoring performance, flagging errors and providing a mechanism to turn the system off. From a data privacy point of view, XAI can help to ensure only permitted data is being used, for an agreed purpose, and make it possible to delete data if required.

Developers frequently try to solve problems by ‘throwing data’ at AI in instances of a black box system. Having visibility over the data and features AI models are using to provide an output can ensure that issues that arise can be understood and a level of control can be maintained. For systems that learn through customer interactions, interpretable AI systems can shed light on any adverse training drift.

Safety
There have been several concerns around the safety and security of AI systems, especially as they become more powerful and widespread. This can be traced back to a range of factors including deliberate unethical design, engineering oversights, hacking and the effect of the environment the AI operates in.

XAI can help to identify these kinds of faults. It’s also important to work closely with cyber detection and protection teams to guard against hacking and deliberate manipulation of learning and reward systems.

Maintain

Trust
Building trust in artificial intelligence means providing proof to a wide array of stakeholders that the algorithms are making the correct decisions for the right reasons. Explainable algorithms can provide this up to a point, but even with state of the art machine learning evaluation methods and highly interpretable model architectures, the context problem persists: AI is trained on historical datasets which reflect certain implicit assumptions about the way the world works. Events can occur that radically reframe problems (an earthquake, a new central bank policy, a new technology) and make the historical training data invalid. By gaining an intuitive understanding of a model’s behaviour, the individuals responsible for the model can spot when the model is likely to fail and take the appropriate action.

XAI also helps to build trust by strengthening the stability, predictability and repeatability of interpretable models. When stakeholders see a stable set of results, this helps to strengthen confidence over time. Once that faith has been established, end users will find it easier to trust other applications that they may not have seen. This is especially important in the development of AI, as models are likely to be deployed in settings where their use may alter the environment, possibly invalidating future predictions.

Ethics
In software development life cycles, engineers and developers are focused on functional requirements while business teams are focused on speeding up AI implementation and boosting performance. The ethical impact and other unintended consequences can easily be obscured by the need to meet these pressing objectives.

It’s important that a moral compass is built into the AI training from the outset and that AI behaviour is closely monitored thereafter through XAI evaluation. Where appropriate, a formal mechanism that aligns a company’s technology design and development with its ethical values, principles and risk appetite may be necessary.

PwC is working towards embedding ethical considerations into the design phase of the AI development cycle, with clear governance and controls to guard against risks emerging from ethical oversight. Accountability must be shared between business managers and solution owners with a view to understanding and mitigating risks early on.

Comply

Accountability
It’s important to be clear who is accountable for an AI system’s decisions. This in turn demands a clear XAI-enabled understanding of how the system operates, how it makes decisions or recommendations, how it learns and evolves over time, and how to ensure it functions as intended.

To assign responsibility for an adverse event caused by AI, a chain of causality needs to be established from the AI agent back to the person or organisation that can reasonably be held responsible for its actions. Depending on the nature of the adverse event, responsibility will sit with different actors within the causal chain that led to the problem. It could be the person who made the decision to deploy the AI for a task to which it was ill suited, or it could rest with the original software developers who failed to build in sufficient safety controls.

Regulation
While AI is lightly regulated at present, this is likely to change as its impact on everyday lives becomes more pervasive. Regulatory bodies and standards institutions are focusing on a number of AI-related areas, with the establishment of standards for governance, accuracy, transparency and explainability high on the agenda. Further regulatory priorities include safeguarding potentially vulnerable consumers.

It’s important for industry bodies and regulators to bring regulation into line with developments in AI, mirroring the AI ecosystem within their sectors. This of course means that many of these bodies will need to acquire the requisite expertise to effectively discharge this duty. There is an important role for professional and academic bodies representing the technology community (for example ACM and IEEE) to play as technical experts and help advise policy.

However, the current risks from AI come from its deployment in specific industrial contexts, and current regulatory systems are generally far better placed to monitor and sanction their constituents. PwC is actively advising regulators, for instance the Financial Conduct Authority (FCA), on the impact of current developments in the field of AI, including the creation of safe ‘sandbox’ environments for live testing. Working closely with the British Standards Institution (BSI) and the International Organisation for Standardisation (ISO), PwC is identifying requirements for specific areas of AI/ML to help develop a new set of standards for AI.

Exhibit 12 | GDPR for Machine Learning

The EU’s General Data Protection Regulation (GDPR) is likely to create a host of new and complex obligations between data subjects and models, particularly around machine learning (ML). This section attempts to address some of these questions:

1. Does the GDPR prohibit machine learning? Technically, the GDPR contains a prohibition on the use of automated decision-making without human intervention. However, this prohibition does not apply where processing is contractually necessary, authorised by another law, or the data subject has explicitly consented.

2. Is there a ‘right to explainability’ from ML? Much of the predictive power of ML lies in complexity that’s difficult, if not impossible, to explain. Articles 13-15 set out a right to ‘meaningful information about the logic involved’, and data subjects are entitled not to be subject to solely automated decisions (Article 22), and to receive an explanation of, and to challenge, such decisions (Recital 71). While these provisions may establish the need for a detailed explanation of a model’s workings, regulators are more likely to focus on a data subject’s ability to make informed decisions.

3. Do data subjects have the ability to demand that models be retrained without their data? If consent is withdrawn for the data used in an ML model, the model might theoretically have to be retrained on new data. However, all processing that occurred before the withdrawal remains legal. If the data was legally used to create a model or prediction, training data can be deleted or modified without affecting the model. It may technically be possible to rediscover original data (see Nicolas Papernot et al.), however this is unlikely in practice, and it is not expected that models will be subject to constant demands to be re-trained on new data.

This considers just some of the complex intersections between ML and the GDPR and these questions remain extremely nuanced. Lawyers and privacy engineers are going to be a central component of data science programs in the future.

Source: Information Commissioner’s Office

Conclusion: XAI is only going to get more important

AI is advancing, deployment is proliferating and the focus of regulators, customers and other stakeholders is increasing in step. XAI can help you keep pace.

We are moving out of the carefree days of Silicon Valley giants getting content in front of eyeballs and into a broad industrial revolution where things are about to get a lot more serious. Any cognitive system allowed to take actions on the back of its predictions had better be able to explain itself, even if in a grossly simplified way.

Explainability has been largely ignored by the business community and is something PwC is helping organisations to solve. There are no perfect methods, and some problems are inherently not semantically understandable, but most business problems are amenable to some degree of explanation that de-risks AI, builds trust, and reduces overall enterprise risk. Making XAI a core competency and part of your approach to AI design and QA will pay dividends today and in the future.

PwC is helping advise and assure our clients by building, and helping organisations build, AI systems that are explainable, transparent, and interpretable, and by assessing and assuring that organisations have built AI models that adhere to our standards.

Key takeaways:

1. AI must be driven by the business
Developers are mostly focussed on delivering well-defined functional requirements, and business managers on business metrics and regulatory compliance. Concerns around algorithmic impact tend only to get attention when algorithms fail or have a negative impact on the bottom line. Because AI software is inherently more adaptive than traditional decision-making algorithms, problems can unfold with quicker and greater impact. Explainable AI can forge the link between non-technical executives and developers, allowing the effective transmission of top level strategy to junior data scientists. Insufficient governance and quality assurance around this technology is inherently unethical and needs to be addressed at all levels of the organisation. Without XAI, governance is very difficult.

2. Executive accountability
With the proliferation of AI systems and, equally, the increased impact on organisations, individuals and society, it needs to be clear who is accountable for an AI system’s decisions. If executives are required to accept accountability for AI, they will need to understand the risk it introduces to their business. Without a deeper understanding of the system’s rationales, executives would introduce unknown risks to their risk profile. In order to accept accountability, executives must have the confidence that a system operates within defined boundaries.

3. Doing the right thing, right
Humans are the designers of AI systems, and ethics should be embedded before the very first line of code is written. Hence it is imperative to have defined ethics and core values, along with a governance system that ensures compliance, before the development and deployment of an AI system. These foundations help guide your organisation when engaging with potential customers and restrict unethical misuse. It's important to ensure that business managers understand and are accountable for the risks and that, where appropriate, a formal mechanism is in place to align your technology with your ethical policies and risk appetite.

4. Future-proofing your AI
The need to be interpretable is increasing. In sectors such as financial services, the use of advanced AI is already so well entrenched that its risks should be at the top of the risk register. Other sectors such as healthcare and transport are fast following suit. And, as AI continues to permeate through the economy, all sectors will eventually need to judge the criticality and impact of their AI on the one side against how much faith they have in its outcomes on the other. A new wave of AI-specific regulation is also coming. In this respect, AI explainability is now coming to rank alongside cyber as a threat, but also as a valuable differentiator if handled smartly.

5. Explainable by design
You might assume that this isn't an immediate priority as the systems you're using are still fairly rudimentary and humans still have the last word on key decisions. But how long will this be the case, and how can you lay secure foundations for moving ahead? The inability to see inside the black box can only hold up AI development and adoption. You might assume that the necessary layers of understanding and control can be applied as your systems develop. But that would simply leave you with the same jumble of bolt-ons that have made most technology infrastructures so difficult to manage and optimise. It's important to begin thinking about interpretability now, as AI becomes more prevalent and complex. This is akin to AI safety research: investing the time and effort now to be prepared for the future. By proactively putting in place the right measures early, you can future-proof AI assurance rather than relying on decades of reactive fixes.

In conclusion, being able to explain not just how the black box works, but also your approach to designing, implementing and running the black box, will increasingly be a key requirement of driving trust in AI. This is where the worlds of Responsible AI and Explainable AI collide. It is likely that you will have two classes of AI: solutions that are already active and those that you will deploy in the future. For active applications, you should look to understand whether your ability to explain the results aligns with the expectations of key stakeholders and whether you need to make any changes; it is important to identify any gap in understanding at both an algorithm and a process level. For applications that are either on the drawing board or yet to be identified, you need a mechanism for establishing the level of explainability required for an algorithm at the idea and design phase.

Explainability is a business issue that needs a business and board response, and not just a tech response. The time to act is now: get this right, and you can move forward with greater confidence.

Appendix 1: Classifying machine learning algorithms

Exhibit 1 | Classifying machine learning algorithms

Supervised learning

The most common approach to machine learning, where the goal is to train a classifier or regressor by finding a function that maps the training examples to the training labels with minimal error. Supervised learning is used in situations where the training examples are labelled (e.g. an image of a cat with the label 'cat') and the instances encountered in production are expected to be drawn from a distribution similar to that of the training data. There are many applications, ranging from image recognition, to spam detection, to stock price prediction.
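To make this concrete, here is a minimal sketch of the supervised workflow. The use of the open-source scikit-learn library, its bundled iris dataset and a logistic regression classifier are purely illustrative choices, not prescriptions:

```python
# A minimal supervised-learning sketch: fit a classifier on labelled
# examples and evaluate it on held-out data. Dataset and model choice
# are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                    # labelled training examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)              # learn a mapping from features to labels
clf.fit(X_train, y_train)

pred = clf.predict(X_test)                           # predictions on unseen instances
print(f"Accuracy on held-out data: {accuracy_score(y_test, pred):.2f}")
```

The held-out test set stands in for the 'similar distribution' encountered in production: the error measured on it is the basis for trusting the model's predictions on new instances.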

Unsupervised learning

Training examples are unlabelled, so unsupervised algorithms look for naturally occurring patterns in the data. Historically this usually meant clustering, which can be used for segmentation tasks such as customer segmentation, and anomaly detection for financial crime detection. More recently, unsupervised approaches using DNNs (autoencoders and generative adversarial networks) are increasingly used for filtering, dimensionality reduction and generative applications in which models produce artificial examples of training instances, such as human faces.
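By way of illustration only, a short clustering sketch in the spirit of the segmentation examples above; the synthetic data, scikit-learn's KMeans and the choice of three clusters are all assumptions:

```python
# An unsupervised-learning sketch: group unlabelled data into clusters.
# The synthetic data and the choice of three clusters are illustrative only.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # unlabelled points

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)                               # discovered segment for each point

# The cluster centres act as rough descriptors of each segment
print("Cluster centres:\n", kmeans.cluster_centers_)
print("Points per cluster:", np.bincount(labels))
```

Note that no labels are supplied: the segments emerge from the structure of the data itself, which is what makes the technique suitable for tasks such as customer segmentation.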

Reinforcement learning (RL)

RL algorithms are software agents that learn policies about how to interact with their environment. They behave in the way most people expect of AI, in the sense that they choose actions and 'do things' in response to other agents (such as humans). To do this, they rely on a state-space representation of their environment. They seek to optimise cumulative reward over time by iteratively choosing actions that result in 'high-value' states. The values of the various environment states are learnt in the training phase, in which the algorithm explores its environment. Applications in this domain include inventory management and dynamic pricing. DeepMind was able to use RL to train an algorithm, AlphaGo, that beat a champion Go player. The technique has also been used to train robots to climb stairs like humans and to improve lane-merging software for self-driving cars.
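To show what 'iteratively choosing actions that result in high-value states' looks like in code, here is a toy tabular Q-learning sketch. The five-cell corridor environment, the single reward at the right-hand end and the hyperparameter values are all illustrative assumptions, not a description of any production system:

```python
# A toy tabular Q-learning sketch: an agent on a 5-cell corridor learns
# to walk right to reach a reward at the final cell. The environment,
# rewards and hyperparameters are illustrative assumptions only.
import random

N_STATES, ACTIONS = 5, [-1, +1]         # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, occasionally explore
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted best future value
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learnt Q-values favour the 'move right' action in every state
for s, q in enumerate(Q):
    print(f"state {s}: left={q[0]:.2f}, right={q[1]:.2f}")
```

The printed table of state-action values is the learnt policy: the agent simply picks the action with the higher value in each state, which is also why, as Appendix 2 notes, the only explanation a Q-learning agent offers for an action is its estimated future reward.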

Source: PwC

Appendix 2: Subjective scale of explainability of different classes of algorithms and learning techniques

As discussed, explainability refers to the understandability of a given result, viewed as a post hoc interpretation. Since most of these models do not give direct explanations of why or how their results are achieved, we have provided subjective values on a scale of 1 to 5 (with 1 being the most difficult and 5 being the easiest) to rate how easy or difficult it is for an end user to decipher why a model made a certain decision. Each of these learning techniques has a different structure, which affects how it learns from new information.

Exhibit 2 | Subjective scale of explainability of different classes of algorithms and learning techniques

Each entry gives the learning technique, its AI algorithm class, the subjective scale of explainability (1-5) and the reasoning behind that rating.

Bayesian belief networks (BBNs) | Graphical models | Explainability: 3.5
BBNs are statistical models used to describe the conditional dependencies between different random variables. BBNs have a reasonably high level of explainability because the probabilities associated with the parent nodes influence the final outcome, making it possible to see how much a given feature contributes to the final outcome.

Decision trees | Supervised or unsupervised learning | Explainability: 4
Decision trees partition data based on the highest information gain, where the nodes have the most influence. They are represented as a tree-like structure with the most important feature at the top and other features branching off beneath it in order of relative importance. Of all the ML learning techniques, decision trees are the most explainable because you can follow the progression of branches to determine the exact factors used in making the final prediction (see the illustrative sketch after this table).

Logistic regression | Supervised or unsupervised learning | Explainability: 3
The most commonly used supervised learning technique, logistic regression represents how binary or multinomial response variables relate to a set of explanatory variables (that is, features). Because an equation is associated with the model's predictions, we can investigate the influence of each feature on the final prediction. Equations can get messy, so we believe this technique is only moderately explainable.

Support vector machines (SVMs) | Supervised or unsupervised learning | Explainability: 2
SVMs are based on the concept of decision planes that define decision boundaries. SVMs are similar to logistic regression except that they choose 'support vectors' from the training samples that have the most discriminatory power. SVMs are difficult to explain because, while you may know the equation of the kernel and the data partition, it is difficult to decipher which features were important in calculating a given result.

K-means clustering | Unsupervised learning | Explainability: 3
K-means clustering is an unsupervised learning technique used to group data into 'K' clusters of similar features. This technique is moderately explainable because one can view the centres of the clusters as descriptors of what each group represents, although it is not always clear what certain clusters mean purely from their centroids (centre points).

Neural networks (NNs) | Deep learning | Explainability: 1
Neural networks are the building blocks of all deep learning techniques and are increasingly used to solve ML tasks. NNs are loosely based on the biological structure of the brain, in which neurons connect to other neurons through their axons. In the same way, these networks have hidden layers of nodes through which information is transferred based on each node's activation. This algorithm class is the least explainable because each hidden node represents a non-linear combination of all the previous nodes. However, Israeli computer science professor Naftali Tishby's recent theoretical work in this area may help explain why and how they work.

Random forest/boosting | Ensemble models | Explainability: 3
Random forests operate by constructing a multitude of decision trees during training and then outputting the prediction that is the average of the predictions across all the trees. Even though individual decision trees are fairly explainable, the random forest adds another layer of tree aggregation that makes understanding the final result more difficult.

Q-learning | Reinforcement learning (RL) | Explainability: 2
Q-learning is a technique that learns from positive and negative rewards. It uses these rewards to estimate future returns from taking a certain action in a certain state. This technique is not very explainable because the only information given alongside a predicted action is its estimated future reward. Users cannot readily understand the agent's intent because it is looking multiple steps ahead.

Hidden Markov models (HMMs) | Natural language processing (NLP) | Explainability: 3
HMMs can be represented as the simplest dynamic Bayesian network (a BBN that changes over time). An HMM is a stochastic model used to model randomly changing systems in which the future state depends only on the current state. As with BBNs, certain weights are attributed to previous states based on the probabilities, although the stochastic nature of HMMs makes them slightly less explainable.

Source: PwC
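To make the contrast in Exhibit 2 more concrete, the sketch below (an illustration only, assuming scikit-learn and the iris dataset) shows the kind of direct inspection that earns decision trees and logistic regression their higher ratings: the tree's branches print as readable if/then rules and the regression exposes a coefficient per feature, whereas a neural network's hidden-layer weights offer no comparably readable account of a prediction.

```python
# Inspecting two of the more explainable models from Exhibit 2.
# Dataset and model settings are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

data = load_iris()
X, y = data.data, data.target

# Decision tree: the fitted branches can be read as explicit if/then rules
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Logistic regression: each feature's coefficient indicates its influence
# on the prediction for each class
logreg = LogisticRegression(max_iter=1000).fit(X, y)
for cls, coefs in zip(data.target_names, logreg.coef_):
    print(cls, {f: round(c, 2) for f, c in zip(data.feature_names, coefs)})
```

The printed rules and coefficients are exactly the kind of per-feature reasoning the subjective scale rewards; the less explainable techniques in the table give an end user no equivalent artefact to inspect.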

Contacts

Authors

Chris Oxborough
Partner, Emerging and Disruptive Technology Risk
E: [email protected]
T: +44 (0)7711 473199

Aldous Birchall
Director, Head of Financial Services AI
E: [email protected]
T: +44 (0)7841 468232

Euan Cameron
Partner, UK AI Leader
E: [email protected]
T: +44 (0)7802 438423

Andy Townsend
Senior Associate, Emerging and Disruptive Technology Risk
E: [email protected]
T: +44 (0)7446 893328

Dr. Anand Rao
Partner, Global AI Lead
E: [email protected]
T: +1 (617) 633 8354

Christian Westermann
Partner, Data & Analytics Leader
E: [email protected]
T: +41 (58) 792 4400

SME contributors

Laurence Egglesfield
Director, Emerging and Disruptive Technology Risk
E: [email protected]
T: +44 (0)7825 656196

Ilana Golbin
Manager, Artificial Intelligence Accelerator, Data & Analytics
E: [email protected]
T: +1 (847) 912 0250

Ben Cook
Manager, Emerging and Disruptive Technology Risk
E: [email protected]
T: +44 (0)7446 893328

Trishia Ani
Senior Associate, Technology Risk
E: [email protected]
T: +44 (0)7818 636970

Regional contacts

Ryan Martin
Principal (United States), Digital Risk Solutions, Intelligent Automation
E: [email protected]
T: +1 (617) 530 5986

William Gee
Partner (China), Risk Assurance Innovation Leader
E: [email protected]
T: +86 (10) 6533 2269

Nicola Costello
Partner (Australia), Operational and Technology Risk Management
E: [email protected]
T: +61 (2) 8266 0733

Peter Hargitai
Partner (Canada), Trust & Transparency Solution Leader
E: [email protected]
T: +1 (416) 941 8464

This content is for general information purposes only, and should not be used as a substitute for consultation with professional advisors. © 2018 PricewaterhouseCoopers LLP. All rights reserved. PwC refers to the UK member firm, and may sometimes refer to the PwC network. Each member firm is a separate legal entity. Please see www.pwc.com/structure for further details. 180608-162045-MA-OS