Explainable AI: Driving Business Value Through Greater Understanding

pwc.co.uk/xai | #IntelligentDigital

Artificial intelligence (AI) is a transformational $15 trillion opportunity. Yet, as AI becomes more sophisticated, more and more decision making is being performed by an algorithmic 'black box'. To have confidence in the outcomes, cement stakeholder trust and ultimately capitalise on the opportunities, it may be necessary to know how the algorithm arrived at its recommendation or decision – 'Explainable AI'. Yet opening up the black box is difficult and may not always be essential. So, when should you lift the lid, and how?

Contents
Introduction: The $15 trillion question: Can you trust your AI?
Use case criticality: Gauging the need for explanation
Key considerations for explainability: How to embark on explainability
Business benefits: Turning explainability into a competitive differentiator
Conclusion: XAI is only going to get more important
Appendix
Contacts

Introduction: The $15 trillion question: Can you trust your AI?

AI is growing in sophistication, complexity and autonomy. This opens up transformational opportunities for business and society. At the same time, it makes explainability ever more critical.

AI has entered the business mainstream, opening up opportunities to boost productivity, drive innovation and fundamentally transform operating models. More than 70% of the executives taking part in a 2017 PwC study believe that AI will impact every facet of business, and 67% of the business leaders taking part in PwC's 2017 Global CEO Survey believe that AI and automation will impact negatively on stakeholder trust levels in their industry in the next five years (source: PwC 20th Annual CEO Survey, 2017). Overall, PwC estimates that AI will drive global gross domestic product (GDP) gains of $15.7 trillion by 2030.

As business adoption of AI becomes mainstream, stakeholders are increasingly asking: what does AI mean for me, how can we harness the potential, and what are the risks? Cutting across these considerations is the question of trust, and how to earn it from a diverse group of stakeholders – customers, employees, regulators and wider society. There have been a number of AI winters over the last 30 years, predominantly caused by the inability of the technology to deliver against the hype. Now that the technology is living up to the promise, the question may be whether we face another AI winter, this time because technologists focus on building ever more powerful tools without thinking about how to earn the trust of wider society.

This leads to an interesting question: does AI need to be explainable (or at least understandable) before it can become truly mainstream, and if it does, what does explainability mean?

In this whitepaper we look at explainability for the fastest growing branch of real-world AI: machine learning. What becomes clear is that the criticality of the use case drives the desire, and therefore the need, for explainability. For example, the majority of users of recommender systems will trust the outcome without feeling the need to lift the lid of the black box. This is because the underlying approach to producing recommendations is easy to understand – 'you might like this if you watched that' – and the impact of a wrong recommendation is low: a few pounds spent on a bad film, or a wasted 30 minutes watching a programme on catch-up.
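To make the 'you might like this if you watched that' idea concrete, the sketch below shows one simple way such a recommendation can be produced: counting which titles tend to be watched by the same users. This is our illustration rather than anything from the paper; the users, titles and the recommend helper are all invented.

```python
# A minimal sketch (not from the paper) of the 'you might like this if you
# watched that' logic behind a recommender system. All viewing histories
# and titles here are hypothetical.
from collections import defaultdict
from itertools import combinations

# Hypothetical viewing histories: user -> set of watched titles.
histories = {
    "alice": {"Broadchurch", "Luther", "Sherlock"},
    "bob":   {"Luther", "Sherlock", "Peaky Blinders"},
    "carol": {"Sherlock", "Peaky Blinders"},
}

# Count how often each pair of titles is watched by the same user.
co_watched = defaultdict(int)
for watched in histories.values():
    for a, b in combinations(sorted(watched), 2):
        co_watched[(a, b)] += 1
        co_watched[(b, a)] += 1

def recommend(title, k=2):
    """Titles most often co-watched with `title` - an inherently
    explainable suggestion: 'people who watched X also watched Y'."""
    scores = {b: n for (a, b), n in co_watched.items() if a == title}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("Sherlock"))  # e.g. ['Luther', 'Peaky Blinders']
```

The rationale of each suggestion is self-evident ('people who watched X also watched Y'), which is part of why users rarely feel the need to look inside the box.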
However, as the complexity and impact of the decision increase, that implicit trust quickly diminishes. How many people would trust an AI algorithm giving a diagnosis, rather than a doctor, without some form of clarity over how the algorithm came to its conclusion? Although the AI diagnosis may be more accurate, a lack of explainability may lead to a lack of trust. Over time, acceptance may come as general adoption of such technology builds a pool of evidence that it outperforms a human; until that is the case, algorithmic explainability is more than likely required.

Emerging frontier

Exhibit 1 | Classifying AI algorithms: AI divides into rule based and non rule based approaches; machine learning (ML) sits on the non rule based side and comprises supervised, unsupervised and reinforcement learning. Source: PwC

The emerging frontier of AI is machine learning (ML). For the purposes of this paper, we define machine learning as a class of learning algorithms, exemplified by artificial neural networks, decision trees and support vector machines, that can learn from examples (instances) and improve their performance with more data over time. Through machine learning, a variety of 'unstructured' data forms, including images, spoken language and the internet (human and corporate 'digital exhaust'), are being used to inform medical diagnoses, create recommender systems, make investment decisions and help driverless cars see stop signs.

We primarily focus on machine learning, a particular class of AI algorithm, because:

i) ML is responsible for the majority of recent advances and renewed interest in AI, and
ii) ML is a statistical approach to AI that by its very nature can be difficult to interpret and validate.

Operating in the dark

The central challenge is that many of the AI applications using ML operate within black boxes, offering little if any discernible insight into how they reach their outcomes. For relatively benign, high volume decision making applications, such as an online retail recommender system, an opaque yet accurate algorithm is the commercially optimal approach. This is echoed across the majority of current enterprise AI, which is primarily concerned with showing adverts, products, social media posts and search results to the right people at the right time. The 'why' doesn't matter, as long as revenue is optimised. This has driven an approach where accuracy, above all else, has been the main objective in machine learning applications. The dominant users and researchers (often in different parts of the same large technology firms) have been concerned with developing ever more powerful models to optimise current profits and pave the way for future revenue streams such as self-driving cars.

In conversations with clients, we often refer to this approach (perhaps unfairly!) as 'machine learning as a Kaggle competition', referencing the popular website¹ where teams compete to build the most accurate machine learning models. In our view, this is a one dimensional vision of machine learning applications, in which the biggest, latest and most complex methods vie for supremacy on the basis of a simple mathematical metric.

But what if the computer says 'No'? The absurdity of inexplicable black box decision making is lampooned in the famous (in the UK at least) 'Computer says No' sketch². It is funny for a number of reasons, not least that a computer should hold such sway over such an important decision and not in any way be held to account: there is no way of knowing whether its answer is an error or a reasonable decision. Whilst we have become accustomed to (non-AI) algorithmic decisions being made about us, despite the potential for unfairness, the use of AI for 'big ticket' risk decisions in the finance sector, diagnostic decisions in healthcare and safety critical systems in autonomous vehicles has brought this issue into sharp relief. With so much at stake, decision taking AI needs to be able to explain itself.

¹ https://www.kaggle.com/
² https://en.wikipedia.org/wiki/Little_Britain

Building trust

If capitalising on the $15 trillion AI opportunity depends on understanding and trust, what are the key priorities? A particular source of concern is the use of models that exhibit unintentional demographic bias. The use of explainable models is one way of checking for bias, and for decision making that does not violate ethical norms or business strategy. Organisations have a duty to ensure they design AI that works and is robust; adapting AI systems to fall in line with a responsible technology approach will be an ongoing challenge. PwC is helping organisations consider the ethics, morality and societal implications of AI through Responsible AI (PwC, 2017).

Explainable AI (or 'XAI') is a machine learning application that is interpretable enough to afford humans a degree of qualitative, functional understanding – what has been called 'human style interpretations'. This understanding can be global, allowing the user to understand how the input features (the ML community's term for 'variables') affect the model's output across the whole population of training examples. Or it can be local, in which case it explains a specific decision.
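One way to see the global/local distinction is with a model that is interpretable by construction. The sketch below is our illustration, with synthetic data and invented feature names: a logistic regression's coefficients give a global view of how each feature moves the output across the whole training population, while the coefficient-times-value products decompose one specific decision.

```python
# A sketch (our illustration) contrasting global and local explanations,
# using a linear model where both are directly readable. The feature
# names are hypothetical and the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "age", "tenure"]  # invented
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
model = LogisticRegression().fit(X, y)

# Global explanation: how each feature affects the output across the
# whole training population (here, the sign and size of each coefficient).
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global  {name:>10}: {coef:+.2f}")

# Local explanation: why this specific instance was scored as it was.
# Each product is that feature's additive contribution to the log-odds
# of this one decision (on top of the model's intercept).
x = X[0]
for name, contribution in zip(feature_names, model.coef_[0] * x):
    print(f"local   {name:>10}: {contribution:+.2f}")
```

For genuinely black box models the same two views must be approximated instead, for example with permutation feature importance for global explanations, or local surrogate methods in the LIME/SHAP family for individual decisions.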
Explainable AI looks at why a decision was made, making AI models more interpretable for human users and enabling them to understand why the system arrived at a specific decision or performed a specific action. XAI helps bring transparency to AI, potentially making it possible to open up the black box and reveal the full decision making process in a way that is easily comprehensible to humans.

Different groups have varying perspectives on, and demands for, the level of interpretability required of AI. Executives are responsible for deciding the minimum set of …

Benefits of interpretability

There are significant business benefits to building interpretability into AI systems. As well as helping to address pressures such as regulation, and to adopt good practices around accountability and ethics, there are significant gains from being on the front foot and investing in explainability today. These include building trust: explainable AI systems provide greater visibility over unknown vulnerabilities and flaws, and can assure stakeholders that the system is operating as desired. XAI can also help to improve performance: understanding why and how your model works enables you to fine tune and optimise it.
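Returning to the demographic bias concern raised under 'Building trust': a first, crude check requires no explainability machinery at all, only a comparison of the model's decision rates across groups (a 'demographic parity' style test). The sketch below runs on hypothetical data and is our illustration, not a PwC methodology.

```python
# A minimal sketch (our illustration, hypothetical data) of a basic bias
# check: comparing a model's decision rates across demographic groups.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)  # protected attribute per applicant
# Stand-in for a model's accept/reject decisions, deliberately skewed by group.
approved = rng.random(1000) < np.where(group == "A", 0.70, 0.55)

for g in ("A", "B"):
    rate = approved[group == g].mean()
    print(f"group {g}: approval rate {rate:.0%}")

# A large gap between groups is a prompt to investigate: is the model
# using, or proxying for, the protected attribute, and is that justified?
gap = abs(approved[group == "A"].mean() - approved[group == "B"].mean())
print(f"demographic parity gap: {gap:.0%}")
```

A material gap is not proof of unfairness, but it is exactly the kind of flag that should trigger the deeper, explanation-driven investigation this paper advocates.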
