
ARTIFICIAL INTELLIGENCE: THE BEST OR THE WORST THING EVER TO HAPPEN TO HUMANITY?

Contents

1 THE EDGE OF THE PRECIPICE

2 WHAT IS AI, RPA AND ROBOTICS?

3 TIME FOR A CONTRACT REFRESH

Artificial Intelligence, Robotics and Automation

THE EDGE OF THE PRECIPICE

As Professor Stephen Hawking said[1], we do not yet fully understand and cannot predict the true impact of AI, and yet the race to business and operational transformation via the implementation of digital technologies, such as artificial intelligence (AI) and robotic process automation (RPA), is on an inexorable rise. Whilst there may be some debate as to the socio-economic impact of the rise of the machines, and whether they will in time decimate the human race in the manner of a science fiction disaster movie, for the time being their use is rather more prosaic. There is no doubt that AI and RPA are here to stay, and businesses, academic institutions and governments are being encouraged to develop their intelligence further. It is therefore essential to look to the intelligent future, to facilitate innovation and allow businesses to embrace these technologies, and at the same time to mitigate any associated risks. We examine some of the business opportunities and challenges faced, as well as providing our insight on how to manage these issues both in strategic sourcing programmes and in transformative, technology-enabled projects.

[1] http://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of

WHAT IS AI, RPA AND ROBOTICS?

There is much talk of AI, robotics and RPA, almost on an interchangeable basis. In this paper, these terms are defined as having the following meanings:

Artificial Intelligence – technically a field of science, and a phrase coined by John McCarthy in the mid-1950s; AI is the simulation of human intelligence by machines, often sub-divided into 'strong' and 'weak' AI (strong or hard AI is true human mimicry, often the focus of Hollywood, whereas weak or soft AI is more often focussed on a narrow task).

Neural networks – a connected network of many simple processors, modelled on the human brain.

Neuroscience – a form of science concerned with the human brain's function and structure.

Machine learning – the ability of a machine to improve its performance in the future by analysing previous results. Machine learning is an application of AI.

Robotic process automation (RPA) – the use of software to perform repeatable or clerical operations, previously performed by a human.

Heuristics – a 'rule of thumb', more akin to gut feeling (as opposed to an algorithm, which will guarantee an outcome), used in AI to problem solve quickly.

It is important to note that, in spite of the hype surrounding RPA, it won't do much by itself "out of the box". It needs to be taught, and it will continue to learn before and after deployment, as indicated in the diagram below. This means that the use of RPA comes with an investment cost and a time requirement that are important to bear in mind when seeking to understand when the issues set out here are likely to manifest. It also goes some way to underlining that the use of RPA requires a relatively long-term investment in order to obtain and maintain the full potential benefits.

[Diagram: a training phase ("Does this scan indicate cancerous growth?"), with the system's accuracy level improving from 50% to 63% to 74% as it is trained, followed by a live use phase in which it continues to learn from real cases.]
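The training-and-accuracy idea in the diagram can be sketched with a toy experiment. Everything below is illustrative only: the synthetic data, the nearest-centroid "model" and the resulting accuracy figures are our own assumptions, not drawn from any real RPA or medical deployment. The point it demonstrates is simply that a learning system's accuracy on held-out test examples tends to improve as it is trained on more labelled examples.

```python
import random

random.seed(0)

def make_example():
    # Synthetic stand-in for "scan features": class 0 centred at 0.0,
    # class 1 centred at 1.0, with noisy overlap between the two.
    label = random.randint(0, 1)
    return random.gauss(label, 0.6), label

def train(examples):
    # "Training" here is just estimating a mean feature value per
    # class -- a deliberately simple nearest-centroid model.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for value, label in examples:
        sums[label] += value
        counts[label] += 1
    return {c: sums[c] / max(counts[c], 1) for c in (0, 1)}

def accuracy(model, test_set):
    # Classify each test example by its nearest class centroid.
    correct = 0
    for value, label in test_set:
        predicted = min((0, 1), key=lambda c: abs(value - model[c]))
        correct += predicted == label
    return correct / len(test_set)

test_set = [make_example() for _ in range(2000)]
training_pool = [make_example() for _ in range(2000)]

# Accuracy generally rises as the training set grows.
for n in (10, 100, 2000):
    model = train(training_pool[:n])
    print(n, "training examples -> accuracy", round(accuracy(model, test_set), 2))
```

The figures in the diagram (50%, 63%, 74%) are schematic, but the trend they describe is the same one this sketch exhibits: more corrected examples, better decisions.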

TIME FOR A CONTRACT REFRESH

The potential benefits of implementing AI or RPA within a business can be significant and even transformative for the commercial well-being of that company – so long as it is set up to succeed. The use of AI and RPA, particularly in outsourcing deals, can give rise to a number of novel and differently nuanced issues that, if not addressed at the outset, could create some significant issues for the future.

SERVICE LEVELS AND FAILURE

Broadly speaking, current service level models are devised to incentivise suppliers to avoid 'low grade' issues that might arise if staff do not follow proper processes. This is because human beings are by definition fallible and will be more or less efficient depending upon a large number of factors.

This is not the case with AI-based services, which do not (or should not) suffer from the same challenges as those that will likely give rise to human error. Accordingly, it is not unreasonable to expect improved service levels for processes supported by RPA and, in fact, this will often be perceived to be one of the key reasons for the implementation of RPA.

The flip side is that if RPA failures occur, there is a far greater risk that the incidents will be catastrophic rather than minor. This is because AI-based systems tend to work at a demonstrable accuracy level or will, if this accuracy level cannot be achieved, fail in a significant way far below the relevant standard. It is far less likely that such systems will degrade by small margins as human-provided services might. When a defect or error occurs, its likelihood of repetition and going unseen is increased beyond that of a human error, as it will likely have been "programmed" within the RPA solution and accordingly will become part of the norm. Only continued oversight and management of the solution will enable these errors to be recognised, unless the RPA itself can recognise its own errors.

CONFIDENTIALITY AND IP

Software has been writing software without human intervention for some time. Who owns the resulting new code? Similarly, valuable, derived data from huge raw data sets may be sold in much the same way as market data. Likewise, who owns a new derived data set which has been created by the machine?

Clearly, any applicable agreement will need to include terms that deal with the relevant issues. The key here is understanding the likely different outputs that might be created as a consequence of the deployment of the AI or RPA technology. It will be important to rethink the provisions insofar as they relate to matters such as configurations, outputs that reflect or are a manifestation of business rules, and templates generated by the AI or RPA.

There are two particular issues that may require different treatment – background IP and know-how provisions. It is not unusual for customers to agree that modifications or enhancements to the supplier's background IP are owned by the supplier (often on the basis that they are worthless without the underlying product). However, in AI or RPA deployments, this category of IP can instead have its own intrinsic value that the customer ought to consider before letting go, because, for example, if used by the supplier or a third party it would allow that entity to replicate the customer's business practices (potentially even more efficiently than the customer), or because it is something that the customer will continue to need to own because of the value to the company itself.

Similarly, most customers will agree a know-how clause, permitting the supplier to use the knowledge gained by the supplier in the course of providing the services. But this ought to be reconsidered on the basis that the supplier might acquire knowledge, not of humans, but of machines and software, opening up the possibility for the supplier to re-use material and knowledge that the customer believes to have been protected.

AUDIT AND TECHNOLOGY

Customers often ask for audit rights – especially in particular sectors such as financial services, where a regulated entity is required to ensure appropriate audit rights and may incur substantial sanctions from its regulators if it cannot audit and monitor the work of its service providers.

Such monitoring is easier within the traditional sourcing environment, when a supplier can be audited mainly through a review of documents, reports and procedures. Any work done by a human can be checked by another human relatively easily. In the new context of AI and RPA, it is more difficult to work out how the AI system is working (and evolves) through the service.

If a machine learning-based system has formulated its own pattern-matching approaches to determine the probability of a given action being the correct response to particular inputs, human auditors will not necessarily be able to derive or follow the underlying logic and reassure themselves in the same way that they might be able to by interviewing workers to check their level of training and competency. It may well be that, instead of the traditional accountants and audit professionals, additional forensic IT experts should be added to the team that performs the audit.

HR, REDUNDANCIES AND KNOWLEDGE TRANSFER

Those implementing AI and RPA clearly need to understand the HR consequences. Transformational programmes will need to address process risks such as collective consultation requirements, where failure could potentially delay progress or give rise to significant financial penalties. Equally, potential redundancies will undoubtedly be a sensitive issue, as well as potentially triggering severance payments. Newly created roles on the back of change may give rise to redeployment and retraining obligations for those displaced. Both remuneration design and representation structures will potentially be impacted and come into play.

A particular challenge will be understanding the impact of AI and RPA on the workforce sufficiently to identify legal obligations and not fall foul of timing issues by failing to comply with any obligations in the required timescales, for example collective consultation processes or filing notification of redundancies with competent authorities.

Another difficulty, where there is a proposed outsourcing and transformation, will be understanding whether or not automatic transfer rules apply, such as those under TUPE/ARD or similar legislation, and who has the ability to effect redundancies pre or post transfer. This will involve asking important questions around exactly when and how the transformation will impact employees, and navigating the legal constraints accordingly.

Normal transfer-in/transfer-out TUPE model for outsourced services

[Diagram: Customer and Supplier; the customer's employees TUPE transfer to the supplier at the service start date, and TUPE transfer again at the service end date.]

AI-based service provision – transfer-in, gradual redundancy

[Diagram: Customer and Supplier; the customer's employees TUPE transfer to the supplier at the service start date, with roles gradually made redundant as knowledge moves into the AI system.]

It is accepted practice, where TUPE/ARD or similar legislation applies, that offer and acceptance processes are used to re-engage employees who are involved in providing a given service that is to be outsourced, and that these employees may transfer to the supplier upon the commencement of service provision.

Generally, when a customer transfers its employees to the supplier, it may expect to transfer those of the supplier's employees (or at least a skilled and knowledgeable subset of them) who were providing the services either back to the customer or onward to a replacement supplier where the services are terminated. From a customer perspective, this is aimed at ensuring that the customer can continue with the services directly (or with third parties) with the same standards of service and with the benefit of relevant know-how, as well as not saddling the supplier with staff it no longer requires and the associated workforce restructuring issues.

Where RPA or AI is involved in the service provision, some or possibly all of the employees previously providing the services within the customer organization may have become redundant during the period of service supply as a result of the deployment of RPA or AI. It follows that there may be few, if any, employees to transfer back to the customer or onward to a new supplier, and a resulting loss of know-how transfer.

Upon contract termination, if the AI system is licensed software, it may well remain with the supplier, along with the experience and machine learning that it has developed during the provision of the services. In that context, it is important to address in the contract how to reimport information into a new AI system so as to have an accelerated period of learning. Exit provisions are accordingly becoming more relevant, and also need to address who will own, or have rights to use, the IPR in the AI itself, at least insofar as it represents a reflection of the customer's activities and operations.

LIABILITY

At present, heads of uncapped loss are negotiated assuming failure modes that we have seen in other contracts where the work is done by humans. However, if a substantial portion of the work is to be undertaken using artificial intelligence, the most likely failure modes will be different, and the traditional liability positions take on a new significance.

Where AI is undertaking an increasing share of the work, with humans checking only a small portion of its output, errors might accumulate more rapidly and be caught less frequently. Similarly, whilst a machine might generally work more quickly than a human worker, and work twenty-four hours a day instead of eight-hour shifts, the resilience of the machine needs to be considered. If it goes down, that is the equivalent of every person in a human workforce not turning up: no work gets done. This makes low-level failure – the type that a service level and service credit regime in a contract might be designed to avoid – less likely, and catastrophic failure a bigger issue.

In addition, depending on the nature of the system and its ability to back up its 'experience' in the form of its stored patterns for processing the work, if that pattern is lost after the human workforce that was previously doing the work has moved on, then the customer's ability to undertake the work, or even meaningfully recreate the AI systems to undertake that work, is badly compromised. The literal loss of corporate memory would be acute.

The net result is that failures are likely to be a rarer species, but potentially more severe. The potential for lower-value claims from the customer against the service provider is perhaps reduced, but the customer will remain very nervous about a major outage, and even more concerned about the loss of those precious experience patterns that represent the AI itself.

As a result, the traditional approach – whereby suppliers are nervous of, and customers often accepting of, a position where the supplier takes little, if any, liability for the business impact or even for loss of data on customers – may require, at least from a customer perspective, a re-think: AI and RPA are not providing a service to support the business, they have become the business. Similarly, customers may see the lower end of the current market-standard financial caps as not being sufficient if the truly catastrophic failure occurs, whereas a supplier will want to achieve an ongoing balance between the risks it can accept versus its reward, together with its inherent capability and desire to be able to take on material liability for what might be perceived as 'run of the mill' services.

COMMITTED BENEFITS

One of the principal benefits of deploying an AI or RPA solution is to reduce and eradicate costs on a long-term basis. Many contracts already include an element of commitment to benefits on the part of the supplier as part of the deal, which will often be achieved by the implementation of AI and RPA (as well as some more traditional methods such as process improvement and rate arbitrage). We believe most major AI and RPA deals (whether standalone or as part of a broader outsourcing) will contain a level of committed benefits whereby the supplier contractually promises to save the customer money and, if this is not achieved, will pay the customer an amount to make up the shortfall. The likely quid pro quo will be a request from the supplier to share in the excess saving.
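The shortfall-and-gainshare mechanic described above reduces to simple arithmetic. The sketch below is purely hypothetical: the figures, the 50/50 excess-sharing split and the function name are our own assumptions, not terms from any actual contract.

```python
def settle_committed_benefits(committed_saving, delivered_saving,
                              supplier_gainshare=0.5):
    """Return (payment_to_customer, supplier_gainshare_amount).

    Hypothetical mechanism: the supplier makes up any shortfall
    against the committed saving in full, and takes an agreed share
    of any saving delivered above the commitment.
    """
    if delivered_saving < committed_saving:
        shortfall = committed_saving - delivered_saving
        return shortfall, 0.0
    excess = delivered_saving - committed_saving
    return 0.0, excess * supplier_gainshare

# Supplier committed to 1.0m of savings but delivered only 0.8m:
print(settle_committed_benefits(1_000_000, 800_000))    # shortfall payment due
# Supplier delivered 1.4m against the same commitment:
print(settle_committed_benefits(1_000_000, 1_400_000))  # excess shared
```

In practice, of course, the hard part is not the arithmetic but agreeing the cost baseline and the measurement method that feed into it, which is the subject of the next paragraph.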

Contractualising the mechanism by which the savings are committed is absolutely key, and often fundamental to the rationale of selecting that supplier over another. This will likely require a clear understanding of the baseline of costs against which the saving is to be delivered, what the saving actually is and how it is to be quantified, and an explanation of how the customer can be sure that the saving has actually been delivered and that it is sustainable.

PROTECTION ON EXIT

A key risk with an AI or RPA deployment is the term of the relevant licence and what happens if/when this comes to an end; the same risk exists at the end of an outsourcing transaction of which AI or RPA plays a key part. If deployed RPA or AI is suddenly removed, the customer may be faced with a material spike in costs as it has to replace the solution, both temporarily and then permanently. There is also a significant loss of knowledge which could impact on the customer's ability to conduct its business. Accordingly, it is key to address three major issues before the contract is signed, rather than leaving these to be dealt with upon exit. These issues are:

1) the "leave behind" IPR, whether owned or licensed to the customer – this will need to cover configurations of software, manifestations of business rules applied by the customer, process improvements and anything embedded within a process that would "break" the process if it was removed;

2) a continuing standalone licence to the AI/RPA software – it is preferable to negotiate a standalone licence to the version of the software being used by the customer at the point of exit, on terms that can survive a termination, even if this means a separate fee is payable for it; and

3) an obligation to deliver the transformation itself, not just the commercial benefits – most AI/RPA-heavy deals are of course transformative. Where the deal involves a level of committed benefits, there is a risk that the customer concentrates on the supplier "cutting a cheque" to achieve the benefits, even if operationally the underlying change is not delivered. This is dangerous because (i) the supplier might not be able and willing to stand behind it in the long term, and so it might lead to a negotiation and an unravelling of one of the fundamentals of the deal, and (ii) without the delivery of the transformation project, the customer is not being transformed and so on exit will be – operationally – even further behind its desired state than it was at the beginning of the transaction.

REGULATORY OVERSIGHT

In many sectors, not least the Financial Services sector, customers are subject to increasing regulation in connection with their use of technology and outsourcing to support their business. Many of the regulatory requirements are aligned to "traditional" outsourcing models and can be difficult to apply directly to transactions that have a heavily automated aspect to them. Ensuring regulatory compliance whilst achieving the full benefits of an outsourcing harnessing AI and robotics will need to be a carefully approached task.

INTERPLAY WITH OTHER SOFTWARE

Whilst some AI or automated systems might operate on a standalone basis, more often than not the relevant systems will be connecting to and interacting with other systems within the wider IT environment. Where this happens, the terms upon which software running within the wider environment (i.e. software that the AI or automated system might interact with) is licensed need to be considered. It may not be the case that the contemplated interaction falls within the scope of the licence applicable to such third-party software, or, if it does, it may trigger provisions which impact upon the licence fee for that third-party system.

Many software licences now specifically address the situation where the licensed system is to interface with AI or other forms of automated system instead of human users. In extreme cases, this form of interaction might simply be prohibited. In others, the terms might provide for differential fee structures based upon how the system is to be used. For instance, where software is licensed on

the basis of a fee per user or fee per 'seat', each human user might count as a single user, whereas automated systems count as multiple users – commonly between 3 and 10 – on the basis that the automated systems have the potential to use the system at a significantly greater rate than a human user might.

There is some plausible logic for this when looked at from the perspective of the software vendors. If large numbers of human users are rapidly replaced by a much smaller number of automated systems, and those automated systems only count as one 'user' despite doing the same volume of work as several human users might previously have done, then the vendor's future revenue stream will soon dry up. The ongoing cost of support and future development work does not change, but now needs to be spread across a smaller population of largely robotic 'users' to maintain the same revenue and margin position, so the fee for these new types of users has to be greater.

Whilst that argument is logical, from the customer's perspective a large differential in pricing for apparently the same 'user' access might still seem to be an unfair charging scheme.

To avoid this problem, moving to a 'pay as you go' charging scheme based on transactions processed, or compute power consumed, or some other common 'cloud' or 'X as a Service' type of metric might be sensible. Under those types of models, the automation problem is solved, as the charges are based on the level of work done with the system, regardless of whether the work is done by human users or an automated system.

Checking the third-party licence terms of any software with which the AI/automated system will interface should be a critical part of developing the business case for any AI or automation implementation project.

ERRORS IN DATA AND PERPETUATION OF MISTAKES

In the background section above, we set out how machine learning-based systems are 'trained' via a positive feedback loop. In theory, as the system is exposed to more data, it ought to continually improve and the accuracy of its 'decisions' should therefore increase.

As with most computer systems, however, the old adage of 'garbage in, garbage out' still applies. It is quite possible that, if an apparently highly performing system is continually exposed to poor quality data, or data which suggests incorrect decisions are in fact correct, then its accuracy – in objective terms – will gradually diminish (but its accuracy in performance terms will be high).

Any biases, inaccuracies or bad assumptions which are present in the human users (whose actions form the training data used to train the system) will be reflected in the decisions made by the trained system. Similarly, if the system is continually being fed data from different sources, and one source is continually providing incorrect feedback on the decisions taken by the machine learning system, that will impact upon the accuracy of the system.

In scenarios where a particular AI system is used to provide services to many different customers, both the platform vendor and each customer have an interest in ensuring that none of the users 'pollute' the system by inputting bad data that could diminish the accuracy of the system's output for all users. In such circumstances, it is in all parties' interests that each customer commits to ensuring the quality of any data fed into the system and commits to avoiding any action which could result in the quality of the system being compromised.

DATA PROTECTION AND INFORMED CONSENT

AI and smart robots pose some obvious data protection concerns (and we will address such topics in more detail later in this series of articles). Such concerns take on a new relevance once we take into account the substantial sanctions that may be applied under the new European General Data Protection Regulation.

The main concerns stem from the fact that any AI system by definition is based on the processing of a large volume of data. Initially, such data may not be personal data within the meaning of the Regulation, but it can become personal data (i.e. it is attributable to a specific person) or even sensitive data, as a result of deep pattern matching techniques and other processing that AI might perform.

This may result in data being processed in a manner for which consent had not been granted, without any other relevant justification being applicable, or beyond the boundaries set out by earlier consent. Furthermore, the AI solution may end up making its own decisions about the data management, thus changing the purposes laid out by the data controller, who should be ultimately responsible for the data processing.

Furthermore, depending on the complexity of the system and the ability to detect "unusual" activity, it may be harder to determine when an AI-based system is being hacked, making it more difficult to determine whether there has been a resulting data breach. All such issues will have to be carefully addressed in the design phase, when it is being decided how an AI solution will function and what technical controls can be applied, and also in any agreement between parties involved in using that AI solution to process data.

Last but not least, and this is a rather pervasive point, it should be carefully determined between the parties who is responsible for what, if there is any dependency, particularly considering all parties that may incur liabilities when dealing with smart robots or artificial intelligence.

ARE WE HEADING TO ARMAGEDDON?

Whether we believe we are or not, the use of RPA and AI is on an inexorable journey to transform not just sourcing contracts but our day-to-day lives. The continued use of RPA and AI signals a need to look again at many traditional contract terms through this new lens to ensure that they continue to be relevant and enable businesses to garner the full benefit of transformational outsourcing deals and AI and RPA implementation.

With careful thought and attention to the issues, the deployment of AI/RPA can be transformative, competitively advantageous and deliver real business benefit. Maybe then, AI and RPA will be one of the best things to happen after all.

If you would like to discuss any of the issues raised here, please contact your usual DLA Piper contact or email [email protected]

www.dlapiper.com

DLA Piper is a global law firm operating through various separate and distinct legal entities. Further details of these entities can be found at www.dlapiper.com.

This publication is intended as a general overview and discussion of the subjects dealt with, and does not create a lawyer-client relationship. It is not intended to be, and should not be used as, a substitute for taking legal advice in any specific situation. DLA Piper will accept no responsibility for any actions taken or not taken on the basis of this publication. This may qualify as “Lawyer Advertising” requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.

Copyright © 2018 DLA Piper. All rights reserved. | MAR18 | 3289622