Machine-to-Machine Contracting in the Age of the Internet of Things


Marco Loos

I. Introduction

The proposal for a Digital Content Directive1 (hereinafter: Digital Content Directive; DCD) applies to contracts whereby a trader supplies digital content to a consumer (Art 1 DCD). However, recital (17) of the preamble to the Digital Content Directive states: ‘Digital content is highly relevant in the context of the Internet of Things. However it is opportune to address specific issues of liability related to the Internet of Things, including the liability for data and machine-to-machine contracts, in a separate way.’ From this it is clear that contracts relating to the Internet of Things fall outside the scope of the Digital Content Directive, and that machine-to-machine contracts are among these excluded contracts. The ‘separate way’ in which liability related to the Internet of Things, including liability for machine-to-machine contracts, is to be regulated has not yet been developed. In this paper I will look into the question of what such regulation could look like. I will do so on the basis of four consumer cases that are or could be solved where a contract was concluded ‘by’ a refrigerator or another artificial agent on behalf of a consumer. In doing so, I will assume that the trader – a supermarket in cases 1–3 and a supplier of medicine in case 4 – is given the possibility to authenticate that the order was made by me or on my behalf, which implies that I must give it the means to establish my identity. The existence of such contracts concluded by artificial agents is no longer science fiction: in 2016, Samsung put on the market a refrigerator that can in fact order groceries.2 The consumer’s instruction to the refrigerator to submit an order can be seen as the consumer’s intention to be bound by the subsequent contract, whereby such intention need only be communicated to the supermarket.
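The authentication assumed above, i.e. the trader being able to verify that an order was made by me or on my behalf, could be realised in several technical ways. The following is a purely illustrative sketch, not anything the DCD prescribes: the shared-key scheme, the key itself and all function names are assumptions. The idea is simply that the consumer registers a secret with the supermarket and the refrigerator signs every order with it, so that the supermarket can attribute the order to the consumer.

```python
import hashlib
import hmac

# Hypothetical shared secret, registered by the consumer with the supermarket.
SHARED_KEY = b"consumer-registered-secret"

def sign_order(order: str, key: bytes = SHARED_KEY) -> str:
    """Refrigerator side: attach an HMAC so the trader can attribute the order."""
    return hmac.new(key, order.encode(), hashlib.sha256).hexdigest()

def verify_order(order: str, signature: str, key: bytes = SHARED_KEY) -> bool:
    """Supermarket side: check that the order comes from the registered consumer."""
    expected = hmac.new(key, order.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

order = "1x fresh milk"
sig = sign_order(order)
assert verify_order(order, sig)                   # authentic order is accepted
assert not verify_order("100x fresh milk", sig)   # altered order is rejected
```

Note that such a scheme only establishes *who* the order is attributable to; it says nothing about whether the consumer actually intended the particular order, which is precisely the legal question the four cases raise.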
A similar argument may be made where the consumer did not give a specific instruction for a specific order, but had pre-programmed a rule authorising the refrigerator to submit the order when specific pre-set conditions have been met. The same is true where the consumer has not indicated with which supermarket the contract is to be concluded, but merely pre-sets the specifications that the groceries to be ordered must meet, and the refrigerator’s inbuilt shopping software agent automatically searches the internet for offers meeting these specifications and compares prices.3

The situation is more complicated where a contract is concluded on behalf of a consumer where that consumer did not want to conclude that particular contract or is no longer capable of giving consent to the contract. In these cases the question arises whether a valid contract is and can be concluded on behalf of the consumer. Contract law – or, more broadly, the law of obligations – currently offers two separate approaches to hold parties to contracts they may not have wanted. The first route is the simple procedure of offer and acceptance, treating the artificial agent as a mere tool or instrument for communication, in a way similar to fax machines sending fax messages and computers sending e-mails. This approach allows, under certain conditions, for holding parties to notices that are communicated to the other party even though an error or mistake has been made. The second route starts from the idea that the artificial agent, such as a smart refrigerator, acts on the basis of representation (agency). The law of representation also allows, under certain conditions, for parties to be held to contracts concluded on their behalf by a representative acting outside its authority. With regard to each of these routes I will try to solve the following cases under the provisions of the Draft Common Frame of Reference4 (hereinafter also: DCFR):

1 Commission, ‘Proposal for a Directive of the European Parliament and of the Council on certain aspects concerning contracts for the supply of digital content’ COM(2015) 634 final.
2 The refrigerator is already available in the United States. It will be available on the European market as of 2017, Samsung announced in a press release of 28 August 2016, see news.samsung.com/global/samsung-unveils-european-edition-of-family-hub-at-ifa-2016 (accessed 10 November 2016).
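The pre-programmed rule just described (submit an order only when a pre-set condition is met, choosing the offer that matches the consumer’s specifications at the best price) can be sketched as follows. This is a minimal illustration; the stock condition, the product specification and the offers are all hypothetical, and a real shopping agent would obviously be far more elaborate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    supermarket: str
    product: str
    organic: bool
    price: float

def choose_offer(offers, product, require_organic, max_price) -> Optional[Offer]:
    """Pick the cheapest offer meeting the consumer's pre-set specifications."""
    matching = [o for o in offers
                if o.product == product
                and o.organic == require_organic
                and o.price <= max_price]
    return min(matching, key=lambda o: o.price) if matching else None

def maybe_order(stock_level, reorder_threshold, offers) -> Optional[Offer]:
    """Pre-set rule: submit an order only when the stock condition is met."""
    if stock_level > reorder_threshold:
        return None  # condition not met: no order, hence no contract
    return choose_offer(offers, product="milk", require_organic=True, max_price=1.50)

offers = [Offer("ShopA", "milk", True, 1.20),
          Offer("ShopB", "milk", True, 0.99),
          Offer("ShopC", "milk", False, 0.79)]
best = maybe_order(stock_level=0, reorder_threshold=1, offers=offers)
print(best.supermarket, best.price)  # ShopB 0.99
```

The legally relevant point is that every element of the order, i.e. the trigger, the specifications and the price ceiling, traces back to a choice the consumer made in advance, which is why the subsequent order can be seen as merely executing a pre-set intention to contract.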

3 See for the latter variation C. Wendehorst, ‘Robotics, Artificial Intelligence, and Machine to Machine (M2M) Contracts. With a focus on consumer contracts’, presentation held before the Legal Affairs Committee of the European Parliament on 21 April 2016, slide 4. Available at www.europarl.europa.eu/committees/en/juri/events-hearings.html?id=20160421CHE00181 (accessed 10 November 2016).
4 C. von Bar/E. Clive (eds), Principles, Definitions and Model Rules of European Private Law. Draft Common Frame of Reference (Full edition) (Sellier 2009).

Case #1: My refrigerator orders 100 bottles of fresh milk where I had put only one such item in my virtual basket;

Case #2: My refrigerator orders 300 grams of shrimps (which I dislike) instead of 300 grams of lamb (which I love) due to a software error;

Case #3: My refrigerator orders two crates of beer on the basis of its software’s prediction based on my prior consumption pattern, even though I would have preferred that this order had not been made;

Case #4: My care robot orders medicine for me at a time when I am no longer compos mentis because I suffer from Alzheimer’s disease in an advanced stage.

The paper is developed as follows. I will first indicate what the ‘Internet of Things’ and ‘machine-to-machine contracting’ in the preamble to the Digital Content Directive refer to (section II). Section III addresses the question of how contracts concluded by artificial agents on behalf of consumers can legally be attributed to them. In this section I will discuss the two alternative approaches mentioned above, i.e. through offer and acceptance (subsection III.1) and through representation (subsection III.2). I will conclude that the latter approach is more promising (subsection III.3). Section IV discusses whether this second approach requires that artificial agents are recognised as legal subjects themselves and whether they must have ‘an intention’ to enter into a legally binding contract. In section V, I will discuss whether, in view of the earlier findings, current legislation needs to be amended in order to enable machine-to-machine contracting. Section VI contains concluding remarks.

II. Internet of Things and Machine-to-Machine Contracting

Central to the exclusion mentioned in recital 17 of the preamble to the Digital Content Directive are the notions ‘Internet of Things’ and ‘machine-to-machine contracts’. These notions are explained neither in the preamble nor in the Digital Content Directive itself, nor in the Explanatory Memorandum to the Directive. The simplest description of the ‘Internet of Things’ (often abbreviated as IoT) is offered by Mak: ‘The IoT connects devices and vehicles using sensors and the internet.’5 More extensive is the description given by Davies in a briefing paper to the European Parliament. He states that the notion refers to a global, distributed network (or networks) of physical objects that are capable of sensing or acting on their environment, and able to communicate with each other, other machines or computers. Such ‘smart’ objects come in a wide range of sizes and capacities, including simple objects with embedded sensors, household appliances, industrial robots, cars, trains, and wearable objects such as watches, bracelets or shirts.6 Wendehorst, who adopts this description, remarks that the dividing line between smart goods and the Internet of Things is ‘not entirely clear’, as the Internet of Things presupposes smart goods, which may communicate with other smart goods or with the cloud, but may also function on their own. In the latter case, the Commission’s proposal for a Directive pertaining to Online and other Distance Contracts7 would apply.8 Steennot & Geiregat briefly define the Internet of Things as an ‘infrastructure of networked physical objects’,9 referring to a paper by Robinson. In that paper,10 Robinson indicates that IoT technology consists of smart devices, protocols for facilitating communication between them, and systems and methods for storing and analysing the data acquired by the smart devices. The technologies and protocols used include RFID (Radio Frequency Identification, for instance used in textile labels), GPS (Global Positioning System, for instance used in navigation software), GIS (Geographic Information System, for instance used in online maps), and EDI (Electronic Data Interchange, i.e. the computer-to-computer transfer of structured data in a standard electronic format without human intervention).

5 V. Mak, ‘The new proposal for harmonised rules on certain aspects concerning contracts for the supply of digital content. In-depth analysis’, Briefing note for the Legal Affairs Committee of the European Parliament, PE 536.495 (2016) 9. Available at www.europarl.europa.eu/committees/nl/events-workshops.html?id=2160217CHE00181 (accessed 10 November 2016).
6 R. Davies, ‘The Internet of Things. Opportunities and Challenges’, EPRS Briefing, PE 557.012 (2015) 2. Available at www.europarl.europa.eu/RegData/etudes/BRIE/2015/557012/EPRS_BRI(2015)557012_EN.pdf (accessed 10 November 2016).
7 Commission, ‘Proposal for a Directive of the European Parliament and of the Council on certain aspects concerning contracts for the online and other distance sales of goods’ COM(2015) 635 final.
8 C. Wendehorst, ‘Sale of goods and supply of digital content – two worlds apart? Why the law on sale of goods needs to respond better to the challenges of the digital age’, In-depth analysis, Briefing note for the Legal Affairs Committee of the European Parliament, PE 556.928, 2016, p. 8-9. Available at www.europarl.europa.eu/committees/nl/events-workshops.html?id=20160217CHE00181 (accessed 10 November 2016).
Finally, Fries defines the Internet of Things as ‘a system in which every item has a separate digital identity (so-called smart devices), can be called electronically, and feeds back data into the overall system’.11 The Internet of Things thus is an umbrella notion, covering a great many different types of communication that all share the common feature that information is transferred from one smart good to another without human intervention, irrespective of the specific technology used for that communication. Machine-to-machine contracting is only a very limited subtype of this phenomenon, but a legally challenging one. It refers to the communication between smart goods consisting of an order by one smart good and the confirmation thereof by another smart good. Machine-to-machine contracting is, as such, not a new phenomenon: already in 2005, Kafeza et al. described the then existing practice that an ‘agent’ (i.e. ‘a piece of software that is programmed to execute a set of instructions given by the user’) surfs the web to find beneficial deals for the customer, negotiate a price, and create a contract. They predicted that ‘[i]n the near future, e-commerce will evolve and agents will be able to negotiate and monitor more complicated deals than simple purchasing of goods. Agents will represent users without being explicitly instructed to do so.’12 In this paper, I will refer to such agents as ‘artificial agents’; in order to avoid misunderstanding, I will use ‘representation’ and ‘representative’ instead of ‘agency’ and ‘agent’ in the legal sense.

9 R. Steennot/S. Geiregat, ‘Scope and Liability for a Lack of Conformity in the Proposal for a Directive on Digital Content Contracts’ in I. Claeys/E. Terryn (eds), Online and Digital Contracts (forthcoming) para 13.
10 W. K. Robinson, ‘Patent Law Challenges for the Internet of Things’ (2014–15) 655 Wake Forest J of Bus & Intell. Prop. L 657.
11 M. Fries, ‘Man versus Machine: Using Legal Tech to Optimize the Rule of Law’, 6, footnote 24. Available at papers.ssrn.com/sol3/papers.cfm?abstract_id=2842726 (accessed 10 November 2016).
12 I. Kafeza/E. Kafeza/Dickson K.W. Chiu, ‘Legal Issues in Agents for Electronic Contracting’ in Proceedings of the 38th Hawaii International Conference on System Sciences – 2005, 1.

Initially, machine-to-machine contracting was largely restricted to businesses and served to ensure that stocks were optimised, ensuring that a trader did not have too few, but also not too many, products available for sale. However, the possibility of consumers making use of such artificial agents is becoming a reality, as the appearance of the Samsung refrigerator on the consumer market demonstrates – albeit that the consumer still has to put items in a virtual basket, so an explicit instruction is, at this point in time, still necessary.13 The availability of smart goods capable of ordering goods or services on behalf of consumers raises the question of how contracts pertaining to the ordered goods or services are concluded. Where the consumer puts a certain item in the virtual basket herself or pre-sets the conditions under which the refrigerator is to order groceries, one can still relatively easily accept the idea that the consumer is bound by the contract concluded ‘on her behalf’ by the refrigerator, as this is an action of the consumer herself and in both cases the subsequent order is merely an execution of a pre-set intention of the consumer to contract. It becomes a little more complicated when, due to a software error, the artificial agent orders something other than what I wanted it to order (case 2) or orders the groceries in different quantities than intended (case 1). It becomes considerably more complicated if the software is self-learning, self-thinking, and capable of predicting future needs. For instance, imagine that every four years I have consumed significantly more beer during the Football World Championships and that the refrigerator’s software has noticed this pattern. On the basis thereof the refrigerator pre-orders additional beers for the upcoming 2018 tournament (case 3).
However, what the software had not noticed is that in the preceding tournaments the Dutch football team had not only qualified for the tournament, but had also done exceptionally well during these Championships (coming in second in 2010 and third in 2014). Let us assume, however, that in 2018 the Dutch football team does not even qualify, so that there is little reason for me to watch the matches intensively and I have no use for the extra bottles of beer ordered by my refrigerator. In this situation, the refrigerator is acting more or less autonomously and potentially purchases goods or services that I as a consumer may not want. Finally, in case 4, the situation is such that the consumer is no longer capable of determining her own mind and therefore does not have any intention to conclude any contract when her care robot orders medicine on her behalf. In each of these cases the question arises whether contract law could provide a sufficient argument why the consumer should or should not be held to the contract concluded on her behalf.

13 See K. Barry, ‘Meet the fridge that orders groceries and finds recipes. Samsung’s Family Hub is a digital butler trapped in a fridge’, available at refrigerators.reviewed.com/news/meet-the-fridge-that-orders-groceries-and-finds-recipes (accessed 10 November 2016).
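The misfire in case 3 can be made concrete with a toy sketch. Assume (the figures and the rule are entirely hypothetical) that the software has learned from the consumer’s history that beer consumption spikes every fourth year, but it cannot observe the variable that actually drives the spike, namely whether the Dutch team qualified and played well. The pattern then fires for 2018 even though its underlying reason is absent.

```python
# Hypothetical yearly beer consumption (bottles); 2010 and 2014 are
# World Cup years in which the Dutch team did exceptionally well.
history = {2010: 60, 2011: 12, 2012: 14, 2013: 11,
           2014: 58, 2015: 13, 2016: 12, 2017: 14}

def predict_order(year: int, history: dict) -> int:
    """Naive learned rule: in every fourth year, pre-order the usual surplus."""
    baseline = sum(v for y, v in history.items() if y % 4 != 2) / 6
    peak = sum(v for y, v in history.items() if y % 4 == 2) / 2
    # The software only sees the four-year periodicity (years where y % 4 == 2:
    # 2010, 2014, 2018, ...), not the hidden cause of the extra consumption.
    return round(peak - baseline) if year % 4 == 2 else 0

extra = predict_order(2018, history)
print(extra)  # 46 - pre-ordered although the team did not even qualify
```

The legal difficulty is visible in the code: the order is a correct execution of the learned rule, so from the supermarket’s perspective nothing is amiss, yet the contract no longer corresponds to any concrete intention of the consumer.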

III. Legal Basis for Consumers being Bound by Contracts Concluded by Way of Machine-to-Machine Contracting

1. Artificial agent as a mere tool or instrument for communication

The first route through which a consumer could be held to a contract she did not intend and did not want to conclude is the simple procedure of offer and acceptance. In this approach, the artificial agent is merely an instrument or tool for communication between the consumer and the business. In this way, the artificial agent would basically be treated in the same way as a fax machine sending a fax message: as an instrument necessary to communicate the consumer’s intention to be bound by a contract with the trader. The fact that the artificial agent could ‘learn’ and adapt its own instructions would basically be ignored, or rather: would be attributed to the consumer making use of the artificial agent in the conclusion of the contract. The fact of the matter is that I had plugged in the refrigerator, I had enabled it to connect to the Internet, and I had not decommissioned the artificial agent, thus enabling the refrigerator to act on my behalf. These actions together could be interpreted as conduct giving the supermarket reason to believe that I indeed had the intention of entering into a contract of the sort indicated in the order sent by my refrigerator. In this sense, as Allen and Widdison argued already in 1996, the mere fact of having a computer available to make or accept offers in fact amounts to a promise by the owner14 of that computer to be legally bound by any transaction made through that computer.15 Let us see how the four cases developed in section I would be solved under this approach.

14 For the purposes of this paper it is of no relevance whether the consumer is the owner, the keeper or the possessor of the robot – the word ‘owner’ is used as a generic term here to signal a relationship between the consumer and the robot on the basis of which the consumer may determine the actions of the robot and/or is liable for the robot’s actions.
15 Cf. T. Allen/R. Widdison, ‘Can Computers Make Contracts?’ (1996) 9 Harvard Journal of Law and Technology 25, 49.

Case #1: My refrigerator orders 100 bottles of fresh milk where I had put only one such item in my virtual basket;

Under Art II.–4:101 DCFR (Requirements for the conclusion of a contract), a contract is concluded when (1) the parties intend to enter into a binding legal relationship and (2) they reach a sufficient agreement. In this case, the terms of the contract would be sufficiently defined, as the number and the type of the goods purchased are defined and the price can be determined in accordance with Art II.–4:201(3) DCFR (Offer): the communication made by the refrigerator may be seen as a proposal to supply the bottles of milk from stock at the price stated by the supermarket, and such a proposal only needs to be accepted by the supermarket, as Art II.–4:204 DCFR (Acceptance) provides. That I in fact wanted to make a different offer may be seen as an inaccuracy in communication. This particular inaccuracy should have been recognised by the supermarket, as consumers typically do not order 100 bottles of milk. It is, however, not clear how many bottles I intended to order. If the supermarket nevertheless simply accepts the offer, a contract would be concluded, but I would be allowed to avoid the contract on the basis of mistake.16 One could of course argue that in these circumstances no contract should be concluded in the first place, but the result achieved here is due to a policy choice made by the drafters of the DCFR, and applies to all inaccurate messages received by the offeree, whether sent by me in person or sent through an artificial agent. The first route therefore seems to lead to an acceptable result for case 1). However, since the contract is concluded through electronic means without individual communication, the supermarket is required to make available effective and accessible means for identifying and correcting input errors before my offer is final, as Art II.–3:201 DCFR (Correction of input errors) provides.
In accordance with Art 11(2) E-Commerce Directive17, this provision is mandatory in consumer contracts.18 This implies that the supermarket is required to react to the order with a message (whether or not automated) allowing for the correction of a possible input error. The Article does not indicate that the message is to be sent to my personal account, so it could be interpreted in such a way that the message would be sent to the account used by my refrigerator. Such a message is likely not to reach me at all. If, on the other hand, the message were sent to my personal account, there is a substantial risk that the message will not be noticed, as many messages of this sort may be received daily, making it virtually impossible for me to distinguish the relevant messages from the irrelevant ones. This implies that where orders are placed by an artificial agent, the provision on the correction of input errors can only be successful if a message is sent to my personal account only in those cases where an order is received that is atypical and therefore may need to be corrected. This suggests that Art II.–3:201 DCFR (Correction of input errors) does not function properly in the case of contracts concluded through an artificial agent.

16 Cf. von Bar/Clive (n 4) Comment D to Art II.–7:202 DCFR (Inaccuracy in communication may be treated as mistake).
17 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce [2000] OJ L178/1.
18 See Art II.–3:201(3) DCFR (Correction of input errors).
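The suggestion above, that a correction message should reach the consumer personally only for atypical orders, presupposes some way of operationalising ‘atypical’. One crude possibility (entirely my assumption; neither the DCFR nor the E-Commerce Directive prescribes any mechanism) is to flag an order whose quantity deviates strongly from the consumer’s ordering history:

```python
import statistics

def is_atypical(quantity: float, past_quantities: list, threshold: float = 3.0) -> bool:
    """Flag an order whose quantity deviates strongly from the consumer's history.

    Uses a simple z-score against past order quantities; an order more than
    `threshold` standard deviations from the historical mean is flagged.
    """
    mean = statistics.mean(past_quantities)
    stdev = statistics.pstdev(past_quantities) or 1.0  # guard against zero spread
    return abs(quantity - mean) / stdev > threshold

past_milk_orders = [1, 2, 1, 1, 2, 1]
assert is_atypical(100, past_milk_orders)    # case 1: send a correction message
assert not is_atypical(2, past_milk_orders)  # routine order: no message needed
```

Whether such a filter would satisfy the ‘effective and accessible means’ standard is of course a legal question, not a technical one; the sketch only shows that distinguishing the 100-bottle order of case 1 from a routine order is technically trivial.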

Case #2: My refrigerator orders 300 grams of shrimps (which I hate) instead of 300 grams of lamb (which I love) due to a software error.

In this case the order of 300 grams of shrimps is nothing out of the ordinary, so there is no reason for the supermarket to suspect that the order was not in accordance with my wishes. If the supermarket confirms the order, a valid contract is concluded, as the inaccuracy of the message does not lead to a ground for invalidity.19 This seems to be in accordance with what common sense dictates, as this problem – in the relationship between the consumer and the supermarket – originates entirely within the consumer’s domain. Moreover, there is also a substantive reason why the supermarket should be entitled to rely on the order being correct: the supermarket has no possibility to pass on the negative consequences if the contract were held not to have been concluded, whereas I may – at least in theory – resort to the producer of the refrigerator or the producer of the software on the basis of tort law or product liability law, or to the supplier (seller) of the software on the basis of contract law. The latter course of action brings us back to the proposal for a Digital Content Directive, as this would then qualify as a lack of conformity of the software under Art 6 DCD.20 The first route therefore seems to lead to the proper solution for case 2). However, the same argument regarding an input error could be made here, and the same objections to the current rules dealing with this situation may be brought forward.

19 See von Bar/Clive (n 4) Comment D to Article II.–7:202 DCFR (Inaccuracy in communication may be treated as mistake).
20 On the question of conformity see the contribution by A. Colombi Ciacchi/E. Van Schagen, ‘Conformity under the Draft Digital Content Directive: Regulatory Challenges and Gaps’, in this volume.

Case #3: My refrigerator orders two crates of beer on the basis of its software’s prediction based on my prior consumption pattern, but I would have preferred that this order had not been made.

The only relevant difference between cases 2) and 3) is that here the ‘error’ was caused by the imperfection of the refrigerator’s prediction. From the perspective of classical contract law theory, one could argue that since this contract was concluded on the basis of the refrigerator’s self-learning capabilities, this particular contract no longer can be attributed to my decision to plug in the refrigerator, to enable it to learn from previous experience and to allow it to conclude contracts for me. In any case, the relation between my initial decision to enable the refrigerator and the contract concluded on my behalf is rather thin here. Nevertheless, it could be argued that this contract ultimately results from earlier decisions made by me, and that this justifies that the submission of the order is still attributed to me. In this view, I would have to assume the risk of a wrong order as there is no reason for the supermarket to question the order, so I will be bound by the contract. In so far as the refrigerator’s software was flawed, I may have recourse against the seller of the refrigerator or the operator of the refrigerator’s software. Therefore, it may be argued that the first route also leads to the proper solution for case 3).

Case #4: My care robot orders medicine for me at a time when I am no longer compos mentis because I suffer from Alzheimer’s disease in an advanced stage.

This case is problematic under the first route. The Draft Common Frame of Reference does not deal with lack of capacity.21 Moreover, there is no Community legislation in force, and it is likely that the matter will be seen as too politically sensitive for any European legislation to be developed in the coming years. This implies that the validity of this contract would have to be ascertained under the applicable national contract law. Moreover, national law may distinguish between a situation where I am legally incapacitated by court order, in which case a legal guardian will have been appointed to take care of my affairs, and a situation where this has not (yet) happened. In the former situation, the contract is likely to be either void or voidable unless the legal guardian has approved of the contract concluded by the care robot; in the latter situation this is uncertain. Since from the perspective of the consumer (or patient) this contract should probably be considered valid, the first route does not seem to offer a proper solution for case 4).

21 See Art II.–7:101(2) DCFR (Scope).

2. Artificial agent as representative

The second route for holding the consumer bound to contracts concluded on her behalf by an artificial agent is to consider the artificial agent as a representative (i.e. as an agent in the legal sense). A preliminary question is whether the representative must be a ‘person’ and whether it must have an ‘intention’ to conclude the contract (on behalf of the consumer). This matter will be briefly discussed in section IV; for the sake of the argument, in this section I will assume that the artificial agent indeed may be seen as a representative in the legal sense and that the communication from the artificial agent to the supermarket or the supplier of a medicine is intended to legally bind the consumer to a contract with the supermarket or the supplier of a medicine. Contracts concluded by the artificial agent in the name of the consumer would then be treated as contracts concluded by the principal (i.e. the consumer on whose behalf the artificial agent acted) in so far as

a) the artificial agent was authorised by the principal to act on the principal’s behalf – which requires a decision by the principal allowing the artificial agent to act – or the third party was led to believe by the principal that such authorisation had taken place;

b) the artificial agent acted in the name of the principal – which requires the communication of information to the third party enabling the third party to identify the principal;

c) the artificial agent acted within the boundaries of the authority conferred on it by the principal, or in so far as the third party could reasonably and in good faith believe, and in fact believed, that the artificial agent did act within these boundaries.22

Let us see how the four cases would be solved under this approach, so that we may ascertain whether it leads to reasonable results.

22 See Arts II.–6:103 DCFR (Authorisation) and II.–6:105 DCFR (When representative’s act affects principal’s legal position).
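The three conditions above can be read as a conjunctive test, with condition c) satisfiable either by actual authority or by the third party’s reasonable, good-faith belief. The following is only an illustrative formalisation of that reading; the field names and the boolean simplification are mine, not the DCFR’s, and they deliberately flatten legal evaluations (such as ‘reasonable belief’) into given facts.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    authorised_by_principal: bool         # condition a): actual or apparent authorisation
    principal_identifiable: bool          # condition b): acted in the principal's name
    within_authority: bool                # condition c): within the conferred boundaries
    reasonable_belief_of_authority: bool  # c) alternative: third party's good-faith belief

def binds_principal(r: Representation) -> bool:
    """Illustrative reading of Arts II.-6:103 and II.-6:105 DCFR."""
    return (r.authorised_by_principal
            and r.principal_identifiable
            and (r.within_authority or r.reasonable_belief_of_authority))

# Case 1: general authorisation and identifiable principal, but 100 bottles of
# milk falls outside what the supermarket may reasonably believe was intended.
case1 = Representation(True, True, False, False)
# Case 3: two crates of beer - nothing puts the supermarket on notice.
case3 = Representation(True, True, False, True)
assert not binds_principal(case1)
assert binds_principal(case3)
```

The formalisation makes the structural difference from the first route visible: under representation, the 100-bottle order fails at condition c) and no contract arises, whereas under offer and acceptance a contract is concluded and must then be avoided for mistake.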

Case #1: My refrigerator orders 100 bottles of fresh milk where I had put only one such item in my virtual basket.

In this case, the refrigerator was not authorised to conclude this contract. However, similar to the first route, the mere fact that I had plugged in the refrigerator, had connected it to the Internet, and had not decommissioned the artificial agent enabling the refrigerator to act on my behalf may be interpreted as an implied authorisation under Art II.–6:103(3) DCFR (Authorisation) – at least in the sense of giving the supermarket reason to believe that I had in fact authorised my refrigerator to act on my behalf. In this sense, the refrigerator has obtained a general authorisation not restricted to the conclusion of a particular contract, so condition a) has been met.23 Condition b) has also been met, since I had given the supermarket the means to establish my identity as a principal. However, the refrigerator exceeded the limits of its (assumed) authority. This implies that the contract does not bind me, unless the supermarket may reasonably believe that the contract concluded is in accordance with my intention as the party on whose behalf the refrigerator acts. This is not the case here, since an order of 100 bottles of fresh milk is not a normal order for a consumer, hence no valid contract is concluded. This implies that the outcome of this case would seem to be different than under the first route. Yet, a similar problem regarding input errors may arise, as also here the supermarket must offer a possibility to correct input errors. 
Moreover, since representation does not lead to a legal relation between the representative and the supermarket, but to a contract between the supermarket and me directly, and therefore to a consumer contract,24 the rules on correcting input errors are mandatory also in this approach.25 This means that solving this case through representation leads to the same problems as under the first route, as the rules on input errors under the DCFR would lead to the conclusion of a contract, albeit that the consumer may have means available to undo the consequences thereof. This suggests that the rule on input errors in the DCFR (and the E-Commerce Directive) is actually not the proper one – at least not for machine-to-machine contracting.26

23 In this respect one should recognise that the situation of a falsus procurator, regulated in Art II.–6:107(2) DCFR (Person purporting to act as representative but not having authority), does not apply to contracts concluded by an artificial agent, as the artificial agent is treated as having obtained general authority.
24 See Art II.–6:105 DCFR (When representative’s act affects principal’s legal position).
25 See Art 11(2) E-Commerce Directive and Art II.–3:201(3) DCFR (Correction of input errors).

26 See in this sense also Wendehorst (n 3) slide 7.

Case #2: My refrigerator orders 300 grams of shrimps (which I hate) instead of 300 grams of lamb (which I love) due to a software error.

Here, the lack of authority is not apparent to the supermarket, which implies that I am deemed to have authorised the refrigerator to act as my representative, and therefore I am bound to the contract. The outcome is therefore the same as under the first route, but again the same uncertainty as to the rules on input errors applies.

Case #3: My refrigerator orders two crates of beer on the basis of its software’s prediction based on my prior consumption pattern, but I would have preferred that this order had not been made.

In this case, the refrigerator makes a decision of its own on the basis of the instructions it has modified or self-developed. Similar to the previous two cases, I have not given an explicit instruction to the refrigerator to conclude the contract, but the refrigerator may be seen as having been granted a 'general authorisation'. Since the order of two crates of beer is nothing out of the ordinary, there is no reason why the supermarket could not reasonably believe that the refrigerator was authorised to conclude this contract. I am therefore bound by it. Again the outcome is the same as under the first route.
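The kind of prediction at work in case 3 – reordering on the basis of a prior consumption pattern – can be imagined as a simple stock-depletion estimate. The sketch below is a hypothetical illustration of the decision a 'general authorisation' would have to cover; the purchase log, the lead time, and the reorder rule are invented for the example.

```python
from datetime import date

def should_reorder(purchase_log, current_stock, lead_time_days=2):
    """purchase_log: list of (date, quantity) tuples, oldest first.
    Estimates the consumer's daily consumption rate from past purchases
    and reorders when the remaining stock would not cover the delivery
    lead time."""
    first, last = purchase_log[0][0], purchase_log[-1][0]
    days = max((last - first).days, 1)
    # Units from all but the most recent purchase are assumed consumed:
    consumed = sum(qty for _, qty in purchase_log[:-1])
    daily_rate = consumed / days
    days_left = current_stock / daily_rate if daily_rate else float("inf")
    return days_left <= lead_time_days

# Roughly one crate (24 bottles) of beer bought per week:
log = [(date(2016, 9, 1), 24), (date(2016, 9, 8), 24), (date(2016, 9, 15), 24)]
print(should_reorder(log, current_stock=5))   # low stock -> reorder
print(should_reorder(log, current_stock=20))  # enough stock -> no order
```

The point of the sketch is only that such an order flows from the agent's own inference, not from any explicit instruction of the consumer – which is exactly why the scope of the general authorisation matters.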

Case #4: My care robot orders medicine for me at a time when I am no longer compos mentis because I suffer from Alzheimer’s disease in an advanced stage.

In the case where I am no longer able to express my intentions, or where I am legally incapacitated, it is clear that I can no longer authorise the care robot to act on my behalf. However, the law of representation expressly recognises that the authority of the representative need not derive from the principal herself, but may also be granted by law.27 Where the law, under certain conditions, allows for a legal guardian to be appointed to represent the legal interests of a consumer, it may also authorise care robots or other artificial agents to do so. Where the care robot acts within the boundaries of its authorisation, this leads to a contract legally binding me to the supplier of the medicine. Where the care robot exceeds its authority (e.g. by ordering too many products or other products than the required medicine), it would depend on the legislation regulating the appointment of a legal representative whether such contracts would be valid, void or voidable. Under the law of representation, and in accordance with common sense, the consumer would be bound to a contract in cases 2 and 3 and not in case 1, whereas case 1 would lead to a valid contract under the first route. Moreover, case 4 could be solved more easily than under the first route, as Community legislation would be much less sensitive and could be developed much more easily for the law of representation than for the law governing incapacity.

27 See Art II.–6:103(1) DCFR (Authorisation).

3. Intermediate conclusion

The discussion of the four cases under the different routes shows that cases 2 and 3 are solved in the same manner whether artificial agents are treated as tools or instruments or as representatives. Case 1 is solved differently, but in both approaches the rules on the correction of input errors would cause problems. Case 4 can only be solved through the application of national contract law, and Community legislation is neither available nor foreseeable due to the sensitive nature of rules on incapacity, but the building blocks for a solution through the operation of the law of representation seem to be already in place. The above does not lead to a firm conclusion as to whether the first or second approach should be preferred. Personally, I feel that the law of representation might produce better results than the rules under which artificial agents are treated as mere tools or instruments. However, the question whether or not artificial agents may be treated as 'persons' capable of expressing an 'intention to contract' has so far been left open. Let us turn to that matter now.

IV. Legal Personality and ‘Intention to Contract’ of Artificial Agents

If the approach is taken that the artificial agent should be treated as a representative (i.e. as an agent in the legal sense), two doctrinal objections must be tackled. The first objection stems from the assumption that only 'persons' can act as representatives. This assumption is even hidden in the definition of a representative under Art II.–6:102(1) DCFR (Definitions), which describes a representative as 'a person who has authority to affect directly the legal position of another person, the principal, in relation to a third party by acting on behalf of the principal' (emphasis added). The idea that the representative must be a 'person' seems to be so self-evident that the matter is not even raised in the Comments to Art II.–6:102(1) DCFR (Definitions). One way to overcome this objection is to recognise artificial agents as legal persons, or as 'electronic persons',28 i.e. as an additional category of legal subjects next to natural persons and legal persons. This approach could work well for artificial agents that are embedded in instruments or tools that may move around in both the physical and the online world and that combine some form of intelligence with some kind of autonomy, enabling them to learn from experience, modify their own instructions and develop new instructions to follow.29 Such instruments are commonly referred to as robots. Robots are usually described as autonomous machines able to perform human actions, and as such have a physical appearance, at least some capability of carrying out actions of their own (i.e. without human intervention), and some similarity to human beings.30 Palmerini et al. emphasise, however, that until robots are able to auto-replicate,

28 See the draft motion for a European Parliament Resolution on Civil Law Rules on Robotics under T. and under 31. sub f), included in the European Parliament's Legal Affairs Committee's Draft Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), draft report of 31 May 2016, PE582.443v01-00. 29 M. Laukyte, 'Software Agents as Boundary Objects' in M Palmirani et al. (eds), AI Approaches to the Complexity of Legal Systems Models and Ethical Challenges for Legal Systems, Legal Language and Legal Ontologies, Argumentation and Software Agents (Springer 2012) 209. 30 Cf. E. Palmerini et al., Guidelines on Regulating Robotics, report for the European Commission, 2014, 15, available at www.robolaw.eu (accessed 28 September 2016). Palmerini et al., however, themselves do not make use of a definition, but of a plurality of uses and applications, see 16.

their teleology will always be derived from human beings. This means that notwithstanding the possibilities offered by technological advancements in artificial learning, intelligence, consciousness, and sentience, the human being will always be ultimately responsible for the robot's design and use.31 In this respect, recognising robots as legal subjects would not be much different from recognising legal persons as such, as there is always a human being who is ultimately responsible for their actions. Yet, there is another difference between the two. The recognition of legal persons as a separate category of legal subjects is intended to enable, for instance, companies and associations to act separately from their founders, and to enable them to be the bearers of rights and obligations themselves. The same could apply to robots, which could then be equipped with specific rights and obligations, including the obligation to make good any damage they may cause, with electronic personality applying to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.32 However, conferring legal personality on robots would not imply that robots (need to) have rights and obligations themselves, but would – at least for now – only serve the purposes of being able to hold them liable in case of damage caused by them and of enabling robots to act on behalf of the consumers and businesses that own them on the basis of the law of representation, as conferring legal personality on the robot would qualify the robot as a person that could act as a representative. 
The first purpose only makes sense if robots could also become the owners of assets, as otherwise the robot would not have property from which the victim of damage could be compensated.33 Moreover, current national contract law and tort law rules may provide sufficient relief for the victim to claim compensation from the owner of the robot, in particular in so far as the national legal systems accept strict liability instead of requiring fault. In addition, product liability rules may in some cases provide for an additional party that may be liable.34 This suggests that recognising robots as legal (or electronic) persons is not necessary in order to safeguard the interests of potential victims.

31 ibid, 17. 32 See the draft motion for a European Parliament Resolution on Civil Law Rules on Robotics under 31 sub f), included in the European Parliament's Legal Affairs Committee's Draft Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), draft report of 31 May 2016, PE582.443v01-00, 12. 33 See also H. Saripan et al., 'Are Robots Human? A Review of the Legal Personality Model' (2009) 34 World Applied Sciences Journal 824, 827. 34 The European Parliament's draft resolution also points in this direction; see the draft motion for a European Parliament Resolution on Civil Law Rules on Robotics under U.

The second purpose would leave unresolved how contracts could be concluded by artificial agents other than robots, such as Samsung's smart refrigerator, unless the recognition of the artificial agent as a legal entity were extended to include non-robots as well. If that is not the case, the question arises which artificial agents should receive recognition as a legal or electronic person. When is an artificial agent sufficiently 'artificially intelligent' to be the bearer of its own rights and obligations? Distinguishing between sufficiently and insufficiently artificially intelligent agents seems to be a slippery slope – just as much as it is difficult, if not impossible, to distinguish between legal persons that are worthy of protection similar to consumers and legal persons that should not receive such protection. In addition, there is also an argument against conferring legal personality on artificial agents – and all the more so if non-robots were also considered to be legal persons. To what exactly would we confer legal personality – the hardware, i.e. the physical body of the robot or the refrigerator, or rather the software that operates the hardware? And if the latter, how do we deal with the fact that the software may be placed on several sites and servers and maintained by different individuals?35 This suggests that the recognition of artificial agents, or of just robots, as legal persons does not appear to be necessary to achieve the second purpose either. As such, at this point in time,36 there does not seem to be sufficient reason to recognise artificial agents as legal persons.37 However, in order for artificial agents to act as representatives under the law of representation, the implied requirement that the representative be a person in the legal sense would have to be abandoned. 
This need not be problematic in so far as the owner of the artificial agent would ultimately be strictly liable for the artificial agent's actions, in the sense that the artificial agent is deemed to have acted within the boundaries of the authority awarded to it unless the third party had reason not to rely on the artificial agent's authority – as would be the situation in case 1, but also in the other cases if the consumer had previously informed the supermarket or the supplier of the medicine that she does not want a specific contract to be concluded on her behalf by the artificial agent. Moreover, abandoning the idea that the representative must be a person is not contrary to the idea of representation as such – in fact, it is rather returning to the origins of the law of representation in Roman law, where a slave could represent her owner without being a person in the legal sense of the word, the owner being strictly liable for the actions of the slave.38 A second objection to the application of the law of representation to the conclusion of contracts by artificial agents on behalf of a consumer is whether artificial agents are capable of entering into contracts at all, as it is doubtful whether artificial agents can have an intention or will of their own.39 In line with this objection, Schramm argues that since artificial agents do not possess legal capacity, it is difficult to identify, from a contract law perspective, a concept which would allow attributing the legal effects of an artificial agent's action to a human.40 This argument could lead to the conclusion that artificial agents are by their very nature incapable of acting as representatives in the legal sense. In my view, this argument is not tenable, on both practical and fundamental grounds. First, in cases 1 and 2, the refrigerator communicates the consumer's own wishes (albeit in a distorted way), so it could be argued that it suffices that the principal has an intention to contract. This is admittedly more difficult to uphold for cases 3 and 4, as in these cases there is no expressed intention of the consumer. In some cases an earlier framework contract may have been concluded,41 whereas in the case of incapacity a court's decision to appoint a legal guardian may replace the individual consumer's intention. But even apart from that, one should recognise that the requirement that the representative itself have an intention is not posed when it comes to legal persons.

35 Cf. Allen/Widdison (n 15) 42. 36 This may change in the future, see Laukyte (n 29) 209–210. 37 In this sense also Saripan et al. (n 33) 828.
Legal persons as such are but a construction of law, and they are incapable of having an intention to act themselves: when a legal person enters into a contract, the responsible natural persons may have an intention for the legal person to be bound by the contract, but attributing that intention to the legal person is a legal fiction itself. Nobody questions the possibility for legal persons to act as representatives in order to bind natural persons or other legal persons to a contract concluded on their behalf.

38 See the introductory remarks made in the contribution by R. Schulze/D. Staudenmayer/S. Lohsse, 'Contracts for the Supply of Digital Content: Regulatory Challenges and Gaps – An Introduction' in this volume. 39 See P. Schramm, 'Contract Law and Liability in the field of Robotics', presentation held before the Legal Affairs Committee of the European Parliament on 21 April 2016, slides 5–7, available at www.europarl.europa.eu/committees/en/juri/events-hearings.html?id=20160421CHE00181 (accessed 10 November 2016). 40 ibid, slide 8. 41 See in this respect Wendehorst (n 3) slide 7.

This implies that opening up the possibility for artificial agents to act as representatives in the legal sense requires a political or policy decision and does not constitute a dogmatic impossibility as such. If the law changes, legal theory will accommodate this approach. Summarising the above: if a legal framework were adopted in order for artificial agents to conclude contracts on behalf of consumers, a slight amendment of the law of representation would be necessary, allowing artificial agents to conclude contracts on behalf of consumers even though the artificial agents are not persons themselves and even though they most likely do not have an intention or a will to conclude contracts in the ordinary sense. The possibility for artificial agents to act as representatives would therefore be the consequence of a legal construction expanding the law of representation.

V. Other Amendments to Current Legislation

If the law of representation is amended as suggested, the question remains whether national contract and tort law and the acquis communautaire also need to be amended. In this respect, recognising artificial agents as representatives without conferring legal personality on them has the advantage that in many cases the actions of the artificial agent may be attributed to the owner of the artificial agent. However, as indicated above, it may be necessary to introduce or maintain strict liability rules in contract and tort law in order for the owners of the artificial agent to be held liable for damage caused by artificial agents to third parties, in particular in the case of self-learning and self-thinking artificial agents, for whose actions the producer of the artificial agent is likely not to be liable.42 As regards European Union legislation, the concept of pre-contractual information obligations and the right to correct input errors may need to be reconsidered. The law of representation would treat any pre-contractual information provided to the artificial agent as having been provided to the consumer. Whereas this would not be problematic in most cases, it could be different as regards the right of withdrawal, as for this right to be effective the consumer needs to be aware of it herself. It could be argued that for this reason, the information must be communicated to the consumer herself instead of to the artificial agent acting on her behalf. The current provisions on the correction of input errors could be amended in a similar fashion. As indicated in section III, in the case where orders are placed by an artificial agent, the provision on the correction of input errors can only be effective if a message is sent to my personal account, and only in those cases where an order is received which is atypical and therefore may need to be corrected. 
It is suggested therefore that the provision on input errors in the E-Commerce Directive be amended in a number of ways. First, traders should only be able to rely on orders they can reasonably consider to be in line with the consumer's intentions. Second, where a consumer makes use of an artificial agent in concluding a contract, the trader should be required to offer the consumer the possibility to correct a potential input error herself, but only where the trader has reason to believe that the communicated order may not be in line with the consumer's intentions. This should then ensure that unlikely orders are not processed through an automated process but through an individualised discussion between the consumer and the trader.

42 See the draft motion for a European Parliament Resolution on Civil Law Rules on Robotics under Z.
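The amended rule suggested here amounts to a routing decision: plausible machine-generated orders proceed automatically, while atypical ones are held for individualised confirmation by the consumer. A minimal sketch, assuming a deliberately crude plausibility test (an order may not exceed twice the largest quantity previously ordered) – the function names and the threshold are my own illustrative assumptions:

```python
def is_typical(quantity, history):
    """Crude plausibility test: the order must not exceed twice the
    largest quantity the consumer has ever ordered of this product."""
    return bool(history) and quantity <= 2 * max(history)

def route_order(quantity, history):
    """Atypical orders are diverted from automated processing to an
    individualised confirmation step, as the amended rule would require."""
    if is_typical(quantity, history):
        return "processed automatically"
    return "held for consumer confirmation"

print(route_order(2, [1, 2, 3]))    # processed automatically
print(route_order(100, [1, 2, 3]))  # held for consumer confirmation
```

The legal rule would of course leave the choice of test to the trader; the sketch only shows that the proposed duty translates into a straightforward branch in the trader's order-processing system.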

VI. Concluding Remarks

In this paper I have argued that the use of artificial agents in the conclusion of contracts between consumers and traders need not lead to drastic amendments of current legislation. Whereas the rules on offer and acceptance will lead to the proper result in most cases, this approach may not be reconcilable with the emergence of self-learning and self-thinking artificial agents. In this respect, I prefer the application of the rules on representation over those on offer and acceptance, as this approach seems more in line with the needs of commercial practice. There does not seem to be a compelling argument to recognise artificial agents as legal or electronic persons, but the law of representation would need to be amended in order for artificial agents to act as representatives. In addition, contract law and tort law may need to be revised to introduce – where this is not already the case – strict liability of the owner of an artificial agent for damage caused by the actions of the artificial agent. Finally, European legislation pertaining to the correction of input errors currently does not seem appropriate to deal with input errors in automated messages sent by an artificial agent on behalf of a consumer. In this respect, an amendment of the E-Commerce Directive seems needed. Similarly, the Consumer Rights Directive43 and other European directives awarding the consumer a right of withdrawal should be amended in such a way that the information pertaining to the right of withdrawal is sent to the consumer's personal account instead of to the account used by the artificial agent acting on behalf of the consumer.

43 Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights [2011] OJ L304/64.
