Michigan Technology Law Review
Volume 27, Issue 2, Article 2 (2021)

How Can I Tell if My Algorithm Was Reasonable?
Karni A. Chagal-Feferkorn
University of Haifa Faculty of Law; University of Ottawa Centre for Law, Technology and Society

Recommended Citation: Karni A. Chagal-Feferkorn, How Can I Tell if My Algorithm Was Reasonable?, 27 MICH. TECH. L. REV. 213 (2021). Available at: https://repository.law.umich.edu/mtlr/vol27/iss2/2.
HOW CAN I TELL IF MY ALGORITHM WAS REASONABLE?

Karni A. Chagal-Feferkorn*

Abstract

Self-learning algorithms are gradually dominating more and more aspects of our lives. They do so by performing tasks and reaching decisions that were once reserved exclusively for human beings. Moreover, in certain contexts their decision-making performance has been shown to be superior to that of humans. However, as superior as they may be, self-learning algorithms (also referred to as artificial intelligence (AI) systems, "smart robots," or "autonomous machines") can still cause damage. When determining the liability of a human tortfeasor causing damage, the applicable legal framework is generally that of negligence.