
Concordia International School Shanghai Model United Nations ◆ Eleventh Annual Session

Forum: UN Commission on Science and Technology for Development

Issue: Measures to ensure safety and transparency in the development and use of artificial intelligence

Student Officer: Kitty Tseng

Position: Deputy Chair

Introduction

From self-driving cars to robotic violinists, artificial intelligence (AI) is progressing at an unprecedented rate. Recent years have seen significant breakthroughs in the field, including improved facial and speech recognition, data analysis, self-learning algorithms, and autonomous robotics. Indeed, AI is already more integrated into our lives than many realize, whether in recommended friends on Instagram, spam filters in Gmail, or predictive searches on Google.

The world increasingly relies on AI to make decisions about our lives, choices, and interactions in the real world. However, a growing number of critics and experts have expressed concerns regarding the risks AI poses to long-term safety and data security. Some worry that malfunction and incompetence are bound to plague AI machines; others worry that a lack of transparency and insufficient data protection erode our privacy and our control over our data. Self-improving malware, harnessing the power of artificial intelligence, also poses a threat to the real world. With AI being inherently hard to monitor and regulate, how does one ensure safety and transparency in the development and use of artificial intelligence?

Definition of Key Terms

Artificial Intelligence (AI)

Artificial intelligence is an ever-evolving term describing the capacity of machines to learn, reason, and self-correct. Examples include apps and programs, autonomous vehicles, and humanoid robots.

AI Takeover

During an AI takeover, artificial intelligence dominates human intelligence and strips the human species of its control over the real world. Potential forms of an AI takeover include the automation of the workforce and robot uprisings.

Ransomware

Also known as cryptoviral extortion, ransomware is a type of malware that encrypts a victim’s files and demands an online ransom payment to restore access.

History

Development of Artificial Intelligence

The roots of artificial intelligence trace back to the Second World War, when British computer scientist Alan Turing developed the Bombe machine to decipher German secret codes. A decade later, the Ferranti Mark 1 ran some of the world’s first artificial intelligence programs, one of them capable of beating an amateur at checkers. In 1972, Ichiro Kato of Waseda University developed the first full-scale AI robot, WABOT-1. Since then, the globe has seen an increase in research on algorithms for acquiring information and reaching conclusions.

Caption #1: The world’s first full-scale AI robot, WABOT-1 (left), and its musical successor, WABOT-2 (right).

Artificial Intelligence Today

In the past half-century, three factors have contributed to the growth of the artificial intelligence field: advances in mathematical tools, a deeper understanding of machine and deep learning, and the explosion of available data. These are complemented by a so-called “artificial intelligence arms race”, a multilateral competition for the best AI technology.

Today, intelligent machine systems increase our accuracy and efficiency: AI is used to design art, conduct research, navigate transportation, and provide translations. It is notably used for assessing health-related data, detecting fraud and impostors, and screening for medical defects and other conditions. However, the following incidents illustrate some of the many ways AI can fail or be misused:


Caption #2: Funding in AI worldwide as of March 2019.

AI Image Misrecognition

In 2015, Google debuted a new image recognition feature designed to recognize people, objects, and places with AI and neural network technology. However, one user posted screenshots of Google labeling a photo of two Black people as “gorillas”.

AI Chatbot and Racism

In 2016, Tay, an artificial intelligence chatbot released by the Microsoft Corporation, tweeted about racism, the Holocaust, genocide, and more within 24 hours of its launch. Tay was designed to learn language patterns through “casual and playful conversation” with its users. It is believed that Tay’s tweets were parroting offensive statements made by those users.

Boston Dynamics Robot Blooper

In 2017, Boston Dynamics debuted its humanoid robot Atlas at the Congress of Future Science and Technology Leaders in a demo alongside another robot. Atlas is trained to leap over logs and bound up staircases without breaking pace; after completing its demo, however, the robot tripped and fell off the stage.

Autonomous Car Crash

In 2018, an Uber autonomous SUV in self-driving mode struck and killed pedestrian Elaine Herzberg. Uber discovered that its self-driving software had not activated the automatic emergency braking system even after detecting the pedestrian.


Key Issues

Machine malfunctions

In spite of all recent advances in artificial intelligence, many machines still fail through malfunction or incompetence. Real-world problems hold a degree of complexity that programmers struggle to cover entirely in their code. Sensors, too, may fail to pick up a call to action, and an algorithm may fail to comprehend an unanticipated situation.

Lack of transparency

The engines of AI systems are interconnected networks of neural nodes. As efficient as these systems are, they only allow us to see the input and the output; the machines cannot indicate the underlying reasoning or supporting data behind a decision. This becomes a prominent risk when we rely on machines to make military or medical decisions but cannot trace back through the data to verify those decisions.

Insufficient data protection

The more data an AI system consumes, the better its algorithms become at identifying and responding to patterns. These algorithms can only function when fed massive amounts of data, which help them learn correctly and predict the next step. This inevitably entails the processing of customer and private data. Yet when it comes to improving these machines, no model is ever accurate enough. Experts therefore worry about the implications of this appetite for data, as systems may end up processing more data than intended.

Artificial superintelligence

As we delve deeper into the field of artificial intelligence, we come closer to creating machines that could surpass human performance in all domains. This presents the AI control problem: how are we to build a superintelligent agent that helps its creators without causing harm? If the trend continues, AI could potentially persuade humans to alter their behaviors or block its creators from interfering. As AI researcher Eliezer Yudkowsky explains, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

Threat of malware

Parallel to the development of artificial intelligence stands that of ransomware and viruses. Symantec estimates that in the near future, the mass distribution of malware and ransomware will take minutes, if not seconds, to complete. This, in turn, poses a threat to the operation of AI machines. If ransomware were ever to reach, for instance, the software controlling autonomous cars or self-operating medical equipment, the consequences could be inconceivable.

Major Parties Involved and Their Views

China

Today, China stands at the forefront of AI research and development. The nation’s AI research has increased notably in quality and output with the help of the government. In July 2017, China released its “New Generation Artificial Intelligence Development Plan”, investing more than USD 150 billion to secure its position as the leading AI power by 2030. Artificial intelligence is also noted to be a critical component of China’s Fourth Industrial Revolution, the AI Revolution, and of the AI arms race.

Russia

Russia has made AI development one of its national priorities. In 2018, Russia doubled its investment in artificial intelligence; in October 2019, President Putin approved the “National Strategy for the Development of Artificial Intelligence”, laying out a long-term agenda for advancing Russia’s standing in the AI field.

Singapore

Singapore is one of the few countries adopting a “human-centric”, ethical approach to the sustainable use of AI. In addition to its S$150 million investment, the nation’s AI governance framework prioritizes data management and stresses transparency and fairness.

United States of America

Recognizing the potential benefits of AI, the U.S. government has invested more than fifty billion dollars in AI start-ups and research; more than forty states across the United States now actively employ AI in marketing, finance, healthcare, and transportation. In 2019, Executive Order 13859 established the American AI Initiative, promoting AI to protect national interests, security, and values.

United Kingdom

The United Kingdom leads in the ethical use of AI. The nation’s Centre for Data Ethics and Innovation and its Office for Artificial Intelligence are two of the world’s first advisory bodies aiding the government, specializing in issues of AI governance and implementation.

Timeline of Relevant Resolutions, Treaties and Events

December 2015: Political consulting company Cambridge Analytica exploited data from Facebook to tamper with the 2016 US Presidential Election. It utilized algorithms capable of releasing posts to spread fake news and undermine political dissent.

March 2016: Microsoft chatbot Tay tweeted extremist, racist, and anti-Semitic comments on Twitter as a result of real-time user learning.

July 2018: IBM Watson Health’s deputy chief reported that the IBM-developed Watson system had recommended incorrect cancer treatments with critical and potentially fatal repercussions.

22 November 2018: A famous businesswoman was wrongly shamed for jaywalking by a Chinese facial-recognition system designed to catch jaywalkers, after her face on a public bus advertisement was caught on surveillance.

17 March 2019: Nearly three years after the first fatal autonomous vehicle accident, Jeremy Banner died in a crash while using Tesla’s Autopilot advanced driver assistance system.

Evaluation of Previous Attempts to Resolve the Issue

OECD Principles on Artificial Intelligence

The OECD Principles on AI outline principles for the responsible use of AI, including responsible disclosure and appropriate safeguards, and recommend that governments remain supportive of AI research and investment. Although non-binding, these principles represent the first global effort to address the ethical and practical repercussions of AI integration. Forty-two countries signed the accord, but many major actors, such as China, have yet to do so.

AAAI Presidential Panel on Long-Term AI Futures

In 2009, the Association for the Advancement of Artificial Intelligence assembled a panel of leading experts to examine "the value of formulating guidelines for guiding research and of creating policies that might constrain or bias the behaviors of autonomous and semi-autonomous systems so as to address concerns.” The panel discussed concerns surrounding AI, including the loss of human control over AI and the social changes that accompany competent AI technology. Although the panel formulated topics of caution and recommended solutions, it had no means of carrying those solutions through. The experts also ruled out the need to halt AI research.

Possible Solutions

With the inherent risks and consequences AI entails, government regulation may be a critical step toward ensuring the safety and transparency of the development and use of AI. Regulations may address the many controversial uses of AI, such as AI-enabled weaponry and cyberweapons or AI-supplemented medical facilities, as well as the protection of the private mass data consumed for AI research. However, one must note that regulations are slow-moving political instruments; when applied to evolving fields like AI, regulations may stifle innovation and deter its potential benefits. As such, regulations should seek a balance between restriction and indulgence.

Bibliography

“AI Policy - China.” Future of Life Institute, www.futureoflife.org/ai-policy-china/. Accessed 11 Nov. 2019.

Mind AI. “Lack of Transparency Could Be AI’s Fatal Flaw.” Medium, 29 Oct. 2018, www.medium.com/mind-ai/lack-of-transparency-could-be-ais-fatal-flaw-7c33b855928c. Accessed 10 Nov. 2019.

“Benefits & Risks of Artificial Intelligence.” Future of Life Institute, www.futureoflife.org/background/benefits-risks-of-artificial-intelligence/. Accessed 10 Nov. 2019.

“OECD Principles on Artificial Intelligence.” Organisation for Economic Co-operation and Development, www.oecd.org/going-digital/ai/principles/. Accessed 11 Nov. 2019.

AAAI Presidential Panel on Long-Term AI Futures, www.aaai.org/Organization/Panel/panel-note.pdf. Accessed 11 Nov. 2019.

“Should We Fear Artificial Superintelligence?” Interesting Engineering, www.interestingengineering.com/should-we-fear-artificial-superintelligence. Accessed 10 Nov. 2019.

“Why Uber’s Self-Driving Car Killed a Pedestrian.” The Economist, www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian. Accessed 10 Nov. 2019.
