Artificial Intelligence: An Analysis of Scientific and Societal Perception

A Research Paper submitted to the Department of Engineering and Society

Presented to the Faculty of the School of Engineering and Applied Science
University of Virginia • Charlottesville, Virginia

In Partial Fulfillment of the Requirements for the Degree Bachelor of Science, School of Engineering

Joseph Laux
Spring, 2020

On my honor as a University Student, I have neither given nor received unauthorized aid on this assignment as defined by the Honor Guidelines for Thesis-Related Assignments

Artificial Intelligence: An Analysis of Scientific and Societal Perception

Overview of Artificial Intelligence in Society

Is the idea of artificial intelligence (AI) taking over society as we know it a plausible reality? Currently, the world (particularly its developed countries) is at the forefront of a technological revolution involving artificial intelligence. The technology is, in many ways, in its infancy, and many people, scientists and non-scientists alike, hold conflicting visions of the future of AI. On one hand, it is extremely beneficial in automating tasks and providing efficiency; on the other, it presents ethical and safety concerns, primarily around the idea that a non-human entity could achieve an intellect beyond the human level through AI. The STS theory of co-production is used to frame this problem. In this instance, science and social order are being co-produced as AI emerges: the development of the technology is held in check by an array of outspoken fears regarding its potential negative effects. Through this theory, this paper analyzes the perception of AI in our society and how its implementation may affect life in the future.

Research Question and Methods Utilized

This paper focuses primarily on answering the following question: how does the varying level of perception of the current state of artificial intelligence shape societal and technological interactions? To answer this question, documentary research is conducted. In addition, an interview with Professor Lu Feng of the Computer Science department at the University of Virginia was completed. First, the documentary research involves examining resources that explore the topic further, including primary and secondary sources. This research was based primarily on search keywords including, but not limited to: “artificial intelligence in society”, “perception of artificial intelligence”, and “current state of artificial intelligence”. One specific example is a paper by Nate Soares and Benya Fallenstein addressing AI, which develops a “technical agenda that discusses three broad categories of research where we think foundational research today could make it easier in the future to develop super-intelligent systems that are reliably aligned with human interests” (Soares & Fallenstein, 2017). This paper illustrates experts in the field not only expressing concern for the future of the intersection of technology and human interests, but also proposing possible solutions and mitigations for the problems that may occur as superintelligent systems emerge. All the data and research collected from these sources is organized in this paper by topic, specifically in a manner that first lays a foundation of AI, then builds an understanding and application of AI, and concludes with a comparison of AI to other technologies. Following this, an interview with Professor Lu Feng was completed on February 19, 2020. Professor Feng provides personal insight and expertise, as she teaches an AI class at the University of Virginia and conducts her own research in the field. Sample interview questions include: “What is your opinion of the current state of AI?” and “Do you foresee a future where AI dangerously exhibits intelligence smarter than a human?” Ultimately, these questions and the perception Professor Feng possesses help shape the research as a whole while providing a stronger backbone for the presented arguments.

What is Artificial Intelligence?

AI presents itself as a double-edged sword, of sorts, in society today. On one hand, the emerging technology is practical in a variety of disciplines including, but not limited to: the military, health care, finance, and autonomous vehicles. On the other, scientists and civilians alike have begun to voice a variety of concerns about the future of the technology.


To fully understand the potential impacts of AI, it is important to first comprehend the topic itself. To put it simply, Jeremy Achin, CEO of DataRobot, defines AI as “a computer system that is able to perform tasks that ordinarily require human intelligence. These artificial intelligence systems are powered by machine learning” (Achin, 2020). This technology is fundamentally reshaping the infrastructure that society has adapted to. Since a computer is oftentimes able to complete tasks more quickly and reliably than humans, this more efficient automation of tasks has incentivized many businesses and corporations to rely more heavily on AI. In the long run, this means human workers are losing jobs.
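Achin’s definition above — a system performing a task that ordinarily requires human judgment, powered by learning from examples — can be made concrete with a small sketch. The following is purely illustrative and is not drawn from any source used in this paper; the classifier, the features, and the data are all invented for demonstration.

```python
# A minimal illustration of machine learning: a one-nearest-neighbor
# classifier "learns" from labeled examples and then automates a judgment
# a human would otherwise make. All data below is invented.
import math

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# Toy task: flag emails as spam (1) or not (0) using two hand-made
# features: (number of exclamation marks, number of dollar signs).
training_data = [
    ((0, 0), 0), ((1, 0), 0),   # ordinary emails
    ((7, 3), 1), ((9, 5), 1),   # spammy emails
]

print(nearest_neighbor(training_data, (8, 4)))  # → 1 (looks like spam)
print(nearest_neighbor(training_data, (0, 1)))  # → 0
```

Real systems replace the two hand-made features with thousands of learned ones, but the principle — generalizing from labeled examples rather than following hand-written rules — is the same.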

The Associated Press published an article on MarketWatch indicating that “Over 30 million U.S. workers will lose their jobs because of AI” (AP, 2019). The article cites Americans who “hold jobs with ‘high exposure’ to automation – meaning at least 70 percent of their tasks could soon be replaced by machines using current technology” as being at the highest risk. This includes cooks, waiters, and others in food services, as well as truck drivers and clerical office workers.

The general consensus on the timeline of such events is somewhere in the next 2 to 20 years – ultimately implying that it is not a question of whether it will happen, but when. With this said, it is important for society to understand exactly what AI is and how it may affect areas of the world before the science advances to such a state. This way, adaptations can be made without necessarily resorting to job displacement, and the benefits of the technology can be utilized concurrently.

Secondly, the general idea of AI or computers becoming “smarter” than human intelligence is understandably concerning for many people. This stigma is primarily due to portrayals in popular culture and the overall fear of the unknown that such a situation presents for the future. A stigma is slowly developing in society suggesting the possibility of an AI takeover. In 2017, The Independent published an article titled “Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language” (Griffin, 2017). The title alone captures the attention of any reader. A large and influential software company developing robots to the point that they are no longer controllable? In the eyes of the article’s audience, an event such as this could be the precursor to a much more serious issue, where two AI entities do more than simply talk to each other in a language humans cannot understand. This paper ultimately explores options for mitigating problems like this, while addressing the fact that there is a varying level of perception of the technology and its capabilities.

Introduction of the Co-Production Framework

The topic of AI perception in society is viewed and analyzed through the STS perspective of co-production. Sheila Jasanoff defines co-production as follows: “Increasingly, the realities of human experience emerge as the joint achievements of scientific, technical, and social enterprise; science and society, in a word, are co-produced, each underwriting the other’s existence” (Jasanoff, 2006). In this specific case, the developing AI technologies are held in check by outspoken stakeholders. These may include lawmakers, individuals whose careers may be affected by the introduction of new AI technology in society, and people who are uncomfortable with the prospect of AI takeover. The framework of co-production sets up the problem appropriately, but it comes with its own criticisms and drawbacks. In particular, co-production on its own is a broad framework and generally is not powerful enough to fully describe any sociotechnical phenomenon, including AI implementation and perception in society.


With this said, analyzing other successful uses of co-production to frame a problem can assist with this issue. For example, the Weather and Climate Information Services for Africa (WISER) manual uses co-production to “bring together the producers of weather and climate information with those who use the information to make decisions, often using intermediaries to help connect these actors, in order to solve a problem where weather and climate information is relevant” (Carter et al., 2019). Ultimately, WISER was able to successfully use co-production to frame this problem, and some key takeaways and benefits from this approach include: bringing people together to create synergies and opportunities for resource sharing and creative thinking; ensuring a wider reach and impact through multiple communication channels; and engaging intermediaries and users while improving the tailoring of communication to specific audiences. While this is tailored particularly to weather and climate in Africa and the direct stakeholders involved, this methodology and specific implementation of co-production can be redirected to the AI example. The scientists studying the weather and climate of Africa are analogous to the scientists developing AI applications; the people who use weather forecasts and information to make decisions are analogous to the people who are affected by AI and its implementation. Ultimately, through its similar use of co-production, this example assists the analysis of the question of AI’s growth in society.

Answering the Research Question and Applying the Framework to AI

The varying level of perception of AI found in society has yielded an outline of results that serve to answer the research question. Society must work together to co-produce the final AI technology that is implemented into the world. To reiterate, the varying levels of perception of how AI is implemented in society stem from both fear of the unknown of how AI will impact our future and the potential practicality of the technology’s applications. Overall, the general public’s uncertainty and wariness about the emerging state of AI creates a roadblock for the progress that the more optimistic technological community (AI researchers) hopes to achieve. While the AI research community is pushing for accelerated development of its technology, a large portion of other individuals are uncomfortable with this, despite its potential positive impacts in making society more autonomous. Ultimately, in order to proceed successfully into a future with AI, the need to work together, within a co-production framework and mindset, to create a safety-minded approach and get on the same page is paramount. This must be done while distinguishing bias from the reality of the current state of the technology. Only then can it truly advance in a practical manner that maximizes benefits and alleviates concerns of a technological AI takeover of the human race as it is known today.

In order to scope the research question properly, it is important to understand the current public perception of AI. Understanding why the public has a certain perception in the first place creates a foundation for determining how the general public and the scientific community can interact in the future. A research paper titled “Long-Term Trends in the Public Perception of Artificial Intelligence” (Fast & Horvitz, 2016) studies the development of the societal perception of AI in both pessimistic and optimistic lights. The paper utilizes New York Times data, as well as Reddit data, to support the claims being made.

The New York Times data analyzed the quantity of articles that mention AI and specific AI keywords over time. One specific outcome of this study was the parallel between the increase in frequency of the keywords “loss of control” and “AI in fiction” over the past 30 years. Each of these studies outlined a strikingly similar increase in the number of articles containing these keywords. This suggests that AI in popular culture and science fiction is a key contributor to the negative stigma and pessimism surrounding AI. The Reddit study solely analyzed AI-related comments that mention the loss of control of AI. The outcome suggests that fear of losing control of AI and AI-related systems in the future has been increasing in recent years. These perceptions are attributed to the “general public” side of the co-production framework, outlining how people in general hold differing opinions from the scientists who develop the technology. Overall, the results of the research indicate an overall increase in discussion of AI over the last 10 years, with a majority of the discussion being optimistic. With this said, the fear of losing control of AI has simultaneously increased in recent years.

Following a foundation of the history and trends of the public perception of AI comes a need to understand it and apply it. An assortment of studies compiled in a Medium article from GoodAI titled “Understanding the public perception of AI” (GoodAI, 2019) begins the discussion. This article reinforces the idea that the general public needs to “accept the use of any technology” before it can flourish and be successfully implemented. The article additionally compiles data that analyzes public perceptions versus reality in pre-existing cases.

Specifically, a survey asked 26,489 people across 28 different countries to name what they thought was the most pressing global issue contributing to death. The number one answer was terrorism, even though terrorism accounts for only 0.06% of deaths globally, while health-related diseases account for over 50% of all deaths worldwide on an annual basis. This example outlines the bias present within the general public as a whole and how a mistaken perception can create friction that inhibits progress. The parallel to the development of AI is that it is paramount to limit the bias associated with AI in the public’s perception in order to practically advance the technology in parallel with researchers and scientists through the co-production lens.

Any new emerging technology can spark both excitement and concern in society, and AI is no different. Previous technological advancements can be viewed in a similar light to how AI is emerging, shedding some light on how the varying level of perception of its emergence can be tackled. The Royal Society completed a project, “Portrayals and perceptions of AI and why they matter” (Portrayals and perceptions of AI and why they matter, 2018). The audience of this project is described as the “English-speaking west, with a particular focus on the UK.” In this research, the authors completed a case study on nuclear power and how it was implemented in society. This case study is incredibly useful to this paper’s AI research because the emergence of the two technologies is similar. Nuclear power presents society with a practical use (generating electricity) while simultaneously raising serious concerns (nuclear meltdowns). This sociotechnical phenomenon exhibits a framework similar to the co-production seen with the emergence of AI, in that the technology and the people who voice concerns against it work together to ultimately produce the final product. Obviously, the implementation of nuclear power differs from that of AI, but the Royal Society’s project cites a few lessons that AI could learn from the nuclear perspective. One key lesson is that “Narratives of extreme fear can have potentially beneficial outcomes, for instance in ensuring safety concerns are considered at an early stage in research, regulation and implementation of a technology.” In the case of nuclear power, the Royal Society argues “both the ‘mushroom cloud’ image and the invisible power of radiation contributed to the new dystopian visions about post-nuclear futures, and the narratives about the safety of such technologies.” While the dystopian “image” for AI is less concrete and based more on science fiction than on past disasters, the co-production result of alleviating safety concerns in the early stages of research is paramount to the future of AI.

Outside of documentary research compiled from previous studies on AI and its perception, an interview was conducted with University of Virginia AI professor and research scientist Lu Feng (Feng, 2020). In the interview, a variety of questions were asked regarding Professor Feng’s opinion on AI and how the perception of AI shapes technological development. As a research professional in the field, Professor Feng is quite optimistic about the future of AI and how it will impact society. She cites a large incoming number of PhD students in AI and machine learning (ML), a plentiful number of new job opportunities, and a high demand for AI work in the market as reasons the future of the technology is bright.

Secondly, based on her research and experience, she says AI is not necessarily a new hindrance that will negatively change society as it is known today. While AI is expected to restructure the day-to-day lives of many people, and may even put some out of jobs, the AI revolution is comparable to any other technological or industrial revolution. For example, when railroads were first built in the 1800s, they may have put horse-drawn carriage businesses in jeopardy. Ultimately, however, society has advanced for the greater good since that technological revolution. Professor Feng reiterated that the same can be said about AI: in the long run, the good will outweigh the bad.

Overall, Professor Feng’s comments serve as a key contribution to the co-production argument presented in this paper in that they reiterate the practicality of the technology while dispelling fears of how society may respond to the revolution as a whole.

Her experience in the field reinforces optimism about the practicality of AI in the world and about the future of the world. In the end, in order to progress in a practical manner, it is vital to increase the overall general understanding of what AI technology is attempting to accomplish, as this not only dispels fear of the unknown, but also allows for a more seamless transition into the new society that results from the new technological revolution.

The research conducted for this paper, while insightful for answering the research question, is somewhat limited. Most of the data was obtained in a small window of time (a few months) spanning the end of the fall 2019 semester and the beginning of the spring 2020 semester. More interviews, particularly with people who may fear that their jobs will be overtaken by AI, would have nicely supplemented the message of the paper while complementing the Lu Feng interview. Multiple interviews with presumably conflicting viewpoints (optimistic vs. pessimistic) would further reinforce the co-production framework and the current state of how the technological development of AI depends on both what researchers and scientists create and the people who are affected by those creations.

Finally, to expand on this research further, it would be interesting to study how successfully the implementation of AI technology in society affects daily life. This research could be conducted from a variety of perspectives, including those of AI researchers, sociologists, and industrial companies currently transitioning human roles to AI roles. For example, many fast food restaurants are beginning to introduce automated kiosks that take a customer’s order in place of a human worker. This is not necessarily a great example of “AI”, but it is nevertheless a technology replacing human work in a similar manner. A study on the impacts and perceived success of such technology would be an interesting continuation of this work. Most of the research conducted for this paper was based on previous studies and interviews with professionals in the field, but a project involving a survey of how well AI is being implemented in society would be crucial in determining whether the current path of the technology is the right one.

Conclusion

The varying levels of perception of the current state of artificial intelligence shape technological and societal interactions in that people and opinions from both of these communities work together to “co-produce” a safe, stable, and practical iteration of AI technology. To maximize the success of integrating a new technology into society, a certain level of understanding of the technology is necessary. The research conducted in this paper signifies the pressing need to ensure that society is well informed and on the same page going into the new AI technological revolution. Once this occurs, the technology will evolve into a useful state that people will utilize and appreciate.


References

Achin, J. (2020). What Is Artificial Intelligence | Artificial Intelligence Wiki. DataRobot. Retrieved January 29, 2020, from https://www.datarobot.com/wiki/artificial-intelligence/

Carter, S., Steynor, A., Vincent, K., Visman, E., & Waagsaether, K. (2019). Co-production of African weather and climate services. Manual. Cape Town: Future Climate for Africa and Weather and Climate Information Services for Africa. https://futureclimateafrica.org/coproduction-manual

Griffin, A. (2017). Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language. The Independent. Retrieved October 3, 2019, from https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research--google-a7869706.html

Fast, E., & Horvitz, E. (2016). Long-Term Trends in the Public Perception of Artificial Intelligence. arXiv:1609.04904 [cs]. http://arxiv.org/abs/1609.04904

Feng, L. (2020, February 19). Personal interview.

GoodAI. (2019, February 18). Understanding the public perception of AI. Medium. https://medium.com/goodai-news/understanding-the-public-perception-of-ai-a14b0e6b6154

Jasanoff, S. (2006). Ordering knowledge, ordering society. In States of Knowledge: The Co-production of Science and the Social Order.

Associated Press. (2019). Over 30 million U.S. workers will lose their jobs because of AI. MarketWatch. Retrieved January 29, 2020, from https://www.marketwatch.com/story/ai-is-set-to-replace-36-million-us-workers-2019-01-24

Soares, N., & Fallenstein, B. (2017). Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda. In V. Callaghan, J. Miller, R. Yampolskiy, & S. Armstrong (Eds.), The Technological Singularity (pp. 103–125). https://doi.org/10.1007/978-3-662-54033-6_5

Portrayals and perceptions of AI and why they matter. (2018). The Royal Society.


Appendix

Interview Questions:

- What is your opinion of the current state of AI (worried about the future, or optimistic)?

- Do you foresee a future where AI dangerously exhibits intelligence smarter than a human?

- Is there a broader concern with how the general public views AI (based on science fiction, pop culture, etc.) versus how the scientists who are developing new AI view it?

- If so, what can be done to better mitigate this discrepancy?

- Is it reasonable to believe that AI will restructure society as we know it in the future?
