
University of Hawaiʻi at Mānoa

BlackBox.Academy: An Educational Website, from Racial Bias

and Digital Panopticism to the Technological Open

Mari Martinez

Performance Studies, MA

Dr. Markus Wessendorf (Committee Chair)

Dr. Jonathan Goldberg-Hiller

Dr. Jason Leigh

May 4, 2021


Contents

Abstract

What is the Black Box?

Racial Bias

The Algorithm

The Panopticon

Mis/Disinformation

The Open

ABSTRACT

BlackBox.Academy is an educational website which helps people of any technological background, minimal or advanced, learn important topics in technology: racial bias and the disparities caused by algorithms, digital panopticism, mis- and disinformation, and even ontological questions of technology. The goal is to educate a wide variety of people from multiple backgrounds by breaking down concepts without the use of technological jargon and academic speak. This is done with embedded video clips, hyperlinks, and concepts that gradually build on one another. The website format allows for real-time changes and up-to-date information. This PDF version is the written part of a more elaborate, complex, and interactive website that can be found at https://blackbox.academy.

What is the Black Box?

Technology is this monolithic word that encompasses everything from pencils to medicine, and from toaster ovens to artificial intelligence. So, what do I mean when I say technology? And what's all this talk about a Black Box? The technology that I'm talking about is modern technology, the everyday stuff that we use – computers, smartphones, software, the things we have invited into our homes to make our lives easier. I will speak about artificial intelligence and algorithms, but I'll let you know when I do. The black box metaphor started way back in the days of cybernetics and refers to “a system we can only observe the inputs and outputs, but not the internal workings” (Card).

So, what's this Black Box all about? Think about technology having a background and a foreground. The foreground is what you interact with: the app you click on, the screen that you see, that sort of thing. The background is all the stuff that's working behind the scenes to make that app work, that screen appear, your computer run. It's all the sorcery and wizardry that most people don't understand (or care to) that makes the bleeps, the bloops, and the lights happen, and that keeps the technology you're using for work running. The Black Box is the background.
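To make the metaphor concrete, here is a minimal sketch in Python (my own toy illustration, not from the cybernetics literature): a function whose internals we pretend are hidden, so that all we can do is observe inputs and outputs.

```python
# A toy "black box": imagine the body of this function is hidden from us
# (compiled, proprietary, or running on someone else's server).
def mystery_system(x: float) -> float:
    return 3 * x + 1  # in real life we would never get to read this line

# From the outside, all we can do is observe input/output pairs:
for x in [0, 1, 2, 10]:
    print(f"input={x} -> output={mystery_system(x)}")

# We might guess the rule from these observations, but we can never be
# certain what is actually inside the box.
```

The point of the metaphor is exactly this asymmetry: the foreground gives us inputs and outputs, while the background stays concealed.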

When I talk about it, if necessary, I will talk about it in those terms: the foreground (the stuff you interact with) and the background, or the black box (the stuff you don't see that makes it work; everything inside the box). This text looks at the black box through a performative lens. What does that mean? Richard Schechner sums it up best in his book

Performance Studies: An Introduction:

“J. L. Austin notes, "To say something is to do something." In uttering certain sentences people perform acts. Promises, bets, curses, contracts, and judgments do not describe or represent actions: they are actions. Performatives are an integral part of "real life." As many have found out too late, even if the heart says "no," once the tongue says "yes" the performative binds.”1

The same is true with technology; but how does what we say with our technology, what we program our technology to say and do, and what our technology tells others about us, influence and impact our world? It is one thing to think about these things in the foreground of technology, who we text, what we tweet, etc. – but what about the background? How does the information we input into the technological black box impact these actions? How does “to say something is to do something” impact us if we can’t see who or what is uttering the words that influence our actions?

1 Richard Schechner. Performance Studies: An Introduction. 4th ed., Routledge, 2020.

RACIAL BIAS

Why is this important? Why does it matter if we look inside the black box? You may be asking yourself: why does it matter how these things work? Why does what's inside of them matter? Do I really need to know how my TV works in order to watch it? Does it really matter what's inside my smartphone, or who programmed TikTok or the other apps I use, as long as I can access them?

Many of us have used public restrooms. Many of us have used public restrooms with automatic sensors on the sinks, the ones where you put your hands in front of the light sensors and it activates the water to wash your hands. My partner and I have two completely different experiences when we encounter these, and it is no accident. I also have different experiences depending on the time of year. As a Mexican woman, if it is the middle of summer, and my skin is much darker, it is almost impossible for me to activate one. I ALWAYS have to use the palm of my hands. My partner, who is white, has little difficulty activating the sensors.

I know what you're thinking: there's no way that the sink I'm trying to use has any sort of racial bias. This sink can't possibly be racist; it's an inanimate object, a harmless piece of technology, dare I say, neutral. But is it? In 2015, this idea made headlines when a viral video made its rounds on YouTube after two friends at DragonCon were in a Marriott hotel bathroom (Plenke). One, who was white, had no problem making the light sensor work and triggering the faucet; the other, a black man, could not get the sensor to activate. The video was titled "Whites Only," a throwback to Jim Crow segregation, and it raised the question of whether or not we need a closer look at who is behind, and what is inside, our technology.

“Whites Only?” Post by TeejMaximus September 2, 2015

Retrieved from YouTube in December 2020

This may seem frivolous, it's just a soap dispenser, but it becomes even more important when these technologies are unable to recognize people. No longer a thing of the future, major cities in the United States and countries all over the world use biometrics, from tracking citizens on city streets to law enforcement.

However, these technologies have been shown, time and again, not to work reliably: they have high inaccuracy rates for people of color and women. Even so, this technology has been massively rolled out worldwide, in countries like the United States.

Facial Recognition

Biometrics: These are physical characteristics like fingerprints, facial recognition, voice, and eye or retinal recognition that can be used to identify who you are. They can be used to unlock your phone, to access bank accounts, open doors, even identify you to law enforcement (Gillis).

Facial recognition is being widely rolled out in major cities across the world with little oversight. Many studies have shown that facial recognition technology is not only biased, but that in many circumstances it inaccurately identifies people of color, especially women (Al Jazeera, Rae).

(AlJazeeraEnglish, & Rae, A. (2019, July 3). Do Biometrics Protect or Compromise Our Security? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020)

Biometric Technology & Vulnerable Populations

In areas where people have minimal rights, like refugee camps, biometric technology is being used to collect data. How should this technology be used in these situations? Stephanie Hare discusses whether or not it is ethical to use it with groups of people who have limited rights or are not in a position to refuse its use and the collection of such intimate data.

Stephanie Hare: “Ethics of Biometrics & Vulnerable Populations”

AlJazeeraEnglish, & Rae, A. (2019, July 3). Do Biometrics Protect or Compromise Our Security? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020.

How is this possible?

How does technology show racial bias? These are just a few examples of the more obvious racial biases that are present in technology. We can begin to see how these technologies directly impact how we behave (or perform) in the real world, but who or what is dictating our behavior from inside the black box? How did this racial bias get there? Maybe you already know, or have some idea. When thinking about racial bias in technology, we need to think about who is making the tech.

Who makes the technology?

The tech industry is predominantly male and white. This isn't to say that the technology is intentionally made not to recognize darker skin; it is to say that the people writing the code, making the product, testing the product, and doing quality control are predominantly white men. There aren't people of color to step in and find racial blind spots, so racial bias gets programmed into the system. When a technology like facial recognition is only being trained and tested on white, male faces, a bias gets built (intentionally or not) into the system. Representation matters, from coding to quality control. Representation matters every step of the way, especially when that technology is helping to put people behind bars.
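To see the mechanics of this, here is a hedged sketch (entirely synthetic data, not any real facial recognition pipeline): a simple classifier trained on a dataset that is 95% "group A" ends up with a much higher error rate on the underrepresented "group B," even though nobody programmed any bias explicitly.

```python
# A toy illustration of representation bias: the model is not "biased" by
# design; it simply never saw enough of group B during training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic stand-in "face features"; the two groups are distributed
    # differently, so one decision rule cannot fit both equally well.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 5).astype(int)
    return X, y

# Training set: 95% group A, 5% group B -- the imbalance is the point.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(f"{name}: error rate = {1 - model.score(Xt, yt):.0%}")
```

Run this and group B's error rate comes out noticeably higher than group A's, which is the same pattern Julia Angwin describes below as a disparity in error rates.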

Bias in the Justice System

More and more, algorithms are being used in the criminal justice system, from sentencing to parole boards. It has been assumed that technology would help remove bias from the system and bring a sense of neutrality, but is that what has happened? Bias in the criminal justice system is already apparent, and the technology that was intended to help has turned out to have its own bias. These technologies are also created for efficiency, to expedite processes, which means issues like bias become more efficient too.

Another important question to consider, especially when it comes to the justice system, is: Does not being able to question the algorithm or look inside the black box violate your right to due process? If we can’t see the algorithm because the company who owns it doesn’t have to show it to us, how do we know we’re getting a fair trial by our peers? How do we know our constitutional rights aren’t being violated?

Julia Angwin: Disparity in Error Rates

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

COMPAS: Violation of Due Process?

AlJazeeraEnglish, & Rae, A. (2019, July 3). Can We Trust Algorithms? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020.

Glen Rodriguez: Lack of Transparency

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

Digital Poorhouse

Virginia Eubanks, author of “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” speaks on how our new automated systems are modern digital poorhouses, and on the need to understand America’s history with race and poverty to see how automated systems make services difficult to access for those in need.

AlJazeeraEnglish, & Rae, A. (2019, July 3). Can We Trust Algorithms? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020.

In the following clip, she shares how politics, social stigma, and technology converge to create these modern-day conditions.

CUNY TV75. (2018, January 21). Automating Inequality – Virginia Eubanks | The Open Mind [Video]. YouTube.

https://www.youtube.com/watch?v=Avxm7JYjk8M

DISPARITY INCREASES

As previously mentioned, when systems that already contain some disparity are made more efficient, they only become more efficient at disparity and discrimination. This can happen because discrimination is seen as a problem with the individual and not as systemic and structural (as explained in the top clip below). Unfortunately, the more efficient the system, the easier it is for people to no longer be seen as human, but as commodities, as author and data scientist Cathy O’Neil explains (bottom clip below).

Virginia Eubanks: Discrimination built into the Algorithm

AlJazeeraEnglish, & Rae, A. (2019, July 3). Can we Trust algorithms? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020.

Cathy O’Neil: Algorithms Cause the Future

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

THE ALGORITHM

So, what is an algorithm, let alone the algorithm? Very simply, an algorithm is a set of instructions to perform a specific task. Think of it like a recipe. When people talk about the algorithm (like I will be doing in this section), they are referring to a data set, the set of data that an algorithm has collected.

As it was described to me, think of it like a baby brain that is set loose on the internet and allowed to run free (thanks, Jason, for this fantastic imagery) and collect information. That brain learns from the information that it collects, and over time it picks up the characteristics of its surroundings.
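To ground both terms, here is a toy Python sketch of my own (purely illustrative): the first function is an algorithm in the recipe sense, and the second is a tiny "baby brain" whose output depends entirely on the data it has been fed.

```python
# 1) An algorithm is just a recipe: a fixed set of steps.
def make_tea():
    for step in ["boil water", "add tea bag", "steep 3 minutes", "remove bag"]:
        print(step)

# 2) "The algorithm" in the learning sense: a program whose behavior comes
# from the data it has collected, like the baby brain described above.
from collections import Counter

brain = Counter()  # starts out empty, with no opinions at all

def learn(words):
    brain.update(words)  # it absorbs whatever it is fed, good or bad

def favorite_word():
    return brain.most_common(1)[0][0]

make_tea()
learn(["cats", "are", "great"])
learn(["cats", "rule"])
print(favorite_word())  # "cats" -- its answer reflects its diet of data
```

If the only words it ever sees are biased ones, its "favorite word" will be biased too; nothing about the recipe itself has to change.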

Cathy O’Neil (Author & Data Scientist) Explains an Algorithm

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

This idea is also at the center of a much-contested debate: Is technology neutral? Neutral meaning, is it really as impartial or unbiased as we think it is? If this brain that started out free of input becomes racist because of what it learns on the internet, is the technology neutral?

Meike Zehlike (Data Scientist): Bias Was Always in the Data

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

• Does it start with the racial bias of the programmer? Or the lack of diversity in those programming it? Is it already starting out with some sort of bias before it goes out into the world? Then the issue lies with the developers, or businesses, creating the algorithms. Or…

• Does the issue lie with the data it learns from its surroundings? Is it just an innocent child, and no different than nature versus nurture? In this case, the parents are the whole of the internet, the world, you and I. We become just as responsible for its racism, its bias, and ultimately Skynet.

• Or is it somewhere in between?

First, we must take a deeper look at what the algorithm is, and what role it plays in our lives. There are two main ways that the algorithm is used in our daily lives. The first one I will look at is automation (“What Is Automation?”), where what was normally done by humans is now mostly done by technology.

Automation in Our Lives

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Automation impacts almost every sector of our lives. With the ease of how this technology integrates into the workplace, we need to ask:

• How does automation impact the work force?

• Are jobs created at the same rate at which they are replaced by AI, robots, and automated services?

• What kinds of jobs are at risk of automation, and who does this put at higher risk of job loss?

• What impact is it having on the economy?

Harry Cripps (UAW President): Automation means Job Loss

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.


Molly Kinder: Women at Higher Risk for Job loss due to Automation.

PBSfrontline. (2019, December 02). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Jerry Kaplan: Automation is a Driving Force for Inequality

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Unfortunately, labor force disruption is a very real side effect of the efficiency of automated systems (Berkenfeld clip). It is critical that consideration be given to how we replace the jobs and workforce that are lost. These disruptions have long-term effects and health risks that are more far-reaching than unemployment, ranging from higher risk of cardiovascular disease to intergenerational impacts such as how children perform academically (Wornell clip).

Steven Berkenfeld: Businesses look for Efficiency & Efficiency Means Job Loss

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.


Nicholas Thompson: Labor Force Disruption Like We’ve Never Seen

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Emily Wornell: Health Risks & Long Term Effects of Automation Related Jobs Loss

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Artificial Intelligence

The other way algorithms are used in our lives is called AI, or artificial intelligence. This, as you can see, already has far-reaching impacts. When people think of AI, we tend to think of science fiction movies, but AI is much more practical, and in some cases much more nefarious, than our science fiction wonderland. Remember that baby brain I talked about before? That's what I'm talking about here: a program that is making predictions based on a collection of data, and learning while it does this. When we interact with technology, we leave behind traces of information, what some refer to as a Data Cloud, and that information tells the AI a lot about us.

So, what sort of data is AI collecting? EVERYTHING. What can it learn from all the data it's collecting? ANYTHING. With that kind of information, what can AI predict about us?
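As a hedged illustration of how such predictions work (invented data, in the spirit of the studies discussed below rather than any company's actual model), a classifier can guess a trait from nothing more than the traces a user leaves behind:

```python
# Toy behavioral prediction: guess a (made-up) trait from "pages liked."
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Each string is the list of things one (fictional) user liked.
likes = [
    "hiking camping kayaking trail",
    "opera museums poetry chess",
    "camping fishing hunting trucks",
    "ballet theater novels chess",
]
outdoorsy = [1, 0, 1, 0]  # the trait we pretend to know for training

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(likes), outdoorsy)

# A new user's likes alone are enough to produce a confident guess:
new_user = ["kayaking chess camping"]
print(model.predict_proba(vec.transform(new_user)))  # [P(not), P(outdoorsy)]
```

Scale the four fake users up to millions, and the handful of words up to every data point in your Data Cloud, and you get the kind of predictions Kosinski describes below.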

Pedro Domingos: The Data Cloud & Adapting Your World

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Michal Kosinski: Data Points, Predictions & Personality

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

Brian Dalessandro: Behavior Predictions & Behavior Trails

VPROinternational. (2017, June 11). The Real Value of Your Personal Data – VPRO documentary – 2017. Retrieved from YouTube on February 23, 2021.

Smart appliances like Alexa and Google Home are collecting more and more intimate data on us (McNamee and Webb clip). This data collection can have real-world consequences and impacts on things like applying for a bank loan (Ke clip). We don’t know what data is being collected, or how that data is being interpreted. We don’t know how that data is being used, and private companies don’t have to tell us.

Roger McNamee & Amy Webb: Smart Appliances

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Jiao Ke: Banks, Loans & Your Cell Phone Battery

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

In our ever-changing world, more of our lives are being spent online. What does this mean for our information, and the data being collected?

What is happening with all this data?

Who owns the data, do we own what we put out, or is it owned by the platform we use?

What does privacy mean in a world where we put our lives online to be seen?

What effects does social media have on all of this?

THE PANOPTICON

The panopticon was first conceived by the brothers Samuel and Jeremy Bentham in the late 1700s. Samuel began writing to his brother Jeremy while he was stationed in St. Petersburg, Russia, during his time in the Navy. He wrote to him about prisoners being constantly observed, an idea that Jeremy would later develop into what we understand today as the Panopticon: a circular building with a watchtower in the center where inmates of a prison (he would later develop this idea for schools and governance) could potentially be observed at all times (Steadman). The idea was that if you know you are being watched, then you'll behave differently, even if you are unable to see who is watching.

Panopticon Computer Model by Myles Zhang. Narration by Tamsin Morton

Retrieved from YouTube on December 2, 2020.

In 1975, the philosopher Michel Foucault revisited this idea and explored it more deeply for its embedded authoritarian ideas. When Foucault looked at the Panopticon, he believed it revealed or exposed four main things:

• First, he saw pervasive power. He saw a one-way power dynamic where those in power had the power to see everything (“Foucault 2: Government Surveillance and Prison”).

• Second, he saw obscured power. There was no way for the prisoners to look back, so there was no way for the prisoners to know who was watching them or why they were being watched (“Foucault 2: Government Surveillance and Prison”).

• Third, he saw direct violence replaced with structural violence. Violence no longer had to be carried out in a physical form; the building itself, just by being there, was now its own form of violence (“Foucault 2: Government Surveillance and Prison”).

• And last, structural violence was now profitable, and working towards profit was now the only option. Because money was now being saved by reducing the number of "guards" or "watchers" in the Panopticon, it no longer mattered why people were inside the prison in the first place. You have obedient workers for a lower cost, so it becomes profit over reform (“Foucault 2: Government Surveillance and Prison”).

However, Bentham ultimately wanted reform, and an ideal state where the Panopticon was no longer needed. Foucault saw a place where profitability outweighed reform: because profit was a founding principle, it would ultimately set in, and profit over people would win (Pease-Watkin). Today, this idea is explored further and even challenged when it comes to our digital world. We need to look at the impacts of Digital Panopticism and Surveillance Capitalism (a new model introduced by Shoshana Zuboff) on our lives.

In his 2015 TechCrunch article about the digital panopticon, Arthur Chu says:

Foucault said you can build a prison without walls just by letting your prisoner know he's always being watched. In real life in 2015, you can totally control someone's behavior by training them to tweet or instagram every tiny thing they do and see and see how many likes it gets... In real life in 2015, we can all be each other's warden as member of the amorphous mob that hands out likes and dislikes.

This was six years ago... Things have only gotten worse. The 2020 Imperva Bad Bot Report shows that almost 40% of all traffic on the internet is bots. As predicted (or designed), who or what is watching us may not be there at all, but it is still having a massive impact on our lives.

Bot: A software application that runs automated tasks (scripts) over the internet (“What is a Bot”).
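For a sense of how simple such a thing can be, here is a minimal, hedged sketch of a bot (the URL is a placeholder; a real bot would be more elaborate and would loop indefinitely):

```python
# A toy bot: a script that performs an automated task over the internet.
import time
import urllib.request

URL = "https://example.com"  # hypothetical page the bot watches

def check_page():
    with urllib.request.urlopen(URL) as resp:
        return len(resp.read())  # e.g., notice when the page changes size

if __name__ == "__main__":
    for _ in range(3):  # a real bot would run forever, or on a schedule
        print("page size:", check_page())
        time.sleep(5)
```

Multiply this pattern by millions of scripts posting, liking, scraping, and replying, and you get the roughly 40% of internet traffic mentioned above.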

Stephanie Hare: Thinking We’re Being Watched Changes Our Behavior

AlJazeeraEnglish, & Rae, A. (2019, July 3). Do Biometrics Protect or Compromise Our Security? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020.

Stephanie Hare: Surveillance & Changing of Governments

AlJazeeraEnglish, & Rae, A. (2019, July 3). Do Biometrics Protect or Compromise Our Security? | All Hail the Algorithm. Retrieved from YouTube on December 2, 2020.

Digital Panopticism and Social Media

One question that keeps arising today is where social media falls in the world of digital panopticism. Social media is often called a reverse or inverted panopticon because we use it to be the observer. In this scenario, we have the ability not only to gain power but to affirm our actions. We control what we share, and we now have the ability to watch and look back at people in positions of power. But is this true? If we look at the dynamics of power that Foucault outlined for us, social media is not a reverse or inverted panopticon at all, but simply a panopticon in the truest sense of the word.

First, we need to look at veillance. Veillance simply means to watch. There are different types of veillance, and each of them gives power to whoever is doing the watching. The watcher is not a passive participant. They have a role and are actively involved. This is important to power dynamics. The most common types of veillance are:

• Surveillance. Sur- meaning over. Surveillance is oversight. This is what we normally think of when we think of people or someone watching: the few watching the many. This removes power (Mann).

• Sousveillance. Sous- meaning under. Sousveillance is undersight. This restores power to the people: the many watching the few. Examples of this are body cams on police and even people turning cameras on police during protests (Mann).

• Self- or Autoveillance. Auto- meaning self. Autoveillance is self-sight: self, or participatory, surveillance where people monitor each other, and themselves. This both removes and restores power (Mann).

If we use Foucault's lens to view social media, social media reveals the same authoritarian structure as Foucault's panopticon regardless of the application.

• If we look at his first principle: social media has pervasive power. The platforms we use have the power to see everything. Because of the predictive nature of algorithms, they even have the ability to see and know our behavior before we do.

• Looking at his second principle: social media has obscured power. Because of the nature of the platforms we use, we have no way to see who is watching us. Except this time it isn't only guards or Big Data; it can potentially be anyone, from friends, potential employers, and complete strangers to foreign governments.

• In his third principle, he saw physical violence replaced with structural violence. Guards in this prison no longer had to threaten prisoners with physical violence. In the world of social media, doxxing (publishing personal or identifying information on the internet with malicious intent), hacking (gaining unauthorized access to data in a system or computer), and cyberbullying (using the internet to harass an individual or group) use data to find and cause harm to people in real life. We also have to give very real attention to disinformation and information warfare, and how our data is being used against us.

• Last, he speaks of the profitability of this structural violence. Which brings us to Surveillance Capitalism.

Surveillance Capitalism

Shoshana Zuboff introduces the idea of surveillance capitalism in her book The Age of Surveillance Capitalism. It refers to the scraping of our personal and private data to make behavioral predictions about us. She explains it further in this interview with Matt Frei of Channel 4 News in the United Kingdom.

Channel4News, & Frei, M. (2019, September 23). Shoshana Zuboff on 'surveillance capitalism' and how tech companies are always watching us. Retrieved from YouTube on December 2, 2020.

Shoshana Zuboff explains how targeted ads and personalized services are only a small piece of what is done with the data being collected about us.

Shoshana Zuboff: A Profound Misconception of What is going on

VPROinternational, & Duong, R. (2019, December 20). Shoshana Zuboff on Surveillance Capitalism | VPRO Documentary. Retrieved from YouTube on December 2, 2020.

Training Models & Behavioral Surplus

Shoshana Zuboff explains how some of this data is used to improve services. However, even more of it is used to train models to predict how you will behave not only now but later on.

VPROinternational, & Duong, R. (2019, December 20). Shoshana Zuboff on Surveillance Capitalism | VPRO Documentary. Retrieved from YouTube on December 2, 2020.

We now have a definition of Surveillance Capitalism, but how does it affect us … really?

• What are they really able to do with the data they collect on us?

• And how are they actually profiting from it?

• Who are They?

Let’s take a look at how we got here, and try to answer some of these questions by taking a deeper look into how things began with one of the biggest “They’s”: Google.

Google

Founded by Larry Page and Sergey Brin in 1998, Google started as an idea to build a better search engine. Instead of ranking a page by how many times search terms appeared on it (which is what search engines did at the time), they created an algorithm with Scott Hassan that analyzed relationships among websites (Hosch). This look at relationships – finding connections for better predictions for searches, along with the data cloud or trail we leave behind – turned into a business model that would not only change the way we find things on the internet, but would have very real consequences.
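To give a flavor of what "analyzing relationships among websites" means, here is a hedged sketch in the spirit of PageRank (a textbook simplification on an invented four-page web, not Google's production algorithm): a page ranks higher when important pages link to it.

```python
# A tiny invented web: page -> pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:  # each page shares its rank with pages it links to
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))  # "C" wins: three pages point at it
```

The insight is that the score depends on relationships rather than on keyword counts, which is exactly the shift described above.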

These clips explore the information that Google mines (collects), the Google business model, as well as Google’s history.

Alastair Mactaggart: Google Mining the Data of Your Life

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Shoshana Zuboff: Google History & Business Model

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Behavior Restriction

Does collecting all of our personal data really give us all of the options that Google tells us it does? Jaron Lanier (computer philosopher and scientist, also considered one of the founders of virtual reality) believes that micro-targeting ads and options online not only reduces our options but leads to a lack of freedom and behavior restriction on the internet.

VPRO documentary. The Real Value of Your Personal Data. YouTube, uploaded by VPRO, June 11, 2017, www.youtube.com/watch?v=dW7k_GZYLwk. Retrieved on February 23, 2021.

Facebook took Google’s Model to the Next Level

Before you think that not using social media (or Facebook specifically) means that it doesn’t impact you:

1. Instagram and WhatsApp are owned by Facebook. If you use Instagram, you use Facebook.

2. Though Google and Facebook are being highlighted, it isn’t just these companies taking, buying, and selling your data. This is the new business model for all companies. This information can also be taken from store credit cards, loyalty cards, apps, etc.

3. If a service is free, you are the product. Meaning if you downloaded a free app, you are paying with the data they are collecting and selling about you when you clicked “I Agree” on their Terms of Service (ToS).

4. Most important: It no longer matters if you use the internet or social media at all. These things have real-world impacts and consequences.

Roger McNamee explains more about Facebook and the data Facebook began collecting and purchasing about you. Cathy O’Neil speaks about the issues with the Facebook algorithm.

Roger McNamee (Author & Early Facebook Investor): Facebook

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Cathy O’Neil (Author & Data Scientist): The Facebook Algorithm

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved December 2, 2020.

Social Contagion Experiments

In 2010, Facebook began conducting social contagion experiments to see if they could change real-world behavior. Not only were they successful, they also found they could do it without user awareness (Meyer).

Shoshana Zuboff: Changing Real World Behavior

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.


THREAT TO DEMOCRACY

Without care, intervention, and responsibility, we are at the whim of whoever owns the technology, or whoever can pay for it. These technologies aren't just a threat to our privacy; they are a threat to any democratic society. As Stephanie Hare mentioned in the previous video, governments can change. Even though we may feel safe with these technologies under the current administration or regime, that can change. Technologies that once provided comfort, ease, and protection can, under different control, easily be turned authoritarian. As Cathy O’Neil explained previously, the Facebook algorithm is especially susceptible to misuse. So, what does the Facebook algorithm look like in real-world situations? Jaron Lanier explains what happens with the Facebook algorithm, and why we don’t always get the outcome we expect during protests and movements such as the Arab Spring.

Jaron Lanier: The Facebook Algorithm Threat

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.


We can already see these technologies being used by authoritarian governments to keep certain populations imprisoned. One example that Sophie Richardson explains is the humanitarian crisis of the Uyghurs in China.

Sophie Richardson (Dir., Human Rights Watch): Authoritarian Governments

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Yoshua Bengio further discusses the overall threat these technologies pose to democracy.

Yoshua Bengio (Pioneer of Deep Learning): Democracy Threat

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

RESPONSIBILITY AND CHANGE

So, is it all just doom and gloom? I hope not, but there is real change that needs to happen now.

Technology is changing so quickly, and moving at such an advanced rate, that we don't have the time to wait to make these changes. We must act now.

Yoshua Bengio (Pioneer of Deep Learning): Scientist Responsibility

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved from YouTube on December 2, 2020.

Michal Kosinski (Computational Psychologist): Warning

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

Cathy O’Neil (Author & Data Scientist): Singularity

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO documentary – 2018. Retrieved from YouTube on December 2, 2020.

MIS/DISINFORMATION

Who and what do we believe when we find ourselves in this new world of mis- and disinformation? What do we do when the algorithms work against us, when we can’t see who is behind the news story or the viral video, and when even our politicians spread disinformation? For transparency reasons, and because of the nature of this topic, all resources for this portion are linked directly in the text for quick and easy access.

Let’s Begin with Fake News and the FCC Fairness Doctrine

In 1929, the Federal Radio Commission, the predecessor to the FCC, stated in its Great Lakes Broadcasting Co. decision that the “public interest requires ample play for the free and fair competition of opposing views, and the Commission believes that the principle applies to all discussions of issues of importance to the public.” This meant that broadcasters were not only obligated to cover a topic fairly but could not express their own views. This was to “ensure that broadcasters did not use their stations simply as advocates of a single perspective” (First Amendment Encyclopedia).

By 1940, the restriction on personal views was lifted, and time was allotted for discussing the topics after both sides were presented. Journalists eventually began to push back on the personal attack rules, arguing that they (the journalists), and not the FCC, should make decisions about balancing the fairness of a story. The doctrine was completely overturned and abolished in 1987.

Today we have several things working against us. We have news agencies who have taken advantage of the abolition of the FCC Fairness Doctrine and do exactly what the original ruling was meant to prevent: their stations advocate a single perspective. They also determine what they feel is a fair and balanced news story.

We need to start asking the question: Should a random user posting on social media (or elsewhere) be given the same amount of airtime on a topic as a professional in their field? Doing this suggests that they (a random user and a professional in the field) have the same authority on a subject, but is that true? If we think they should, then why should they be given the same authority? What value does being a professional in a field hold if any random person can now be an authority on any topic? Who are we supposed to listen to? Who should we be listening to?

When major media outlets flood the news stream with their personal perspective, when the media outlet determines the free and fair competition of opposing views and can ultimately bring in anyone, authority or not, on a subject – where do we really turn for an unbiased source? What does unbiased look like?

(More information about the FCC Fairness Doctrine can be found at the First Amendment Encyclopedia.)

FAKE NEWS is divided into three categories:

• News that is made up or invented to discredit others or make money.

• News that has basis in fact but is spun to fit a particular agenda.

• News that people don’t feel comfortable hearing or don’t agree with.

This is done for many reasons including:

• It’s cheaper to make.

• It’s difficult and costly for people reading the information to tell the difference between what is and what isn’t accurate.

• People enjoy reading it because it confirms their beliefs.

So, what is real or quality news?

People are able to get information from a variety of sources they deem reputable: teachers, clergy, politicians, parents, friends and family, people they know and trust. What if that information is incorrect or false? What happens when those we consider reputable sources are no longer the source of good information? How would we know?

Here are some resources to help understand what real or quality news should look like:

• “What do we Mean by Quality News” by Aviv Ovadya

• “Here’s what Non-Fake News Looks Like” by Columbia Journalism Review

• “Fake News vs. Real News: Tips for Evaluating Information” by Northwest Arkansas Community College

(This and more information can be found in the Study for the “Assessment of the Implementation of the Code of Practice on Disinformation.”)

Identification

This doesn’t mean we shouldn't trust anyone; it means we should proceed with caution. We also need to proceed with caution when using a word like facts. Data can change, and we must work with the best data available. We need to be diligent. The following is taken from the Fact Disinformation Guide. For more information on these categories and more, please click on the guide.

Check the Source

• Check the URL; a lot of sites spoof well-established media outlets. Does the URL look fake? Are there misspellings? Or strange domains like .xyz, .ir, .ph, etc.? (A toy sketch of these URL checks appears after this guide.)

• Does their website have an about page? Sometimes the information about the company will not match the website.

Does the Story Provide Context?

• Does the story have a provocative headline or title?

• Is the title just click-bait and not supported by the content of the article?

• Check the date the story was written; make sure they aren’t providing old news.

Look for Exaggerated, Sensationalist Language

• Does it spark strong emotions as a way of focusing on emotions rather than facts?

• Exaggerated language should be treated as a red flag for potential disinformation or biased information.

Does the Outlet Openly Reveal Their Information Resources?

• Watch for abstract terms like “American researchers concluded” or “Scientists agree that.” If they are unable or unwilling to reveal their resources, this is another thing to look out for.

Poor Grammar and Spelling Mistakes

• Are there spelling and grammatical errors?

• Is there awkward use of language? A lot of disinformation is created by non-native speakers and translated through online translators. This is another red flag.

Use Fact Checkers

• Fact Check

• Digital Forensic Research Lab

• Media Bias Check

• Polygraph

• These are just a few…

(Resource Links: Fact Disinformation Guide. This guide also includes information on how to spot bots & trolls online, and how to check if photos & videos are real or fake.)
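As promised above, here is a hedged sketch of what the URL checks might look like in code (simple heuristics of my own, with invented domains and an invented allowlist; not a real fact-checking service):

```python
# Toy heuristics for the "Check the Source" step: flag odd top-level
# domains and hostnames that imitate well-known outlets.
from urllib.parse import urlparse

SUSPECT_TLDS = {".xyz", ".ir", ".ph"}          # examples from the guide
KNOWN_OUTLETS = {"nytimes.com", "bbc.com"}     # stand-in allowlist

def red_flags(url: str) -> list[str]:
    host = urlparse(url).hostname or ""
    flags = []
    if any(host.endswith(tld) for tld in SUSPECT_TLDS):
        flags.append("unusual top-level domain")
    for outlet in KNOWN_OUTLETS:
        name = outlet.split(".")[0]
        if name in host and not host.endswith(outlet):
            flags.append(f"possible spoof of {outlet}")
    return flags

print(red_flags("http://nytimes.breaking-news.xyz/story"))
# -> ['unusual top-level domain', 'possible spoof of nytimes.com']
```

Heuristics like these only raise flags; they are a starting point for the human judgment the rest of the guide asks for, not a replacement for it.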

Misinformation, Disinformation, and Propaganda

Two words that became buzzwords after the Cambridge Analytica scandal are misinformation and disinformation. Let's start with their definitions and the difference between them. Both are information that is false or misleading. They are similar, but we need to look at one major component that separates them: the source plays a key part. Think of it this way:

• A regular user posting, sharing, or retweeting anti-vaxx information is misinformation.

• Controlled and concerted efforts to spread specific information that is untrue is disinformation. It is deliberately distorted information that is leaked into the communication stream.

It may seem like a small difference, but for this part of the conversation, it makes a HUGE difference. Misinformation is spread every day; disinformation is Information Warfare.

The word propaganda has been thrown around a lot more recently, along with terms like Fake News. It’s easy to think that we know what people mean, but what is the difference between Disinformation and Propaganda?

• Propaganda tries to convince us to believe something

• Disinformation is a highly organized attempt to deceive us into believing something.

This is an oversimplification, but for the purposes of this page and the following information, it is the easiest way to think of it. Some of the disinformation that we see is propaganda; some of it is just intended to confuse us, to flood us with information with two aims:

• To create fatigue: to generate so much information, false or otherwise, that we become too tired to do anything about it.

• To generate so much information that we stop knowing who or what to believe.

(Resource Links: UNESCO, New York Times, European Commission Final Report, LSE Consulting, Sept 14, 2017 Congressional Hearing, Sept 26, 2019 Congressional Hearing, July 16, 2019 Congressional Hearing, Senate Committee Intel Report)

Active Measures

The last term you should be familiar with before moving forward is Active Measures. This term was coined in the USSR during the Cold War of the 1950s to describe "covert and overt techniques for influencing events and behaviors in foreign countries" (LSE Consulting Final Report). Disinformation was one major element of this operation. Other elements included front organizations, agents of influence, fake stories in non-Soviet media, and forgeries.

The goal of active measures is to create distrust in the government, the media, and each other, so that regardless of the abundance of information available, people are unable to reach sensible conclusions about anything. It is intended to erode trust in the government and government institutions, and to pit groups (any groups) against one another: racial groups, gender groups, age groups, anything to help sow discord in the community.

(Resource Links: European Commission Final Report, LSE Consulting, Sept 14, 2017 Congressional Hearing, Sept 26, 2019 Congressional Hearing, July 16, 2019 Congressional Hearing, Senate Committee Intel Report, Christopher Andrew: The Sword & the Shield)

What does it all mean?

Because of our technological landscape, information can spread in ways, at speeds, and to audiences it never could in the past. This also means that misinformation, disinformation, and bad actors can easily take advantage of it. How do we protect ourselves and others when we find ourselves in a post-truth landscape? What happens when truth no longer matters and algorithms are set up in a way that helps bad information spread faster?

There are hallmarks, or tell-tale signs, that we can watch for when reading information on the internet. Until companies like Google, Facebook, and other Big Data players are held accountable for their part in the spread of mis- and disinformation, until regulations are passed, or until we as private citizens can reclaim some ownership of our personal data, we need to understand not only what we’re looking at but also take responsibility for what we share online.

The following should be thought of as a road map. Here are the things to think about, things to look for, and examples of disinformation and Active Measures (this is a list compiled from various sources that you can find in the Mis/Disinfo section of the citations page, as well as direct links to various resources at the bottom of the “Hallmarks” section).

Hallmarks of Disinformation

Division

Getting groups of people to distrust one another. Instead of trying to make things better, these stories try to make things worse, whether it pits racial groups, gender groups, or even age groups against one another.

Big Bold Lies

Creating a lie so outrageous that it couldn’t possibly be believed, or, if people could be made to believe it, it would be damaging. The lie needs to have a tiny bit of truth to it, so that (over time) it becomes easier to believe.

Concealment

Hide the origin of the story, or make it so that no one cares or asks where the story started. Or, if people do look for the source, hide the origin of the story to make it look like it came from somewhere else, such as by using fake websites.

Enablers

Also known as the “useful idiot.” Find someone willing to take the message and spread it to more people. The more credentials, or the bigger the online or media presence, someone has, the better. In some cases, credentials are falsified.

Deny Everything

When someone does present facts: deny everything. Deny, deny, deny.

Repetition

Repeat the lie and deny as much as possible. Tell a lie enough times and someone, or even thousands, will believe it.

(Resource Links: UNESCO, New York Times, European Commission Final Report, LSE Consulting, Sept 14, 2017 Congressional Hearing, Sept 26, 2019 Congressional Hearing, July 16, 2019 Congressional Hearing, Senate Committee Intel Report)

Working Example

1983: This news story about the origin of the AIDS virus was a fake news story released by the Soviet Union. It was published in a small newspaper (the Patriot) in New Delhi that was later revealed to have Soviet funding. Image courtesy NYT.

1987: Within four years, a seemingly small story, fueled by the AIDS crisis, was on the nightly news and being reported by major news outlets. There are still echoes of this story online today. Image courtesy MIT.

2020: By spring 2020, stories began emerging of COVID-19 (coronavirus) having been started in a lab in the US. This time, however, it wasn’t the Soviet Union; it was China, in a war of disinformation with the US. Image courtesy GEC Fact Sheet.

How it worked:

• The Soviet Union concealed the original story by using the Patriot to publish it. By using a local newspaper with global reach, they were able to hide where the story came from. This is still done today by using proxy websites like Global Research, and fake social media accounts that friend locals in communities (it’s easier to believe “friends” than strangers): things that put distance between the real funding source and the consumer (you) reading the information.

• They found an enabler, a “doctor” who was willing to spread the message. People listen to doctors as authorities, and this is often exploited by using people with fake credentials (as they did in this case) or seemingly sympathetic ones. Many times, people in this role do not know they are working as a mouthpiece for foreign governments. Today many of these people push news stories from Russia Today (or RT) and Sputnik, known Russian propaganda and disinformation sites.

• They created a big bold lie; when confronted about it, they denied it, and they repeated the lie in different media outlets until it was finally repeated in the US.

This is done today by many different bad actors, and with the use of technology it is easier to conceal the information and spread it at alarming rates. Algorithms are currently designed in ways that help bad information spread, and it is imperative that we quickly learn how to tell good information from bad.

The following are examples of past and current disinformation and Active Measures campaigns:


QAnon (Image Courtesy of Rolling Stone)

Jade Helm 15 (Image Courtesy of NewsWeek)

Mike Cernovich PizzaGate Tweet


Blacktivist (Image Courtesy of CNN)

Flight MH 17 (Photo Courtesy of Wikipedia)

COVID 19 Created in US Lab Tweet


Dr. Martin Luther King Jr (Image Courtesy of AP News)

On the Horizon

The last thing to cover is a relatively new technology that is making headlines: Deep Fakes. Deep Fakes are a form of AI that uses deep learning to create images, videos, and sound recordings of people that are entirely fake. The technology used to make these videos is still a ways off from being a significant threat; mostly, the tech today is used to swap faces and voices onto celebrities, as shown in the video below. However, the example of a TikTok of Tom Cruise from February 2021 shows how much the technology has improved since it was created in 2017.

Celebrity Deep Fakes

Retrieved from YouTube in March 2021


@yashar Tweet about Tom Cruise Deep Fake

THE OPEN

After a deeper look inside the technological black box, we have a better understanding of what is being put into the technology we use, and of the blind spots we’d rather not think about: from racial bias in the algorithm, to digital panopticism and surveillance, to how those same algorithms are used to spread disinformation. What does all this looking inside reveal?

Let’s start by looking at what some have already said about technology. For Martin Heidegger, technology reveals three things:

1. Technology is not an instrument,

2. Technology is not the product of human activity, and

3. Technology is the highest or ultimate danger.

What does he mean by this? First, Heidegger says that technology is not an instrument but a way to look at the world. It comes from the Greek word techné, which means to come into being. Depending on whether we see technology as a neutral instrument (which is debated), or as having bias (which is also debated), we are looking at the world through a different lens, in this case a technological one. It shapes how we view the world around us. Technology is a way of thinking, a way of looking at and interpreting what is around us. Next, he says technology is not a product of human activity. We think that because we are able to make technological things, they are a product of us (Verbeek). However, according to Heidegger, the things we create are influenced by everything around us, things we see, things we don’t see, things we experience – and because of this we cannot truly understand what brings the technology into being. Its origin is unknowable. We don’t (and can’t) choose how we understand it, nor do we choose the frameworks in which we understand it; so though the final product may appear to be created by us, it really comes from somewhere else, an unknowable place. Last, he says technology is the highest or ultimate danger. This is for two reasons: first, because we might stop seeing ourselves as beings that can have these interpretations of the world and instead see ourselves as raw material; second, because every attempt to develop a new understanding of the world is a way of exerting power over that understanding. There is no escape from the will to power (Verbeek).

An example of what he means by all of this is hydroelectric power, where a turbine converts the energy of flowing water into electricity. For Heidegger, first we have to have a technological view of water: we have to see the power that water possesses. The desire to cultivate that power originates from our environment, our upbringing, other people, our community's needs: countless and unknowable things. We only see the power inherent in the water because we see the water as raw material; we then exert our own power over it to draw the hydroelectric power from it. For Heidegger this is our greatest peril.

Technology not only has the ability to make us see ourselves as raw material, but once we do, we cannot escape the will or desire to exert power over it (us).

Another example of this is a car. A car can only come into existence because we see nature as raw material. Without seeing things like aluminum and copper as raw material, the car could not come into existence. The images that pass by our window as we drive, or ride as passengers, shape how we see the world (Verbeek). Cars then become the source and outcome of technology, as Heidegger sees it. What, then, does a self-driving car reveal? We have now exerted power over the technology to the point that we can be removed, but how does this shape the source and outcome of the technology? What does this say about us as raw material? What does this reveal?

Let's look at what Heidegger says about this revealing, or unconcealedness. For Heidegger, this disclosure is found in the open. This revealing or unconcealedness, as he calls it, is a space of absolute truth or Being. Heidegger talks about this space in relation to humans and animals, a concept taken from the eighth of Rilke’s Duino Elegies. For Rilke, because man is in the world and has the world around him, he can never enter or go outside of this space; only the animal can move in this space. The animal sees the open. For Heidegger, because the animal is unaware that it is in this space, the animal is shut out of this space. For Heidegger, the distinction between man and animal is language and the ability to enter into dialogue. It is our language that enables us to come face to face with the open, and ultimately Being.

For Giorgio Agamben, the open is the space where humans and animals become indistinguishable. For Agamben, if language is what puts us opposite the open, then the lack of language is also where the confrontation ends. This moment of suspension is where humans and animals are indistinct, and where humanity (as well as animality) is reconciled. Here is where we have a “remembrance of the oblivion,” a space where we can remember to forget the possibility of being (Bartoloni).

For these thinkers, we understand ourselves in relation to animals and language.

However, as technology becomes an integral part of our lives, as algorithms collect more data on us, and as AI becomes more aware, this blurring of lines between humans and technology becomes a very real space. What happens when we examine this space not just for humans and animals, but for humans and technology? If it is language that helps us confront this space, what is reconciled when we do not speak the same language as the technology?

I’ll start with an example: In 2017, Facebook challenged two AIs to negotiate a trade; this resulted in the two chatbot AIs making their own changes to the English language so they could more easily communicate with one another (Griffin). These modifications were not understood and could not be interpreted by the humans watching. Though their chat was incomprehensible to humans, the AIs were able to successfully negotiate the trade. Facebook terminated the experiment.

Humans no longer understood the language created by the AI. Humans programmed the AI using one or more programming languages, then taught the AIs to speak to one another in English. The AIs used English to fulfill certain parameters, then created their own language. Those who wrote the code were influenced by their own world view, just as the AIs were influenced by theirs and created their own technology to communicate. This brings us back to Heidegger's second point regarding technology: it is not simply a product of human activity; in this case, it was not simply a product of the AIs' activity, but was shaped by the AIs' world view. The AIs created a new language, developing their own technology for a better understanding of their world view. It removed the need for humans; like the self-driving car, it became the source and outcome of the technology. In this case, we cannot begin to understand what the AI sees, or how it began to exert its power over the natural resources it had access to (the coding language, the physical structure that contained it, even the humans surrounding it). We (the humans) only saw the outcome, and we did not understand it. How do we begin to?
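
To make this mechanism a little more concrete, here is a toy sketch in Python. To be clear, this is entirely my own illustration and not Facebook's system: it only shows that when two agents are rewarded for task success alone, nothing anchors their messages to English, so an encoding like repeating a token to mean a quantity is never penalized.

    # Toy illustration (not Facebook's system): agents scored only on
    # whether a trade succeeds have no incentive to keep "speaking" English.

    def encode(quantity):
        # A drifted "language": repeat a token N times to mean N items,
        # echoing the repeated-word messages reported in the experiment.
        return " ".join(["ball"] * quantity)

    def decode(message):
        return message.count("ball")

    def reward(offered, received_message):
        # Score the trade, not the prose: this encoding costs nothing.
        return 1.0 if decode(received_message) == offered else 0.0

    message = encode(3)                 # "ball ball ball"
    print(message, reward(3, message))  # gibberish to us; reward is 1.0

The language works perfectly for the task; it has simply stopped being ours.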

What is revealed or unconcealed? What will technology like AI see when we (humans) are reflected back to it? And what will be revealed to humanity when the confrontation happens between the language of technology (in this case AI) and our lack of access to technology’s language? What does the self-driving car understand? What is the car itself able to see and experience? Will it see itself as a source of raw material? What rights will it view as inalienable? If we swap out the engine or the motherboard, is it the same vehicle, with the same experience? Is the open the space where it reconciles the raw material of its own humanity or technocality? Will AI experience the open for itself? Will it see itself reflected back through our eyes? Will it be able to reconcile its own Being? Will it see itself as technology created by humans, or only see humans for the raw material that we possess?

Yuk Hui reconsiders these Western questions of technology through cosmotechnics. Hui challenges Heidegger's idea that there is only one kind of technology; for Hui there are many. If technology is indeed influenced by the unknowable (Heidegger’s point two), then it is specific to Western culture, since the idea originated in the Greek word techné and developed only within Western philosophy. And if it is specific to Western culture, how do other cultures connect, or reconnect, to something that never had an origin in their own culture? Hui believes we must build a new, non-universal cosmological understanding of technology, one that takes culture into account.

Speculation: if technology is, as Heidegger suggests, not an instrument but a way to look at the world, and, following Hui, there are many types of technology, then is it possible through technology to experience someone or something else’s Being? If the open for technology is created by the confrontation between human and technology, but that technology could be one of many, and its languages one of many (consider the many programming languages in use), then is the open that is experienced not universal? Can two technologies experience their own open if they speak different languages, or rest on different technological instruments (Heidegger’s point one) undisclosed to us? And how is that reflected back to us? Would we even know it is happening, or has happened? Would we have access to this iteration of Being?

With virtual reality today and the use of techniques like embodied montage, we are already blurring the lines of what we are able to experience. Embodied montage draws on the idea of film montage, adapted to virtual reality (VR) storytelling. In montage, rather than creating sequences of shots that follow seamlessly, filmmakers juxtapose contradicting shots that, when combined, create a third meaning that did not exist in either shot alone. In VR, this third meaning can be created through new connections between the body and the environment, and between action and perception. IRL we expect certain movements to do certain things: when we look at something, we see what is in front of us. In VR our eyes are no longer restricted to seeing only what is in front of us. We are in an environment where we can see what is behind us without turning our heads, where each eye can see different things, where we can see the past or the future, and where the act of looking can trigger actions and specific movements. Real-life gestures no longer have to mimic real-life action (Tortum, Sutherland). We can experience other places and spaces in time as if we were actually there. We can experience other people’s memories, lives, etc. One woman resurrected her dead daughter in virtual reality (Al Jazeera). Is it so far-fetched that we would be able to experience another’s Being?
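
Here is a minimal Python sketch of that last mechanic, the act of looking used as a trigger. Every name in it is hypothetical: real VR engines expose analogous per-frame hooks and head-pose data, but this is a sketch of the idea, not any engine's actual API.

    import math

    # Gaze-as-trigger: directions are assumed to be unit vectors
    # taken from the headset's pose each frame.

    def gaze_hits(head_dir, target_dir, threshold_deg=5.0):
        # Angle between where the user looks and where the target sits.
        dot = sum(h * t for h, t in zip(head_dir, target_dir))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        return angle <= threshold_deg

    def on_frame(head_dir, portal, fps=90):
        # Holding one's gaze on the portal for about a second loads a
        # "past" scene: a real gesture mapped to an action it never had IRL.
        if gaze_hits(head_dir, portal["direction"]):
            portal["dwell_frames"] += 1
        else:
            portal["dwell_frames"] = 0
        return "load_past_scene" if portal["dwell_frames"] >= fps else None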

Much of modern technology is a black box, sealed off to us, undisclosed, unrevealed. Even if we are able to look inside, do we really understand what we are seeing? Even if we can read the code that goes into it, anyone who has tried to debug someone else’s code knows how unique each coder is; in many ways their code is a reflection of their thinking, a shadow, only a piece of who they are, with much still concealed. And even if one builds the microchip, that doesn’t mean one knows what the microchip will be used for, or how it will be used. I like to hack hardware.

Personally, I like to think of this as a repurposing of things. For example, I hacked my Wiimotes and used them to control a drum patch I made in Max 7, because I wanted to use my Wii controllers as a musical instrument. I later ended up using them to control video and sound, activated by motion and button/trigger presses; I turned them into an audio-visual instrument. Nintendo didn’t make the Wii controller to be used for this purpose. What does the gap between the intention and the actual outcome of these technologies reveal?
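
To give a feel for the kind of remapping involved, here is a rough Python sketch. The event format and the play_sample helper are hypothetical stand-ins for whatever bridge actually carries the data (my own patch did this inside Max 7); the point is that only the mapping changes, never the hardware.

    # Hypothetical remapping of Wiimote input to drum sounds.

    BUTTON_TO_DRUM = {
        "A": "kick.wav",
        "B": "snare.wav",
        "1": "hihat.wav",
    }

    SWING_THRESHOLD = 2.5  # accelerometer magnitude treated as a "hit"

    def handle_event(event, play_sample):
        # Map one controller event to a drum sound.
        if event["type"] == "button" and event["name"] in BUTTON_TO_DRUM:
            play_sample(BUTTON_TO_DRUM[event["name"]])
        elif event["type"] == "accel":
            # A sharp swing of the controller triggers a crash cymbal,
            # a gesture Nintendo never designed as a musical one.
            x, y, z = event["xyz"]
            if (x * x + y * y + z * z) ** 0.5 > SWING_THRESHOLD:
                play_sample("crash.wav")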

How does truth get revealed to us? Does it become as unknowable as Heidegger's animal in the open? Or is the suspension of human and technology already there; does it already exist in every piece of modern technology that we use? What does modern technology already reveal to us about our humanity? Or is technology a potential gateway, so that by using different technologies we can gain access to other understandings, other tools of Being, and potentially to that which is undisclosed to us?

My technological exploration has revealed (to me) three things about technology: interaction, interchange, and access/accessibility.

What do I mean by this? Let me start with interaction. Digital technology is interaction. We directly communicate or have direct involvement with someone or something, and we have the capacity to directly affect the behavior or development of someone or something, as well as ourselves. Because of the speed of digital technology, technology (in Heidegger’s view, a world view) has now become a behavior, in the world as well as in virtual spaces.

An example of this comes out in the way we speak. We tell people to “Google it.” It is not just a search feature but a way of telling someone to find something out for themselves (usually something important that they haven’t bothered to become familiar with). Swiping right or left moved from the dating-app world of Tinder into real-life speech. These are interactions, ways we behave and engage, and each has specific behaviors associated with it. There are also distinct personalities: people who use iPhones vs. Samsung, Apple iOS vs. Windows. I know people who have Apple tattoos. There are culture wars over Apple and PC, just as there are culture wars over Xbox and PlayStation.

Next is interchange: we can exchange things, ideas, commerce, data, even put ourselves in each other's place for a moment with virtual reality. That interchange can also be unknown to us. We don’t always know who we are exchanging information and data with, or who is even watching. Regardless, digital technology has created a space where we are never alone. There is always interaction and interchange, whether we want it or not.

An example of this is Google Docs, or any of the Google apps. If I am online, I am being tracked with cookies; as part of the terms of service, I am allowing my data to be collected, and I have no idea what data is being collected or how it is being used. I am having an interchange with another entity even if I am alone in my room doing homework in my pajamas. Regardless of my knowledge of what is happening on the other end, I am still having an interchange. It may not be as overt as posting on Twitter and having someone respond, “heart,” or retweet my tweet (and even if no one does, I can still check my stats and see how many interactions [interchanges] my tweet has had with nameless, faceless entities), but it is still happening. If I have location services turned on on my phone, or if I am using Alexa or Siri, then whether I am actively engaging with them or not, data is still being collected and interchange is still happening. I just don’t have access to all that is being exchanged. The exchange within an interchange isn’t always equal: the information being collected or observed doesn’t benefit everyone the same.
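
As a minimal sketch of the invisible half of that interchange, here is how a tracking cookie can work, using only Python's standard library. The cookie name, paths, and logging scheme are invented for illustration and do not depict any particular company's practice.

    import uuid
    from http.cookies import SimpleCookie

    VISITOR_LOG = {}  # visitor_id -> pages seen: the data we never see

    def handle_request(path, cookie_header):
        # Return (response_headers, visitor_id) for one incoming request.
        cookie = SimpleCookie(cookie_header or "")
        if "visitor_id" in cookie:
            visitor_id = cookie["visitor_id"].value  # recognized repeat visitor
            headers = {}
        else:
            visitor_id = uuid.uuid4().hex  # new, anonymous-but-unique ID
            headers = {"Set-Cookie": f"visitor_id={visitor_id}; Max-Age=31536000"}
        VISITOR_LOG.setdefault(visitor_id, []).append(path)  # the interchange
        return headers, visitor_id

    # Whether or not I ever post anything, each page view is recorded:
    headers, vid = handle_request("/homework", None)   # first visit
    handle_request("/homework", f"visitor_id={vid}")   # recognized later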

And last, access/accessibility. Access is entry: the permission or ability to enter digital technology. It comes in many forms: terms of service (ToS), posting online, sharing a Netflix password, data breaches, Google Home, even having location services turned on on your phone. The second part is accessibility: who has the ability and permission to enter your digital life, but also who can own, obtain, or acquire digital technology in the first place.

Digital technology is supposed to make our lives easier and more connected, but it has created unlimited access points to our lives. Even those who wish to be unplugged from the world are deeply impacted by digital technology, through the ToS of credit cards, store loyalty cards, and the like. We may choose not to use a smartphone, but if a friend does, our meeting with them can still be recorded by their phone's location data, and our contact information is still stored in their phone. The Equifax data breach showed that as long as you have a credit history, any credit history, access to you is possible, regardless of your personal digital footprint. Did you get your $100 from the Equifax data breach (Equifax Data Breach)?

The COVID-19 pandemic showed us the disparities in accessibility to digital technology. The attempt to move classrooms and workspaces online proved difficult for underserved communities and communities of color. There were stories in the headlines of students sitting in parking lots to use Wi-Fi they did not have access to at home (Inside Higher Ed). A quarter of low-income teens were found to have no access to a home computer, and one in five low-income students were found to have unreliable access to the internet or a home computer (Auxier). Access is an accessibility issue.

Digital technology has become a way to interact with, and behave in, the world. Heidegger used technology as a way of thinking about the world; today, with digital technology, we use technology as a way to interact with the world. We behave technologically. We no longer accept or reject people; we swipe right and left on them. The language we use in texts, tweets, and posts has become the language we use IRL. Terms like OMG (oh my god), YOLO (you only live once), even my use of IRL (in real life), are cyberspeak that has crept into our everyday language.

Let’s go back to the example of the car. For Heidegger, the car would be the source and the outcome of technology: we drive it, and it changes our point of view of the world; we also have to see the world as raw material in order to bring it into existence. But Heidegger’s view of technology dismisses or minimizes the role of the driver. Digital technology is so embedded in our lives that technology is a behavior, an action, and this is just as important (if not more so) as technology as a meditation on our lives. The driver of the car has responsibility. They have to perform certain actions and behaviors, agree to certain laws, and hold certain qualifications in order to be able to drive in the first place. There are consequences if they don’t.

The behavior of driving can be impaired; it can be impacted by accessibility, by interactions with passengers, with other cars (on the road, for sale, being built, etc.), and with the world at large. By removing the personal responsibility of the driver from his account of technology, Heidegger has removed any responsibility we have to the technology. The resources, the world, the technology are simply there, so the only option we have is to see them as raw material and exert our power over them. But is this true?

With digital technology, our very language has become that raw material. One of the key elements that gives us awareness of the space of the open is reduced to raw material. If Heidegger’s view of technology is correct, then we must exert power over our own Being. The language that allowed us to enter into dialogue is now raw material, and we will only ever see the power in and over our own Being. For Agamben, we are already indistinct. The lines between human and technology have already blurred in the constant confrontation with language we undergo every time we interact with digital technology. Each time we interact with our phone, our online profile, our email, Alexa, our smart TV, these languages collide, and we are indistinguishable from the technology. Whether we realize it or not, Agamben’s open is happening all the time: our collision with Being, our moment of suspension. We are already indistinguishable from the technology.

By restoring our interaction, interchange, and access/accessibility with digital technology, by putting ourselves back in the driver's seat if you will, we don't just place a body or a placeholder in the seat; we create a space for individualization, for personal experience, for personal responsibility. It forces me to look at what I bring to the technology, to the thought process, to the outcome while I'm in the driver's seat. What is my background, what is my privilege, what can I change, who can I help, who should I help, and why does it matter? As the driver I have responsibilities and rules to follow. Do those rules need to be changed? Who do they benefit? Who has access to this vehicle, who doesn't, and who should or shouldn't without my permission? Putting us back in the driver's seat allows us to ask important questions about digital technology and about our behavior online, offline, and when creating it, questions we should have been asking a long time ago. Heidegger's questions are great, and Hui brings us really good material too, but we need to start thinking about the ethics of digital technology, and the best way to move forward is to remember what we did to get into the driver's seat, our responsibility, and the potential outcomes of our time there.


Works Cited

Agamben, Giorgio, and Kevin Attell. The Open: Man and Animal. 1st ed., Stanford University Press, 2003.

Ali, Y. [@yashar]. (2021, February 21). [Tweet]. Twitter. https://twitter.com/yashar/status/1365133854248345600?s=20

Al Jazeera. “Mother ‘Reunites’ with Dead Daughter in Virtual Reality.” Al Jazeera, 14 Feb. 2020, www.aljazeera.com/news/2020/2/14/mother-reunites-with-dead-daughter-in-virtual-reality.

AlJazeeraEnglish, & Rae, A. (2019, July 3). Can We Trust Algorithms? | All Hail the Algorithm. Retrieved December 2, 2020, from https://www.youtube.com/watch?v=w6D-fSkcVHw

AlJazeeraEnglish, & Rae, A. (2019, July 3). Do Biometrics Protect or Compromise Our Security? | All Hail the Algorithm. Retrieved December 2, 2020, from YouTube.

Andrew, C., & Mitrokhin, V. (1999). The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB (1st ed.). Basic Books.

Atlantic Council. “Digital Forensic Research Lab.” Atlantic Council, 22 Feb. 2021, www.atlanticcouncil.org/programs/digital-forensic-research-lab.

Auxier, Brooke, and Monica Anderson. “As Schools Close Due to the Coronavirus, Some U.S. Students Face a Digital ‘Homework Gap.’” Pew Research Center, 16 Mar. 2020, www.pewresearch.org/fact-tank/2020/03/16/as-schools-close-due-to-the-coronavirus-some-u-s-students-face-a-digital-homework-gap.

Bartoloni, Paolo. “The Open and the Suspension of Being: A Review Article of New Work by Agamben, Heller-Roazen, and Smock.” CLCWeb: Comparative Literature and Culture, vol. 7, no. 3, 2005.

Boetie, E. D. (2018). Politics of Obedience: The Discourse of Voluntary Servitude.

Bollier, D. (2013). Sousveillance as a Response to Surveillance. David Bollier: News and Perspectives on the Commons. Retrieved November 30, 2020, from http://www.bollier.org/blog/sousveillance-response-surveillance

Brown.edu. Internalized Authority and the Prison of the Mind: Bentham and Foucault’s Panopticon. Joukowsky Institute for Archaeology & the Ancient World. Retrieved November 30, 2020, from https://www.brown.edu/Departments/Joukowsky_Institute/courses/13things/7121.html

Burch, James. “In Japan, a Buddhist Funeral Service for Robot Dogs.” National Geographic, 25 May 2018, https://www.nationalgeographic.com/travel/destinations/asia/japan/in-japan--a-buddhist-funeral-service-for-robot-dogs/.

Callon, M., & Latour, B. Unscrewing the Big Leviathan: How Actors Macro-Structure Reality and How Sociologists Help Them to Do So.

Channel4News, & Frei, M. (2019, September 23). Shoshana Zuboff on “Surveillance Capitalism” and How Tech Companies Are Always Watching Us. Retrieved December 2, 2020, from https://www.youtube.com/watch?v=QL4bz3QXWEo

Commission on Security and Cooperation in Europe. (2017, September). The Scourge of Russian Disinformation. Senate Intelligence Committee. https://www.csce.gov/sites/helsinkicommission.house.gov/files/RussianDisinformation.pdf

Copeland, B. (2019, April 11). Artificial Intelligence. Encyclopedia Britannica. Retrieved April 19, 2019, from https://www.britannica.com/technology/artificial-intelligence

Chu, Arthur. “The Social Web and the Digital Panopticon.” TechCrunch, 18 Oct. 2015, techcrunch.com/2015/10/18/the-social-web-and-the-digital-panopticon.

Ellick, A., Kessel, J., & Westbrook, A. (2018, November 12). Operation Infektion [Video]. The New York Times. https://www.nytimes.com/video/what-is-disinformation-fake-news-playlist

Ewing, P. (2017, October 20). Russians Targeted U.S. Racial Divisions Long Before 2016 and Black Lives Matter. NPR. https://www.npr.org/2017/10/30/560042987/russians-targeted-u-s-racial-divisions-long-before-2016-and-black-lives-matter

“Equifax Data Breach Settlement.” Federal Trade Commission, 15 July 2020, www.ftc.gov/enforcement/cases-proceedings/refunds/equifax-data-breach-settlement.

“FactCheck.org.” FactCheck, 2020, www.factcheck.org.

“Foucault 2: Government Surveillance & Prison | Philosophy Tube.” YouTube, uploaded by Philosophy Tube, 26 May 2017, www.youtube.com/watch?v=AHRPzp09Kqc.

Furlong, R. (2020, March 4). MH17: Debunking Russian Disinformation [Video]. RadioFreeEurope/RadioLiberty. https://www.rferl.org/a/mh17-debunking-russian-disinformation/30468377.html

Frontex. “FACT, Digital Disinformation Guide.” Publications Office of the EU, 29 Sept. 2020, op.europa.eu/en/publication-detail/-/publication/7317a29c-02ca-11eb-8919-01aa75ed71a1.

Galič, M., Timan, T., & Koops, B. (2016). Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation. Philosophy & Technology, 30(1), 9–37.

Gehring, P. (2017). The Inverted Eye. Panopticon and Panopticism, Revisited. Foucault Studies, 46–62.

U.S. Mission to International Organizations in Geneva. (2021, January 21). GEC Fact Sheet: PRC Manipulation of Research to Raise Doubt on COVID-19 Origin. https://geneva.usmission.gov/2020/12/21/gec-fact-sheet-prc-manipulation-of-research-to-raise-doubt-on-covid-19-origin/

Gillis, Alexander. “Biometrics.” SearchSecurity, 25 Aug. 2020, searchsecurity.techtarget.com/definition/biometrics.

Griffin, Andrew. “Facebook’s Robots Shut Down after They Develop New Language.” The Independent, 10 Sept. 2020, www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html.

Heidegger, Martin. Parmenides (Studies in Continental Thought). Indiana University Press, 1998.

Heidegger, Martin. The Question Concerning Technology, and Other Essays. Harper Torchbooks, 1977.

Hobbes, T. (2020). Leviathan. New York: W. W. Norton.

Hosch, William. “Google | History & Facts.” Encyclopedia Britannica, 2021, www.britannica.com/topic/Google-Inc.

Hui, Yuk. “Cosmotechnics.” Angelaki, vol. 25, no. 4, 2020, pp. 1–2.

Hui, Yuk. “Cosmotechnics as Cosmopolitics.” e-flux Journal, no. 86, Nov. 2017, www.e-flux.com/journal/86/161887/cosmotechnics-as-cosmopolitics.

Hui, Yuk. Recursivity and Contingency (Media Philosophy). Rowman & Littlefield Publishers, 2019.

Inside Higher Ed, and Colleen Flaherty. “Parking Lot Wi-Fi Is a Way of Life for Many Students.” Inside Higher Ed, 8 May 2020, www.insidehighered.com/news/2020/05/08/parking-lot-wi-fi-way-life-many-students.

Lemmens, Pieter. “Other Turnings.” Angelaki, vol. 25, no. 4, 2020, pp. 9–25.

Lewis, R. (2021, March 1). Malaysia Airlines Flight 17 | Background, Crash, Investigation, & Facts. Encyclopedia Britannica. https://www.britannica.com/event/Malaysia-Airlines-flight-17

Life2Coding. (2018, September 16). Video Collections Part 4: Needs to Stop Right Now [Video]. YouTube. https://www.youtube.com/watch?v=shzwCxwqono&t=171s

Light, A. (2010). The Panopticon Reaches Within: How Digital Technology Turns Us Inside Out. Identity in the Information Society, 3(3), 583–598.

Lo, Edwin. “Interview: A Thousand Cosmotechnics.” Research Network for Philosophy and Technology | 技術與哲學研究網絡, 6 Mar. 2020, http://philosophyandtechnology.network/1266/interview-a-thousand-cosmotechnics.

Lopez, G. (2016, December 8). Pizzagate, the Fake News Theory That Led a Gunman to DC’s Comet Ping Pong, Explained. Vox. https://www.vox.com/policy-and-politics/2016/12/5/13842258/pizzagate-comet-ping-pong-fake-news

LSE Consulting, Cull, N., Gatov, V., Pomerantsev, P., Applebaum, A., & Shawcross, A. (2017, October). Soviet Subversion, Disinformation and Propaganda: How the West Fought Against It. LSE Institute of Global Affairs. https://www.lse.ac.uk/iga/assets/documents/arena/2018/Jigsaw-Soviet-Subversion-Disinformation-and-Propaganda-Final-Report.pdf

Mann, S. (2013). Veilance and Reciprocal Transparency: Surveillance versus Sousveillance, AR Glass, Lifeglogging, and Wearable Computing. 2013 IEEE International Symposium on Technology and Society (ISTAS): Social Implications of Wearable Computing and Augmediated Reality in Everyday Life.

McMullen, T. (2015, July 23). What Does the Panopticon Mean in the Age of Digital Surveillance? The Guardian. Retrieved November 30, 2020, from https://www.theguardian.com/technology/2015/jul/23/panopticon-digital-surveillance-jeremy-bentham

“Media Bias/Fact Check - Search and Learn the Bias of News Media.” Media Bias Fact Check, 23 Feb. 2021, mediabiasfactcheck.com.

Meyer, Michelle. “Everything You Need to Know About Facebook’s Controversial Emotion Experiment.” Wired, 3 June 2017, www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment.

Northern Essex Community College. “LibGuides: Fake News vs. Real News: Tips for Evaluating Information.” Northwest Arkansas Community College Library, 2021, library.nwacc.edu/fakenews/evaluating.

O’Brien, S. A. (2018, August 8). Deepfakes Are Coming. Is Big Tech Ready? CNN. Retrieved April 19, 2019, from https://money.cnn.com/2018/08/08/technology/deepfakes-countermeasures-facebook-twitter-youtube/index.html

Ovadya, A. (2019). Retrieved April 4, 2019, from http://aviv.me/media

Ovadya, Aviv. “What Do We Mean by ‘Quality News?’” Tow Center, Medium, 5 Feb. 2020, medium.com/tow-center/what-do-we-mean-by-quality-news-cec178704356.

Parham, J. (2017, October 19). Russians Posing as Black Activists on Facebook Is More Than Fake News. Wired. https://www.wired.com/story/russian-black-activist-facebook-accounts/

PBSfrontline. (2019, December 2). In the Age of AI (full film) | FRONTLINE. Retrieved December 2, 2020, from https://www.youtube.com/watch?v=5dZ_lvDgevk

Pease-Watkin, C. (2002). Jeremy and Samuel Bentham – The Private and the Public. Journal of Bentham Studies.

Perry, A., & Vile, J. (2017, May). Fairness Doctrine. The First Amendment Encyclopedia. https://www.mtsu.edu/first-amendment/article/95/fairness-doctrine

Phillips, James. “Restoring Place to Aesthetic Experience: Heidegger’s Critique of Rilke.” Critical Horizons, vol. 11, no. 3, 2010, pp. 341–58.

Plasilova, I., Hill, J., Carlberg, M., Gaubet, M., & Procee, R. (2020, May). Study for the “Assessment of the Implementation of the Code of Practice on Disinformation.” European Commission. https://ec.europa.eu/digital-single-market/en/news/study-assessment-implementation-code-practice-disinformation

Plenke, Max. “The Reason This ‘Racist Soap Dispenser’ Doesn’t Work on Black Skin.” Mic, 9 Sept. 2015, www.mic.com/articles/124899/the-reason-this-racist-soap-dispenser-doesn-t-work-on-black-skin.

“Polygraph.” Polygraph.info, 2020, www.polygraph.info.

Posetti, J., & Bontcheva, K. (2020, June 4). Disinfodemic: Deciphering COVID-19 Disinformation. UNESCO. https://en.unesco.org/covid19/disinfodemic/brief1

Qiu, L. (2017, December 13). Fingerprints of Russian Disinformation: From AIDS to Fake News. The New York Times. https://www.nytimes.com/2017/12/12/us/politics/russian-disinformation-aids-fake-news.html

Rilke, Rainer Maria, and Stephen Mitchell. Duino Elegies & The Sonnets to Orpheus: A Dual-Language Edition (Vintage International). Bilingual ed., Vintage, 2009.

Robinson, B. S. S. A. O. (2020, April 26). Coronavirus: US and China Trade Conspiracy Theories. BBC News. https://www.bbc.com/news/world-52224331

Romele, A., Gallino, F., Emmenegger, C., & Gorgone, D. (2017). Panopticism Is Not Enough: Social Media as Technologies of Voluntary Servitude. Surveillance & Society, 15(2), 204–221.

Santner, Eric. On Creaturely Life: Rilke, Benjamin, Sebald. New ed., University of Chicago Press, 2006.

Schudson, Michael. “Here’s What Non-Fake News Looks Like.” Columbia Journalism Review, 23 Feb. 2017, www.cjr.org/analysis/fake-news-real-news-list.php.

Scott, R. (Director). (1982). Blade Runner [Video file].

Scott, R. (Director). (2006). Blade Runner: The Director’s Cut [Video file].

Select Committee on Intelligence. (2019). Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Vol. 1. Senate Intelligence Committee. https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume1.pdf

Select Committee on Intelligence. (2019). Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Vol. 2. Senate Intelligence Committee. https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf

Spacey, J. (2016, May 26). What Is Artificial Knowledge? Simplicable. Retrieved November 20, 2020, from https://simplicable.com/new/artificial-knowledge

Steadman, P. (2012). Samuel Bentham’s Panopticon. Journal of Bentham Studies.

Subcommittee on Europe, Eurasia, Energy and the Environment. (2016, July). Russian Disinformation Attack on Elections: Lessons from Europe. Committee on Foreign Affairs. https://www.hsdl.org/?view&did=829237

Subcommittee on Investigations and Oversight. (2019, September). Online Imposters and Disinformation. Committee on Science, Space and Technology. https://www.hsdl.org/?view&did=846024

Tatulyan, M. (2020, September 26). The Digital Panopticon for a New Type of Capitalism. UX Collective. Retrieved November 30, 2020, from https://uxdesign.cc/the-digital-panopticon-36630d297935

TeejMaximus. “Whites Only?” YouTube, uploaded by TeejMaximus, 3 Sept. 2015, www.youtube.com/watch?v=WHynGQ9Vg30. Retrieved 2 Dec. 2020.

The MIT Press Reader. (2020, December 22). Lessons from Operation “Denver,” the KGB’s Massive AIDS Disinformation Campaign. https://thereader.mitpress.mit.edu/operation-denver-kgb-aids-disinformation-campaign/

Verbeek, P. “The Technological View of the World of Martin Heidegger.” FutureLearn, 4 Feb. 2021, www.futurelearn.com/info/courses/philosophy-of-technology/0/steps/26314.

VPROinternational, & Duong, R. (2019, December 20). Shoshana Zuboff on Surveillance Capitalism | VPRO Documentary (P. Delput & R. Schuurman, Eds.). Retrieved December 2, 2020, from https://www.youtube.com/watch?v=hIXhnWUmMvw

VPROinternational, & Kieft, M. (2018, October 26). Algorithms Rule Us All – VPRO Documentary – 2018. Retrieved December 2, 2020, from https://www.youtube.com/watch?v=NFF_wj5jmiQ

VPRO Documentary, & Kieft, M. (2017, June 11). The Real Value of Your Personal Data – Docu – 2013 | VPRO Documentary. Retrieved December 2, 2020, from www.youtube.com/watch?v=dW7k_GZYLwk

Wark, McKenzie. A Hacker Manifesto. Beck, 2005.

Watts, K. (2020). Is Everywhere a Panopticon? Social Sciences. Retrieved November 30, 2020, from https://sunderlandsocialsciences.wordpress.com/2018/09/21/is-everywhere-a-panopticon/

Wendling, B. M. (2021, January 6). QAnon: What Is It and Where Did It Come From? BBC News. https://www.bbc.com/news/53498434

“What Is a Bot | Cloudflare.” Cloudflare, 2020, www.cloudflare.com/learning/bots/what-is-a-bot.

“What Is Automation?” IBM, 2020, www.ibm.com/topics/automation.

Yglesias, M. (2015, May 6). The Amazing Jade Helm Conspiracy, Explained. Vox. https://www.vox.com/2015/5/6/8559577/jade-helm-conspiracy

Zhang, M. “What Was Bentham’s Panopticon? A Computer Model.” YouTube, uploaded by Myles Zhang, 26 Nov. 2019, www.youtube.com/watch?v=Myal-NSlIGA.

Zappar. “What Is Augmented Reality (AR)?” Zappar, 20 Jan. 2018, https://www.zappar.com/augmented-reality/.

Zhao, G. (2019, June 1). Digital Panopticon: Why Privacy Is a Human Right. Medium. Retrieved November 30, 2020, from https://medium.com/blockchain-at-berkeley/digital-panopticon-why-privacy-is-a-human-right-2ab6dae77433

Zuboff, S. (2020). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York, NY: PublicAffairs.