End User Accounts of Dark Patterns as Felt Manipulation

COLIN M. GRAY, JINGLE CHEN, SHRUTHI SAI CHIVUKULA, and LIYANG QU, Purdue University

Manipulation defines many of our experiences as consumers, including subtle nudges and overt campaigns that seek to gain our attention and money. With the advent of digital services that can continuously optimize online experiences to favor stakeholder requirements, designers and developers increasingly make use of “dark patterns”—forms of manipulation that prey on human psychology—to encourage certain behaviors and discourage others in ways that present unequal value to the end user. In this paper, we provide an account of end user perceptions of manipulation that builds on and extends notions of dark patterns. We report on the results of a survey of users conducted in English and Mandarin Chinese (n=169), including follow-up interviews with nine survey respondents. We used a card sorting method to support thematic analysis of responses from each cultural context, identifying both qualitatively supported insights that describe end users’ felt experiences of manipulative products and a continuum of manipulation. We further support this analysis through a quantitative analysis of survey results and the presentation of vignettes from the interviews. We conclude with implications for future research, considerations for public policy, and guidance on how to further empower users and give them autonomy in their experiences with digital services.

CCS Concepts: • Security and privacy → Social aspects of security and privacy; • Human-centered computing → Empirical studies in HCI; Empirical studies in interaction design.

Additional Key Words and Phrases: dark patterns, ethics, manipulation, user experience

Draft: October 20, 2020

1 INTRODUCTION

Digital products create and reinforce structures that guide our modern lives, providing a means to connect with others, share our experiences, perform mundane everyday tasks, and purchase goods and services. Modern accounts of these technologies range from the techno-optimistic to the downright dystopian—alternately channeling the power of technology to encourage equality and ease of access, or the power of technology to corrupt and undermine individual agency. While modern Science and Technology Studies (STS) scholars point out that society has been shaped by the forces of marketing and advertising since the dawn of the industrial revolution, never before have systems been able to better track consumer behavior (via the “surveillance economy” [57]) and “intelligently” respond with customized and personalized prompts to optimize for monetary gain [31, 34, 45]. This desire for market and consumer control is not new, however; as far back as the 1960s, advertising designers recognized the tensions between contributing to societal good and working as shills for “trivial purposes”: “We do not advocate the abolition of high pressure consumer advertising: this is not feasible. Nor do we want to take any of the fun out of life. But we are proposing a reversal of priorities in favour of the more useful and more lasting forms of communication. We hope that our society will tire of gimmick merchants, status salesmen and hidden persuaders, and that the prior call on our skills will be for worthwhile purposes.” [21] Of course, little has changed in modern advertising, even after this statement—known as the First Things First manifesto—was renewed using similar language in 2000 [2].

Authors’ address: Colin M. Gray, [email protected]; Jingle Chen, [email protected]; Shruthi Sai Chivukula, [email protected]; Liyang Qu, [email protected], Purdue University, 401 N Grant Street, West Lafayette, Indiana, 47907.

Today, slick advertising appeals have been replaced by “growth hacking” approaches that seek to maximize business and shareholder value by acquiring clients at the lowest cost [7]. This market tendency—when paired with modern use of overtly persuasive and manipulative design patterns—has led to the emergence of “dark UX” and “dark patterns” as key drivers for building and sustaining a customer base [9–11, 26, 38]. These dark patterns are strongly linked to their manipulative counterparts in advertising over the past 50 years, while taking form in increasingly rapid and tailored ways [37].

In this paper, we build upon the concepts of manipulation and persuasion from the perspective of the technology end user, seeking to supplement accounts of design ethics from designer [8, 38], organizational [24, 41, 42, 44], educational [15, 54, 55], and professional ethics [12, 22] perspectives with a description of how these persuasive systems impact user experiences and point towards felt qualities of manipulation. We seek to connect the discourses of dark patterns with end user experiences, with the goal of better articulating potential uptakes for public policy, user empowerment, and design ethics. We report on the results of a survey study distributed to English- and Mandarin Chinese-speaking participants (n=169), augmented by nine follow-up interviews. Through these data collection methods, we sought to elaborate participants’ experiences of digital technologies that they felt were manipulative, including the qualities that caused them to suspect manipulation at varying levels of awareness, the emotions they felt when encountering these manipulative digital products, and the ways in which these participants were aware of the creation and creators of these manipulative experiences. While we did not directly ask participants about their experiences with “dark patterns,” we used the participants’ “felt manipulation” as a proxy to investigate how dark-patterns-informed digital products might be experienced by end users. Building upon our survey and interview results, we offer an end user perspective on dark patterns, building on minimal existing literature from Maier and Harr [33] in this space to identify opportunities to better support user agency through public policy and designer/organizational practices.

The contribution of this paper is two-fold: 1) We document the felt experiences of manipulation shared by users of digital products, providing a new perspective on “dark patterns” that facilitates further work building upon the intersections of user trust, designer intent, and public policy. 2) We describe users’ perceptions of the creators of these manipulative interfaces or technology systems from multiple perspectives, including the projected role of designer ethics and responsibility in the creation of manipulative technologies.

2 BACKGROUND WORK

2.1 Manipulation in Digital Systems

Manipulation has been studied in numerous disciplinary contexts, often building upon and further specifying the conditions by which an act or experience is manipulative—beyond what we already colloquially understand to “feel” or “be” manipulative. From a philosophy perspective, Ackerman [6] contends that for an act to be considered manipulative it must first involve “getting someone to go against what he finds natural or appropriate or is otherwise inclined to do.” From a critical discourse perspective, van Dijk [47] further positions manipulation as simultaneously dealing with “a form of social power abuse, cognitive mind control and discursive interaction”—implicating both social dimensions of interaction and a bi-directional act of discursive practice that is shaped “by cognitive and social dimensions.” From a legal perspective, Waldman [49] focuses on aspects of cognitive bias that impact the kinds of strategies that digital products use to manipulate user behaviors. Waldman [49] links these legal concerns to a description of common psychological forms of manipulation that impact users’ decision making processes, including: anchoring (“the disproportionate reliance on the information first available when we make decisions”); framing (“the way in which an opportunity is presented to consumers—namely, either as a good thing or a bad thing”); hyperbolic discounting (“the tendency to overweight the immediate consequences of a decision and to underweight those that will occur in the future”); overchoice (“the problem of having too many choices, which can overwhelm and paralyze consumers”); and metacognitive processes (e.g., “perceiv[ing] difficulty as a signal of importance”).

In an HCI and CSCW context, relatively little work has directly addressed manipulation as a construct in relation to digital systems, although such a concept is clearly implicated in the study of restrictive online community practices, disinformation campaigns, and even in mechanisms used to discourage direct action against platforms. Frequently, “nudging” is used to describe a light form of manipulation whereby a system maintains user agency but privileges one type of behavior or outcome over another [51, 52]. As stated by Weinmann et al. [51], “Even simple modifications of the choice environment in which options are presented can influence people’s choices and ‘nudge’ them into behaving in particular ways,” concluding that “there is no neutral way to present choices.” In contrast, other scholars such as Fogg [19] have actively called for the identification and harnessing of principles of persuasion, with the contention that these principles can be used to promote user experiences where behavior modification is desirable (e.g., increasing motivation, reducing addictive behaviors). However, the liminal spaces among manipulation and persuasion are actively in contention, with arguments over the role of user agency, social good, and digital products. Recently, Susser et al. [45] have described the intersection of many of these concerns in relation to manipulation on online platforms, identifying the potential harm and loss of autonomy that these experiences foster. In this paper, we particularly rely upon Susser et al.’s [45] definition of manipulation: “manipulation is hidden influence.
Or more fully, manipulating someone means intentionally and covertly influencing their decision-making, by targeting and exploiting their decision-making vulnerabilities.” While in the original source, these authors also seek to distinguish manipulation from persuasion, nudging, and coercion, due to the lack of literature relating user experiences of felt manipulation to existing known dark patterns, we use the term manipulation in a broader sense, while also recognizing the need for more work in this area to connect end user experiences with designer intent. We also rely upon recent work that describes the role of crafty psychological nudging through the incorporation of “dark patterns” [9, 26, 34] and instances where coercion is blatantly described in ways that often force users to make decisions against their own best interest—what Gray and colleagues describe as “asshole designs,” [25] building on a subreddit by the same name.

2.2 Dark Patterns and the Optimization of Shareholder Value

A surge of work in the past five years has focused attention on persuasive, manipulative, and coercive practices in the design of digital products, with the moniker of “dark patterns” appearing as the most frequently occurring and compelling label for practitioner and legal discourse around technology ethics. The neologism of dark patterns was coined in 2010 by UX practitioner Harry Brignull, who also holds a PhD in cognitive science. This concept brought together interest in persuasive technologies (e.g., captology [19]), the rise of user experience (UX) and user research as means of better understanding the capabilities and needs of users, and the rise in optimization and personalization of systems. Gray et al. [26] summarized and extended a typology of dark patterns strategies in 2018, expanding upon Brignull’s initial definition to identify these patterns as instances where “user value is supplanted in favor of shareholder value.” While Brignull’s initial typology of dark patterns included mostly context-specific instances [11], largely drawing on examples from e-commerce, the work of Gray et al. [26] shifted this language to derive five strategies that designers can take in order to create manipulative experiences that foreground shareholder value: nagging, obstruction, sneaking, interface interference, and forced action. Gray et al. [25] have recently described connections between dark patterns

and “asshole design,” identifying the former strategy as a sneaky and surreptitious form of persuasion, while the latter reveals itself in overtly coercive ways. Building on both Brignull and Gray et al., a wide range of work has been published in the last three years by academics and practitioners alike, drawing additional attention to the misuse of knowledge about users to coerce, manipulate, or persuade (e.g., [28, 37]). In the design practitioner space, this includes conversations about design intent and responsibility [46, 53] and identification of specific contexts where dark patterns have been employed [40, 48, ?]. In the academic space, scholars have addressed dark patterns in gaming [17, 56], robotics [30], and proxemic sensing [28]. Collections of anti-patterns [32, 35], privacy patterns [20], and bright patterns [23] have continued to expand the dark patterns literature. A newer line of scholarship addresses the incidence of dark patterns in relation to data privacy and security, including investigation of related legal issues [23, 27, 31, 39, 43], dark patterns in e-commerce [36, ?], and manipulation in mobile applications [18]. Increasingly, the notion of dark patterns is being used to connect scholarship across legal, interaction design, user experience, privacy and data protection, and ethics domains—foregrounding the need for users to have agency as they use digital systems.

These efforts to describe how dark patterns can result in a lack of transparency or user agency have been raised as a space for new policies to be formed. In the European Union, legal action in response to the General Data Protection Regulation (GDPR; [3]) has been leveraged to indicate where dark patterns may limit free and unambiguous consent [27, 43], while similar data protection measures in the United States (e.g., the California Consumer Privacy Act [5]) may yield a similar platform for increased user protection, alongside experimental work by legal scholars (e.g., [31]). The DETOUR Act has also been proposed in the United States Senate as a means of banning certain forms of dark patterns [29].

Of particular note in relation to this current paper is the work of Maier and Harr [33], who conducted a set of interviews and focus groups in order to describe users’ experiences and perceptions of dark patterns. This work concluded that perception related to user impressions, assessment, balance, and acceptability, pointing towards the ongoing sensemaking that users engage in to determine if or to what degree they are being manipulated. We explicitly build upon this work, using a survey study with English-speaking and Mandarin Chinese-speaking users to reveal broader patterns and examples of users’ perceptions of manipulation.

3 OUR APPROACH

We used a mixed methods approach to describe end users’ experiences and perceptions of manipulative technology interfaces, collecting user responses through a survey study and a follow-up interview study with interested survey respondents. We collected data from English-speaking and Mandarin-speaking users, not primarily as a means of “proving” cross-cultural differences, but rather as a way to problematize the notion of manipulation across cultural and geographic boundaries. It is not our ambition to directly compare data arising from these unique contexts, but rather to identify and articulate as broad a set of interpretations of manipulative user interface practices as possible. Through these efforts to describe the felt experiences of users in relation to their daily interactions with technologies, we answer the following research questions:

(1) What makes digital products feel “manipulative” to end users?
(2) What emotions do end users relate to past manipulative experiences with digital products?
(3) How aware are users of the creation and creators of manipulative technologies?

In this section, we detail our research contexts, data collection methods for the survey and interview studies, and mixed methods data analysis.

3.1 Survey Design

We conducted a survey study to identify end users’ daily interactions with manipulative interfaces, their emotional reactions to the manipulation, and their perceptions about the creators of these technology manipulations. We deployed a 22-question survey that included a mix of multiple choice and open-ended questions to allow participants to illustrate their felt experiences of manipulation. The sections of the survey aimed to record: agreement to participate in our study; demographic information of the respondent (gender identity, age, profession, country of residence); patterns of daily usage of smartphone applications or websites on their personal digital devices; open-ended examples of manipulative technology interactions (e.g., “What makes you feel mistrustful of a smartphone application?”); their immediate and lasting emotional reaction to this instance of manipulation (listing negative emotions derived from psychology studies on emotion [1, 16] and negative affect [50]); their perception of being valued as a person or customer (drawn from a study on dark patterns [26]); and their awareness of the creators of manipulative interfaces. We concluded the survey by asking respondents if they would be interested in participating in a follow-up interview.

As a part of validating our survey protocol, we conducted a pilot study with over 50 participants to receive feedback about the average time taken to complete the survey and the framing of the questions. The survey protocol was then modified based on feedback from the pilot responses and validated in English on Qualtrics by a group of researchers that represented various forms of practitioner and design engagement. After finalizing the protocol in English, the protocol was translated into Mandarin Chinese by a researcher whose native language is Mandarin. The translation was then reviewed by another native Mandarin-speaking researcher to further improve its accuracy.

3.2 Survey Distribution

The final survey was distributed through the personal networks of the research team and through social media, including WeChat and groups about shopping and parenting. Our goal was to target “everyday users” who regularly used digital products but did not have detailed knowledge about the design or development of these products. Participants who finished the survey had a 1 in 50 chance to win a $10 USD online shopping gift card; Chinese-speaking participants instead had a 1 in 50 chance to win WeChat currency of the same value (around 70 RMB). After data collection was complete, we downloaded the responses into a spreadsheet for further analysis.

3.3 Summary of Survey Responses

In total, we received 253 responses to the survey, including 169 respondents that completed the survey fully (a 65.61% complete response rate) and met our inclusion criteria. Out of the 169 completed responses, the geographic distribution of participants was as follows: China (n=103), United States (n=40), United Kingdom (n=9), Canada (n=5), and other (n=13). As our study was not particularly focused on comparative analysis based on geography, we grouped all English-speaking responses together for further analysis. The gender identity distribution of participants was as follows: female (n=108; 61.7%), male (n=62; 35.6%), and did not disclose (n=4; 2.3%). The age distribution of participants in our data set was as follows: 18-24 yrs (n=19), 25-34 yrs (n=39), 35-44 yrs (n=19), 45-54 yrs (n=58), 55-64 yrs (n=30), 65-74 yrs (n=2), and did not disclose (n=7). Removing the 7 responses that did not disclose their age, the average age of the participants is 42 years, with a standard deviation of 12.89 years. The survey participants had substantial familiarity with technology; when reporting on the combination of mobile and desktop device use, 82.2% (n=143) of the participants identified themselves as heavy technology users (between 5-14 hours of use per day) and only 15% (n=26) as lighter technology users (1-3 hours of use per day). The survey respondents were from a range of professions and backgrounds including freelancers, homemakers, students, printers, nurses, nutritionists, money lenders, engineers, software trainers, technicians, attorneys, administrative assistants, therapists, business executives, and CEOs.

3.4 Follow-up Interviews

After collecting our survey responses, we conducted follow-up interviews with a subset of American and Chinese participants. We identified this subset by balancing demographic characteristics and experiences with technology in each cultural context, resulting in five Mandarin speakers and four English speakers. Each interview was approximately 15 to 30 minutes in duration and was conducted by one or two researchers. All interviews were conducted in the native language of the participant. We used a critical interview approach [13], seeking to guide participants to share their point of view and subjective experiences with technology. The first section of the interview was framed by the participant’s survey responses, including more detail on their stories of being manipulated and what aspect(s) of technology felt manipulative or unduly persuasive to them. In the second section, we focused on the participant’s perception of the designer of manipulative technologies, asking questions such as: “Why do you think someone created this manipulative experience?” and “How do you think the designer knows how to persuade you to act in a certain way?” Participants were encouraged to provide stories and specific experiences to support their answers, and we asked follow-up questions to facilitate the conversation. The interviews were voice-recorded with the permission of the participant and were transcribed for analysis. We used online transcription tools for English-speaking participants and manually transcribed recordings of Mandarin-speaking participants. We retained the dialogue in Mandarin and provide excerpts in the original language to retain the authenticity of the communication and emotions behind user stories.

3.5 Data Analysis

Given the different data collection methods, we adopted a mixed-methods analysis approach including: 1) card sorting to analyze open-ended survey responses, answering RQ#1; 2) quantitative analysis of choice- or scale-based questions, answering RQ#2 & RQ#3; and 3) case analysis to present vignettes from interview narratives, elaborating on RQ#3. We detail each of these analysis procedures in the subsections below.

3.5.1 Card Sorting. The open-ended questions in the survey prompted a range of examples and stories from the participants in both Chinese and English. Having received responses from multiple contexts and languages, we used card sorting as a means of foregrounding cultural complexity in relation to manipulative technologies. To prepare for the physical card sort activity, we separated the 169 survey responses into 609 data units. Each response was printed on a 5-inch square piece of paper, and these printouts served as ‘cards.’ The cards were not separated by language; rather, we mixed them together and treated them equally. This activity was conducted by four researchers: one from the USA, two from China, and one from India. All researchers were trained in qualitative thematic analysis and card sorting methods through previous design and research projects. We conducted the physical card sort as a stand-up activity on a large table, with sufficient room to move around and display as many cards as possible. We collaboratively created categories as each researcher read off their cards, translating if necessary and discussing our collective interpretation. We also occasionally made use of the Translate app to augment the Chinese speakers’ interpretations and further discuss the preliminary themes. After we sorted approximately 100 cards, all members began to have a shared understanding of the categories and felt the categories had been exhausted. The physical activity of sorting cards allowed us to replicate the cards wherever required and place them under multiple groups. The physical sorting also allowed us to position the responses in the form of a continuum to represent a sense of progression of manipulation based on the examples shared. We present this spectrum in Section 4.2. The two Mandarin speakers then focused mainly on Chinese responses, while the other researchers categorized most of the English responses and assisted with the grouping of Chinese responses based on Chinese characters that had become familiar due to their frequency. Over a three and a half hour session, this initial round of cross-cultural card sorting allowed us to condense the 609 data units into 12 preliminary themes, which were later condensed into salient themes based on the “felt” manipulation of the users’ experiences with digital products to answer RQ#1 in Section 4.1.

3.5.2 Quantitative Analysis. We conducted basic descriptive statistical analysis on the choice- or scale-based questions asked in the survey. We performed the following types of statistical analysis to answer RQ#2 & RQ#3: 1) To answer RQ#2 in Section 5, we analyzed the frequency of responses to the listed emotions across the scale from “Very Slightly or Not at all” to “Extremely.” We also plotted the most frequently felt emotions by calculating frequencies of “Quite a Bit” and “Extremely” against different age segments; 2) To answer RQ#3 in Section 6.1, we analyzed and plotted the frequencies of users’ awareness of being manipulated across each age segment, including who they blamed for the manipulation; and 3) To answer RQ#3 in Section 6.2, we analyzed and plotted the frequencies of how users felt they were valued, both in general and among those without awareness of being manipulated. In the respective sections, we provide further detail on our calculations.
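As an illustration of this tabulation step, the following minimal Python sketch counts strongly felt emotions (“Quite a Bit” or “Extremely”) within each age segment. The column names, response labels, and toy data are our own illustrative assumptions; the paper does not specify the tools used for this analysis.

```python
# Minimal sketch of the descriptive tabulation described above (illustrative
# column names and toy data; the study's actual analysis tooling is unspecified).
import pandas as pd

# Each row: one respondent's age bracket and their rating of one emotion.
responses = pd.DataFrame({
    "age_group": ["18-24", "18-24", "25-34", "45-54", "45-54"],
    "emotion":   ["Irritable", "Upset", "Upset", "Irritable", "Hostile"],
    "rating":    ["Extremely", "Quite a Bit", "Very Slightly or Not at all",
                  "Moderately", "Extremely"],
})

# Keep only strongly felt ratings, then count each emotion per age segment.
strong = responses[responses["rating"].isin(["Quite a Bit", "Extremely"])]
counts = strong.groupby(["age_group", "emotion"]).size().unstack(fill_value=0)
print(counts)
```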

3.5.3 Case Analysis. We collected vignettes of users’ narratives as they described the creators of manipulation in the semi-structured interviews. To elaborate further on participants’ levels of awareness (Section 6.1) and their perceptions of the creators of manipulative products (Section 6.2), we present three vignettes that describe each end user’s felt experience of manipulation, who they blamed, and the ways in which they wished to respond to the creators of that manipulation. These cases are not meant to be conclusive or representative of the whole data set, but rather serve as a means of raising examples of and reactions to manipulative experiences to frame our findings and point to opportunities for future work.

4 RQ#1: DESCRIBING END USERS’ FELT EXPERIENCES OF MANIPULATION

To evaluate users’ experiences of manipulation in relation to digital products, we first identify what aspects of technological systems may help the user to identify that they are being manipulated (Section 4.1). We then build upon this analysis with a continuum of manipulation, sorting characteristics of manipulation along a temporal progression from initial impressions to longer-term user interaction (Section 4.2).

4.1 Perceptions of ‘Manipulation’

First, we seek to describe what aspects of technological systems feel “manipulative” to end users. In the following sections, we describe a range of areas that impacted this felt manipulation, including: distrusting the digital product (n=103), feeling concern about the privacy of their personal information (n=72), knowledge of being tracked without explicit notification as part of a larger “big data” threat (n=51), identifying barriers to feeling secure (n=45), being aware of explicit manipulation tactics (n=28), and the use of freemium products (n=21).

4.1.1 Overall Perception of Trust or Distrust. In this theme, manipulation is perceived through the lens of who or what users trust and distrust (n=103). Lacking trust in the digital product can lead users to feel as if they are being manipulated, even if the specific source of this feeling of trust or distrust is less clear. Users from both cultural contexts mentioned a sense of insecurity when interacting with “unfamiliar” digital technologies. In these cases, some users tended to pay more attention to the particular characteristics of manipulative digital technologies (see Section 4.2) that they may have identified based on their previous experiences, leveraging this knowledge to actively find evidence to prove their assertion that the product was created by “scammers” or “骗子 (swindlers).” However, if users felt that the digital product sought to build a sense of trust, they were more likely to use information from the product as a reliable information source. In fact, it appears that both Mandarin- and English-speaking users frequently relied on external information sources to identify the potential for manipulation, using this judgment to shape their overall trust of the digital product. If one or more of these sources suggested that a digital product was manipulative or insecure, users appeared much more willing to label the digital product as “manipulative software.” Users may certainly trust one source more than another, but we identified some common information sources that generally appeared in both cultural contexts, including: friends, public broadcasting, reliable software (e.g., browser or anti-virus alerts), and user ratings/reviews from certain trusted platforms. Interestingly, we discovered that cultural bias may affect perceptions of who or what is unreliable; English-speaking participants often used peripheral cues (e.g., “website that is hosted in a different country” and “poor developer documentation”) as a way to avoid potentially manipulative products. In parallel, Chinese participants were unlikely to trust “bundled software (捆绑软件),” which refers to instances when related yet unwanted software was installed alongside the desired application on the user’s personal device.

4.1.2 Collection of Personal Information. In this theme, users perceive manipulation in relation to a lack of privacy as they interact with digital products (n=72). When a digital product demands or collects a user’s personal information in an aggressive way, users may feel that the product is challenging their privacy in a manipulative way, complicated by instances where users cannot even comprehend “why [the digital product] need[s] my info.” Responses in this theme revealed that users are not always resistant to providing basic information (e.g., name, email address) to a digital product in exchange for convenience or value. However, users expressed the existence of a threshold when a digital product tries to gather “too much or unnecessary” personal information. Often, this threshold was reached when digital products sought to collect information that could be deemed sensitive. Sensitive information mentioned by participants included: a mailing address, contact information, “身份证号码 (a personal identification number) or Social Security Number,” and payment-related information. Similarly, asking for an unreasonable level of permissions on an app was also seen as a way that a digital product could demand personal information “aggressively.” Users were especially skeptical and sensitive when digital applications asked for location and contact list permissions, which tend to disclose more private information about users. Culture also plays a potential role in this theme, which provided us interesting insights about how digital products manipulate users into providing personal information. Some Mandarin-speaking participants mentioned a concern about “钓鱼网站 (phishing websites),” which often brand themselves to look similar to another trustworthy website and lead users to input sensitive information. English-speaking participants also mentioned being wary of a “long user agreement” in which companies often hide unfair use and collection of a user’s information.

4.1.3 Threats of Big Data. Building on the previous theme, users’ concerns about the potential threats of Big Data in relation to privacy and data collection were also a common way that users perceived a manipulative threat (n=51). Instead of emphasizing the collection of information that users deemed “sensitive,” this theme focuses on how users perceive the tracking of their interactions with digital products and how product stakeholders may use these interaction data to achieve business goals without explicitly notifying users, in ways that felt manipulative. Although users in both cultural contexts were aware that “data theft” exists and that digital products are “窃听后台分析数据 (tracking and analyzing the data in the back end),” participants did not characterize their digital footprint in detail or describe how it could be tracked at an abstract level. Instead, our participants provided details about how these interaction data were concretely utilized by products and services. Two main types of uses that felt manipulative were identified: advertisements and search results. Some users mentioned having been shown “ultra-specific advertisement[s]” based on their interests. The display of these advertisements could be based on users’ interactions within the same digital product that displayed the advertisement. However, based on users’ impressions, the knowledge of a user’s interests usually came from other digital products, which supported some users’ claims that their “data was sold for business purposes.” Search results were identified as another common usage of users’ data. For example, results may be displayed based on a user’s interests rather than the keyword(s) that the user input, which causes “inaccurate results of search.” Additionally, a Mandarin-speaking participant indicated that “搜索结果全部指向百家号 (the search results are all related to BaiJiaHao, a media platform owned by the largest Chinese search engine, Baidu).”

4.1.4 Barriers to Feeling Secure. When users felt their property or information was directly threatened because of using a digital product, they felt they had been manipulated. We categorized this theme as “barriers to feeling secure” (n=45). Three major security barriers were identified by participants from both cultural contexts: fraud or scams, viruses, and hacking. Unfortunately, our respondents did not provide further details for each of these sub-themes. However, we did discover that Mandarin-speaking participants brought up more concerns about these security-related issues, such as “不良软件 (illegal software)” and “黑网站 (fake/unreliable websites).”

4.1.5 Awareness of Explicit Manipulation. In this theme, manipulation was identified in relation to an explicit and specific form of user manipulation that participants had experienced (n=28). A small group of participants explicitly listed a specific form of user manipulation, or “套路 (tricks),” or user knowledge that could be used to create manipulative experiences. Our survey responses indicate that several users appeared to be aware of some common manipulation techniques and mechanisms, and when they encountered a similar situation, they could quickly identify these potential forms of explicit manipulation. As one example, a respondent discovered that “the placement of the agree button is often where the enter to continue button is [placed],” reflecting location-based dark patterns that are commonly deployed by developers. Designers rely upon manipulative design techniques such as these to trick, manipulate, or even coerce users into accepting rules or policies that they would not otherwise agree to, since many users have the tendency to click buttons such as “Next Step” or “Continue” without reading the content carefully.

4.1.6 Use of Freemium Products. In this theme, we identified that some users were especially sensitive to freemium products (n=21), which include digital products or services that require user payment to proceed or maintain access after using the product or service for free. A requirement of payment, or “收费 (require to pay),” often caused users to view a digital product as potentially manipulative, and even the “possibility of in-game (or in-app) purchase” could result in questioning the trustworthiness of a digital product. This high sensitivity appeared to be related to the lack of direct and precise disclosure of the digital product’s business goals to users, which was questioned by users in both cultural contexts. Freemium products were reported to be carefully crafted by their creators, with the potential for disruption of the user’s experience used as a strategy to manipulate users into paying. This appeared in the form of “payment traps,” “getting me to pay to unlock digital items,” “使用过程中强制性收费 (mandatory fees in the middle of usage),” and “以各种目的让你付钱 (using every excuse to let you pay).” Users also felt that this manipulation could be more

intensely felt when there was a mismatch between users’ expectations and reality: for example, instances where the freemium product was advertised as free to use.

4.2 Temporal Progression of “Felt” Manipulation

Through discussion, we discovered that visual and interactive characteristics could contribute greatly to end users’ determination of a technology’s trustworthiness. To investigate this behavior further, we organized the instances along a continuum from initial and general impressions of a product to extended use and interaction with the product (Table 1). We report on each element of the continuum in greater detail in the following subsections.

Characteristic                             Definition
Initial Judgment (n=24)                    Initial negative impression or judgment.
Inspection of Details (n=82)               Negative impression after inspection of interface details.
Felt Persuasion (n=34)                     Negative impression after felt technology persuasion.
General Conclusion (n=14)                  Conclusion of negative sentiment after considered interpretation of usability and usefulness.
Undesired Interaction (n=30)               Conclusion of negative sentiment after undesired or unnecessary interactions occur.
Negative Results from Interaction (n=24)   Conclusion of negative sentiment after user interaction results in unwanted outcomes.

Table 1. Specific characteristics of manipulation (n=208): the top of the table is focused on initial and general impressions, moving towards the bottom, which is focused on direct user interactions.

4.2.1 Initial Judgment. General impressions about a technology often relied upon an immediate “直觉 (intuition)” that the digital product was manipulative, supported by a feeling that “something seemed to be wrong” or judgments that the digital interface “looked unprofessional (看起来不正规).” Some English-speaking participants augmented this understanding by mentioning “bad or flashy design” as a potential source of early judgment that a digital product might be manipulative.

4.2.2 Inspection of Details. Specific details about how content was displayed were often the focus of users after their initial judgment. Advertising was a dominant focus of attention, and based on our analysis, end users in both cultural contexts did not appreciate “advertising driven” products; the presence of advertising brought additional scrutiny. Participants brought up three aspects of ads to identify “advertising driven” interfaces: quantity (e.g., “too many advertisements”), quality (e.g., “sketchy advertisements”), and format (e.g., “intrusive ads” or “pop ups”). Beyond advertisements, other details of a user interface could also raise a red flag for users, such as “whenever it has the option to link to Facebook, Google etc.,” “出现不健康信息和图片 (the appearance of contextually inappropriate content),” and “misspellings.” One participant who specifically focused on the more technical aspects of the technology identified that they “read the code” or “monitor[ed] the traffic” as a way of inspecting for manipulation.

4.2.3 Felt Persuasion. Experiencing and identifying direct persuasion was another determinant for users to not trust a technology. Many people identified that they were extremely careful when they detected “extreme statements” or content with strong biases, such as “news articles that are mainly used to support political opinions,” “obviously twisted and misleading information,” and other similar instances. These forms of content were seen to be “clearly playing on emotions and political leanings” a person may have. “Stories that seem too good to be true” also raised concerns among our participants because people felt that these statements were disingenuous. Another identified type of persuasion was offering incentives for doing something, or even nothing. Some websites encouraged users to share their information by promising “free” goods or financial benefits, including “[a] website [that] wanted me to sign up in order to see the content.” Some sneakier methods of persuasion that are commonly used were also identified by some users; for example, “闭合性的选择 (closed-ended choices)” represents a situation where the digital product provides users with limited options which are similar or related to each other. No matter which one the user chooses, it will always benefit the creators of that digital product.

4.2.4 General Conclusion. People were also able to evaluate the technology after initial interaction, using their initial judgment alongside this interaction to come to a reasoned conclusion. Instances such as “无用 (useless)” and sentiments that the site was “not user friendly” demonstrate the capacity of users to reach a generalized conclusion about the site. Some users also concluded that digital products were not useful to them or had a bad user experience, linking behaviors such as “使用麻烦 (complicated to use)” to the potential for manipulation.

4.2.5 Undesired Interactions. After more in-depth engagement with the digital product, users tended to evaluate the desirability of certain interactions in relation to the potential for manipulation. When the experience was ideal, users were able to locate the content they expected to see and could predictably interact with the product. However, if “something undesired happened” when the user was trying to accomplish a specific goal, the user might feel a sense of manipulation. Many participants identified more specific causes for these undesired interactions, which were often designed intentionally to produce unwanted results for users. For instance, one user mentioned “click[ing] on [an] article [and the article] turns into ads,” while other users mentioned “不经意跳转各种奇怪的网页 (inadvertently jumping to various weird web pages),” “强制跳转其他网站 (forcing me to go to other websites),” and “interrupting ads to buy more things” as undesirable behaviors that resulted in an assessment that the product was being manipulative. Beyond these intentional design choices that pointed towards manipulation, some users stated that system errors, such as “bugs” or “glitches,” were another cause of their undesired interactions and could lead to an assessment of quality that intersected with manipulative intention.

4.2.6 Negative Results from Interaction. Some users only realized that they were manipulated after interacting with the technology for a period of time, after experiencing negative consequences or being harmed by interacting with a manipulative user interface. Our participants shared a wide variety of these negative results, including “using lots of data,” “自动下载 (downloading [unwanted software] automatically),” “影响系统运行 (affecting the operating system’s performance),” and “影响工作与学习 (affecting my work and study).” The “roach motel” was a common pattern that resulted in delayed and negative results; many participants listed situations such as “无法轻松删除 (unable to delete easily),” “无法退出 (unable to exit),” “opt out pathways were very difficult to find,” and “subscription trap[s]” as examples of manipulation that may take extended use to discover. Finally, some users came to the conclusion that they were being manipulated only after they experienced negative impacts from extended interactions with a digital product. One common strategy that emerged in this extended use context was gamification, which was used to “encourage” or “nudge” users to perform undesired tasks repetitively for certain benefits—combining the notion of “performing tasks to gain something (完成任务获得一些东西)” and other culturally-specific examples such as “walking for cash (走路赚钱)”—a reference to a famous Chinese “step for cash” mobile fitness application, QuBu (趣步), which allowed users to earn cash by achieving a certain number of steps and inviting friends. English-speaking participants also described similar experiences, with one user reporting: “I felt like I had to check the app to keep up.”

5 RQ#2: EMOTIONS RELATED TO MANIPULATIVE EXPERIENCES

Fig. 1. Users’ Emotional Response to Manipulation: (a) Emotional Response; (b) Most frequently felt emotions vs. Age.

We asked users to rank their felt emotions in relation to past manipulative digital product experiences on a scale from “Very Slightly or Not at all” to “Extremely” using a list of the following negative emotions: Distressed, Upset, Guilty, Scared, Hostile, Irritable, Ashamed, Nervous, Jittery, and Afraid; illustrated in Figure 1(a). Users frequently expressed that they felt strong emotions such as being distressed (n=82), upset (n=107), hostile (n=89), and irritable (n=86) when they experienced manipulative products. Users also reported more muted negative emotions of being scared (n=59), nervous (n=52), jittery (n=62), and afraid (n=56) through the manipulative experience. Users responded least strongly (“Very Slightly or Not at all”) to feeling guilty (n=123) or ashamed (n=121) through their manipulative experience, which resonated with the smaller number of users (n=34) who blamed themselves for the manipulation (results detailed in Section 6.1). In Figure 1(b), we illustrate the frequency of commonly felt emotions (adding responses for “Quite a Bit” and “Extremely”) as represented across different age groups. The bar heights reflect that the number of respondents in each age group was uneven; these initial non-normalized descriptive statistics show how users emotionally responded to felt manipulation and may point to interesting areas for future work. Within each age group, we observed a somewhat different proportion of users feeling irritable, hostile, or upset; for example, the proportion of users expressing irritability in the age group 18-24 yrs was 55.6% (n=10), while for the age group 45-54 yrs, 27.8% (n=16) of users expressed this emotion.
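Because the number of respondents per bracket is uneven, raw counts are not directly comparable across age groups; dividing each count by its bracket size yields comparable proportions. The sketch below illustrates this normalization step with placeholder numbers (the counts come from the text above, but the exact denominators used in the paper are not specified, so the bracket sizes here are approximations drawn from Section 3.3).

```python
# Sketch: normalizing per-age-group emotion counts by bracket size so that
# proportions are comparable across unevenly sized groups. The counts and
# bracket sizes below are illustrative placeholders, not the study's raw data.
irritable_counts = {"18-24": 10, "45-54": 16}   # "Quite a Bit" + "Extremely"
bracket_sizes = {"18-24": 19, "45-54": 58}      # respondents per bracket (Sec. 3.3)

for bracket, count in irritable_counts.items():
    pct = 100 * count / bracket_sizes[bracket]
    print(f"{bracket}: {count}/{bracket_sizes[bracket]} = {pct:.1f}%")
```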

6 RQ#3: USERS’ AWARENESS ABOUT THE CREATION AND CREATORS OF MANIPULATIVE PRODUCTS

Fig. 2. Users’ Awareness of Being Manipulated by Age Group.

6.1 Level of Awareness

In this section, we describe users’ awareness of being manipulated, including who they blame or feel is responsible for creating manipulative products, and how often they mistrust the smartphone applications or websites they regularly interact with. When asked if they felt that smartphone applications or websites were designed to manipulate them, 79.3% of users were aware of being manipulated (‘Yes’: n=134; ‘No’: n=32; n=3 did not respond). The overall proportion of users responding to this question was relatively stable by age, with more than two-thirds of respondents in each age bracket responding in the affirmative (Figure 2).

Fig. 3. Users’ Awareness of the Creators of Manipulative Interfaces: (a) “Who do you think is to blame?”; (b) Venn diagram of which creators (designers, developers, other stakeholders) were blamed.

We also asked the users in a multi-option question who they felt was to blame for the manipulation they experienced in digital products; they could select one or more items from a list including: 1) “I am to blame, as the user”; 2) The designer

of the site or application (e.g., the designer created the application or website to trick me into doing something); 3) The developer of the site or application (e.g., the developer created the application or website to trick me into doing something); 4) Other stakeholders (e.g., a company; the company wants to trick me into doing something); 5) Technological issues (e.g., screen or device quality); and 6) Other, with an optional textbox to add to the list. As shown in Figure 3(a), most of the users blamed the designer (n=97), other stakeholders (n=92), and the developer (n=54) for creating manipulative applications or websites intended to trick them. A minority blamed themselves as the user (n=34), and users rarely blamed feelings of manipulation on technological issues (n=8). Open-ended responses (n=10) included statements such as: “The owner of the application and/or advertiser” and “...it’s not intended to manipulate and has a justifiable reason, but it isn’t communicated well.” To demonstrate the overlaps among these targets of blame, we present a Venn diagram in Figure 3(b) to describe how users blamed the varying creators (designers, developers, and other stakeholders) of smartphone applications and websites. In these overlapping responses, 14.2% (n=24) of the users blamed all of the creators of these digital products—designers, developers, and other stakeholders; 71.6% of the users blamed only designers and company stakeholders (n=121); and the fewest users (n=28) blamed only the developer of the application or website.
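Since respondents could select any combination of creators, each region of the Venn diagram corresponds to one exact combination of selections. A minimal sketch of how such regions can be tallied from multi-select responses follows; the toy responses are our own illustrative assumptions, not the study data.

```python
# Sketch: tallying Venn-diagram regions from multi-select blame responses.
# Each respondent selects a subset of creators; each distinct subset is one
# region of the diagram. The toy responses below are illustrative only.
from collections import Counter

responses = [
    {"designer"},
    {"designer", "other stakeholders"},
    {"designer", "developer", "other stakeholders"},
    {"developer"},
    {"designer", "other stakeholders"},
]

# frozenset makes each selection combination hashable, so Counter can
# count how many respondents fall into each exact region.
regions = Counter(frozenset(r) for r in responses)
for region, n in regions.most_common():
    print(sorted(region), n)
```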

Fig. 4. Users’ Response to Mistrust of Smartphone Applications or Websites.

Finally, we asked the users how often they mistrust the smartphone applications and websites they regularly interact with. In Figure 4, we present the frequencies of these responses on a scale of “rarely,” “sometimes,” “frequently,” and “very frequently.” For smartphone applications, 17.8% (n=30) of the users rarely felt mistrust, while 31.4% (n=53) frequently mistrusted these applications. Similarly, for websites, 10.7% (n=18) of the users rarely felt mistrust and 36.7% (n=62) frequently mistrusted websites. Although the difference is minimal, users appeared to mistrust websites more commonly than smartphone applications. A majority of users (n=86 for smartphone applications; n=89 for websites) identified that they mistrusted their interactions with these digital products “sometimes,” and combining the results of “sometimes,” “frequently,” and “very frequently” yielded a majority of responses for both smartphone applications (82.24%; n=139) and websites (89.3%; n=151).

6.2 Creators of Manipulative Digital Products

We asked the users how they felt they were valued by the creators of smartphone applications and websites, using a scale from “strongly disagree” to “strongly agree” to respond to two statements: 1) “I was valued as a person” and 2) “I was valued as a customer.” As shown in Figure 5(a), 62.11% of the users (n=100) felt they were valued as a customer, with responses of “agree” or higher, while only 27.6% (n=42) felt they were valued as a person, with responses of “agree” or higher.

Fig. 5. Users’ Sense of How They Were Valued: (a) How all users felt they were valued; (b) How users who were not aware of manipulation felt they were valued.

50% of the respondents (n=76) responded with “disagree” or lower to “I was valued as a person.” To further explore this positioning as a person or as a customer, we also identified how users responded to these two questions if they had previously identified that they were not aware of being manipulated (n=32), shown in Figure 5(b). Even when participants were not aware of the manipulation, they still felt that they were valued more as a customer than as a person, representing what appears to be a broad consensus that the creators of digital products treat users as consumers regardless of the presence or awareness of explicit manipulative techniques.

6.3 User Perceptions of Digital Product Creators

In this final section, we present vignettes from our follow-up interviews with users in order to elaborate on users’ awareness about the creators of manipulative digital products and relate these experiences to perceptions of manipulation and the progression of these feelings over time (Section 4.2). We present three pseudonymous vignettes below, including Amy (English), Liu (Chinese), and Wang (Chinese).

6.3.1 Vignette 1: Trapped in Auto-Subscription. Amy began her interaction with a digital product by clicking on a Facebook post about a face cream that she later realized was “too good to be true.” At the time, this interaction did not cause her to be aware of any explicit manipulation, but after realizing that she had been auto-subscribed, Amy felt upset because she didn’t “know everything in the beginning.” Upon later reflection, she felt that the creators of this interaction set a trap by hiding necessary information, not allowing her to make the smartest decision. This auto-subscription ended up costing Amy both money and time—identifying that the face cream auto-renewed, and then working to cancel the subscription. Because she did not sense the manipulative intent in her initial “inspection of details,” the consequences of being manipulated were only experienced after felt “negative results from interaction.” After this negative experience, she blamed herself as a user, but also cast blame on the designers of the interaction and other stakeholders responsible for the negative results. She blamed herself for not being smart enough to identify the “trap” and, in the future, told us that she would ignore similar social media posts or advertisements that felt “too good to be true” to avoid being “in the same situation.” Relating her experience to the perceived creators of this interaction, she believed that “designers have total control over what [users] see” and, as creators, “they have got pressure on them to produce customers, so they have to do whatever they can to make it work.” With these business goals in mind, “[the designers] are not thinking about others as much as they are thinking of themselves.” This manipulative intent—and the

justification she felt from the designers’ perspective—made her feel that the creators were “valuing [her] for her money but not for being a person.” She concluded that these creators don’t have “strong values about right and wrong” and that, if they had a strong will to serve their customers, “[these creators] couldn’t stay in [their] job if it was always going to be about money and survival.” Amy clearly expressed that the creators had to trade off user and social values for business and monetary benefit.

6.3.2 Vignette 2: Gamified “Stepping for Cash”. Liu has long recognized the gamification of various components in the QuBu fitness application that she uses to “step for cash.” Initially, she thought QuBu was trustworthy because it was recommended by her friends, but after using the app, Liu discovered she had to “发展下线 (market the app to more people)” to keep up with her progress and earn more “糖果 (candies—a digital currency within QuBu, which could be exchanged for cash or products).” Later, Liu stopped using QuBu since she was “恐惧 (fearful)” that being “过多的参与进去 (too involved in [QuBu])” might get her into trouble, and she concluded that QuBu wasn’t simply a fitness app as advertised: it was “变相的传销 (a variation of multi-level marketing, which is explicitly prohibited by the Chinese government).” Similar to Amy, Liu did not sense the manipulative intent in her “general conclusion” early on, and the consequences of being manipulated were only experienced after extended interaction, including “undesired interactions” and “negative results from interaction.” She expressed strong feelings toward QuBu’s creators, saying that they are “令人深恶痛绝的 (abhorrent)” and mentioning that it was “可恶 (hateful)” when designers and stakeholders make business decisions based primarily on “经济利益 (financial incentives).” She stated that manipulative technology creators are “[道德]概念很模糊 (ambiguous about ethics)” and “急功近利的追求财富 ([willing] to do whatever they can to pursue financial success in a short time frame).” Therefore, they were willing to enter “gray areas” of the law and take advantage of spaces where “中国的很多制度还不健全 (there isn’t a lot of regulation in China [for technology manipulation]).” On the other hand, Liu blamed herself for not being careful—because it was she who wanted to use the application and “想赚钱 (make money [through QuBu])” in the first place. She elaborated on this assessment, noting that the creators of QuBu “没偷没抢 (did not break any laws)”; therefore, Liu had trouble assigning full blame to QuBu’s creators for manipulating her because “很难界定 (it was hard to decide from what perspective to evaluate [them]).”

6.3.3 Vignette 3: Forced Collection of Personal Information. Wang has been concerned about enabling phone permissions for years due to his fears of being tracked and potentially manipulated by others. He shared an experience with a vehicle rental application, where he had to allow several smartphone permissions that he felt were unnecessary. Wang considered using a similar application as an alternative; however, in that situation, the vehicle rental app that required these permissions was his only option. Therefore, Wang felt he was “被逼无奈(helpless and forced to [enable permissions without a choice]).” Wang further expressed that he was frequently concerned about apps having access to his personal data, because he worried about “信息泄露([apps] leaking his information)” and “有人把它收集起来对我做个画像...那么可能就会有损我的个人信息安全(intentional data collection, which can be used to create a behavior model and harm my personal information security).” Because Wang sensed the manipulative intent through an “undesired interaction,” his feelings of being manipulated were heightened, even though he had no option to use a competing app or service. Wang blamed technology designers and stakeholders for this experience that he felt was manipulative. He pointed toward the issue of the creators’ ethical awareness, further illustrating his perception by citing a Chinese proverb: “不为五斗米折腰(do not bow for five bushels of rice; meaning that people should not lose their dignity because of money and do something that is ethically wrong).” He felt that these creators were “短视的(short-sighted)” because they focused on “蒙一下是一下(tricking as many people as they can)” in exchange for monetary incentives. If these creators wanted to build a “百年老店(evergreen brand),” Wang claimed, the company would have to “把服务持续的提供给大家(provide [high-quality] services to customers consistently).” Additionally, compared with the ethical responsibility of designers, Wang thought that stakeholders should bear more responsibility for disincentivizing technology manipulation. He believed that “设计师想把产品做的更好(designers wanted to create better products),” but they could “受资本的驱动(be under the stress of capitalists).” In conclusion, he felt that designers and developers had the responsibility to “平衡(balance)” stakeholders’ desires and users’ rights to reach “互相说服(mutual agreement).”

7 DISCUSSION

7.1 Linking Experiences of Manipulation to Dark Patterns Strategies

While we did not directly ask participants about their experiences with “dark patterns,” we did use “felt manipulation” as a proxy to investigate how dark-patterns-informed digital products might be experienced by end users. In this subsection, we seek to link our findings to broader notions of dark patterns in the literature, noting where dark patterns strategies align with users’ perceptions of felt manipulation and experiences of manipulation over time.

First, we draw together the perceptions of manipulation we shared in Section 4.1 with the dark patterns literature and the previous end user findings of Maier and Harr [33] (e.g., notions of perception that include impressions, assessment, balance, and acceptability). The majority of the issues that drove users’ perception of felt manipulation included spaces where users had to engage in an assessment of a context or experience, including the connection of broader themes of past personal and societal experiences and the desire for personal control. The examples we have collected from participants demonstrate this ongoing assessment of digital experiences and balancing of value against perceived threat to decide whether to more fully engage with and use the digital product. Threats that emerged from the users’ assessment included data collection and use at personal and “big data” scales (collection of personal information, threats of big data); an overall perception that resulted from a contextually aware judgment that included context, user experiences or reviews, and degree of familiarity (overall perception of trust and distrust); and the use of products whose functionality and risk assessment may change over time due to monetization strategies (use of freemium products). The users’ engagement with different forms of assessment and balancing of user needs against digital product capabilities and risks builds upon these end user perceptions as defined by Maier and Harr [33], while also providing additional points of departure by which these assessments and decisions might be made.

Second, we compare the emergence of felt manipulation we shared in Section 4.2 with known dark patterns strategies from Gray et al. [26] to better identify where awareness of different strategies might emerge along this temporal progression. Based on our card sort, we identified 24 instances of felt manipulation as an initial judgment, while 130 instances of felt manipulation occurred after initial judgment but prior to any user interaction (including inspection of details, felt persuasion, and general conclusion). This finding points towards broad awareness that something is “off” or “not correct,” but often without more precise language to describe what is driving the feeling that the user is being manipulated. This aligns well with dark patterns strategies such as sneaking and interface interference, where user behaviors are being nudged, but in ways that are intended to be largely undetectable without further investigation. Other dark patterns strategies such as obstruction or forced action might only be detected as potentially manipulative once a user engages in an interaction and receives a negative result (n=24) or when a user experiences an undesired or unnecessary interaction (n=30). Similarly, the dark pattern strategy of nagging could only be experienced over time, resulting in impacts on only the latter half of the manipulative continuum (Table 1).
What this analysis and comparison with dark patterns strategies begins to reveal in greater detail is the role of temporality, awareness, and translation of emotions into designer intent that occurs rapidly in an end user’s engagement with technology. While end users may not have sufficient language to describe manipulation in an overarching and conceptual sense—such as the driving language of “dark patterns”—they are nevertheless able to identify a range of issues that cause them to suspect manipulative intent, and can share examples of this manipulation that extend across the entire temporal user journey. This finding aligns with Maier and Harr [33], positioning user impressions, assessment, and identification of acceptability as key factors in deciding whether to use a system (if such a choice is possible), or whether detected manipulative intent warrants further scrutiny, care, or protection.

7.2 Felt Manipulation as a Potential Platform for User Advocacy

So what should we do about the finding that 82.24% of users mistrusted smartphone applications and 89.3% of users mistrusted websites at least “sometimes”? This finding aligns with Di Geronimo et al. [18] in the context of smartphone applications, where they found that 95% of the apps they evaluated exhibited one or more dark patterns—but what recourse do users have, even if they are able to monitor their experiences and identify sources and points of manipulation? We propose the use of our initial frameworks of manipulative perception and temporal progression of felt manipulation as an important first step in empowering users to take control of their digital experiences. To name something—assigning it the label of “manipulative,” “dark,” “sneaky,” or downright evil—is a necessary act of awareness that can lead to future advocacy efforts. While many of the techniques that users identified are not strictly illegal in most regions of the world, these problematic practices can be made illegal through the efforts of a wide range of individuals, including users themselves. As has become clear in the wake of GDPR, end users can be given rights that allow them to exercise control over parts of their digital experience (cf. Recital 59 [4]). While many of the processes by which users can exercise these rights are still under active construction or negotiation, this might provide one way to envision the ability of users to collectively identify and characterize certain digital products—or strategies contained within digital products—as suspect, potentially manipulative, or illegal, knowing that many companies might not have an interest in changing their practices on their own.

While we have identified multiple sources of “felt manipulation,” advocacy might be taken up along multiple different lines. One possible direction is simply the “mainstreamification” of the term “dark patterns,” allowing users to co-opt the evolving language proposed by a range of HCI and design scholars [14, 26, 28, 34, 37]. This strategy may succeed along some dimensions, raising awareness of manipulative strategies built into technologies, while also perhaps leading to a watering down of what is truly a psychological trick [10] or an imbalance of shareholder and user value [26]. Even with crowdsourced additions to sites that collect dark patterns exemplars (@darkpatterns on Twitter¹; a searchable set of examples from Gray and colleagues²; discussions on the “r/assholedesign” subreddit³ [25]; a community to find “healthy mobile games”⁴), it is not yet clear whether merely collecting and voting on manipulative exemplars necessarily leads to feelings of empowerment and advocacy. Another possible direction may be a more lightweight means of awareness, perhaps focusing on a different set of language that may be taken up more colloquially—perhaps as an instantiation of a mental model of persuasion, addiction, or manipulation that is increasingly part of the popular discourse around technology ethics (cf. the problematic yet popular documentary The Social Dilemma⁵). A shift in language might foreground capitalist goals as being focused on persuasion (in the best of cases), manipulation (in typical cases), or coercion (in cases where there are no alternative options), allowing users an expanded vocabulary to link their feelings of unease with possible design intentions.

¹https://twitter.com/darkpatterns
²https://darkpatterns.uxp2.com/patterns/
³https://www.reddit.com/r/assholedesign/
⁴https://www.darkpattern.games
⁵https://www.thesocialdilemma.com

8 FUTURE WORK

Our findings and discussion point towards numerous areas for future work, which we can only outline in the broadest possible way here. Because the intersection of dark patterns, design intent, and end user perceptions has been historically understudied, there are numerous research directions that could focus on: 1) characterizing end user emotions and sensitivity when perceiving “dark” UX experiences; 2) identifying factors that contribute to lesser or greater sensitivity to manipulative strategies through broad-scale experimental and survey research, including factors such as age, profession, technology experience, and ethical training; 3) means of qualitatively assessing and identifying manipulative strategies through user testing or evaluation as part of a product lifecycle; 4) describing specific aspects of felt manipulation both theoretically and conceptually; 5) identifying opportunities for advocacy in identifying problematic strategies and contributing to policy development, potentially through participatory or crowdsourced means; 6) assessing and creating new supports for design students and practitioners to better understand how and where felt manipulative intent might emerge; and 7) building a convergent and transdisciplinary description of what factors are likely to contribute to a holistic assessment of manipulation from multiple disciplinary and stakeholder perspectives.

To demonstrate these opportunities, we elaborate on two of these directions. For example, research describing specific facets of felt manipulation could build on our findings and the work of Di Geronimo et al. [18], based on the emergent insight that greater user sensitivity to the presence of dark patterns may lead to greater success in detecting such patterns in future digital product experiences. Similar and more extensive experimental assessments of specific dark patterns across specific moments in the perception continuum—from initial judgment to extended interaction—might reveal spaces where users are already well primed to identify manipulative practices, perhaps leveraging the early experimental work of Luguri and Strahilevitz [31] that focuses on the sensitivity and potential harm of individual dark patterns strategies. As an alternate example, future work connecting user awareness with public policy and governance may facilitate the translation of user perception into actionable legal recourse, similar to the data processing governance structure currently in place to support GDPR. Work in this area might combine sensitization towards manipulative experiences, documented above, with platforms to organize collective action campaigns, potentially using insights from digital civics, participatory design, and crowdwork to identify problematic strategies or products and argue for their change or discontinuance.

9 CONCLUSION

In this paper, we have reported on the results of a survey and follow-up interviews conducted with English- and Mandarin Chinese-speaking participants, further documenting the felt characteristics of digital product experiences that relate to manipulation and the presence of dark patterns. We described a range of perception characteristics that lead to an assessment of manipulation, including a temporal progression of these experiential characteristics that reinforce a judgment of manipulation from initial perception through extended interaction. We built upon these perceptions of manipulation to reveal common emotions experienced by end users, the ways in which these users described the designer intent of these systems, and the individuals they hold responsible for manipulative experiences. Based on these findings, we offer additional opportunities to intersect research on dark patterns and policy with these assessments of manipulation, providing new avenues for supporting user advocacy and agency.

ACKNOWLEDGMENTS

We gratefully acknowledge the efforts of Madison Fansher in developing a pilot version of the survey, later expanded with the help of undergraduate researcher Lucca McKay. We also thank Zexi (Jessie) Zhou for her help in translating the survey and interviews. This work is funded in part by the National Science Foundation under Grant Nos. 1657310 and 1909714.

REFERENCES

[1] [n.d.]. Feelings Wheel. http://feelingswheel.com/. Accessed: 2020-10-13.
[2] [n.d.]. First Things First Manifesto 2000. http://www.eyemagazine.com/feature/article/first-things-first-manifesto-2000. Accessed: 2020-10-10.
[3] [n.d.]. General Data Protection Regulation (GDPR) – Official Legal Text. https://gdpr-info.eu/. Accessed: 2020-9-19.
[4] [n.d.]. Recital 59 - Procedures for the Exercise of the Rights of the Data Subjects. https://gdpr-info.eu/recitals/no-59/. Accessed: 2020-10-15.
[5] 2018. California Consumer Privacy Act (CCPA). https://oag.ca.gov/privacy/ccpa. Accessed: 2020-9-19.
[6] Felicia Ackerman. 1995. The concept of manipulativeness. Philosophical Perspectives 9 (1995), 335–340.
[7] René Bohnsack and Meike Malena Liesner. 2019. What the hack? A growth hacking taxonomy and practical applications for firms. Business Horizons 62, 6 (Nov. 2019), 799–818. https://doi.org/10.1016/j.bushor.2019.09.001
[8] Cennydd Bowles. 2018. Future Ethics. NowNext Press. https://market.android.com/details?id=book-fAy_vAEACAAJ
[9] Harry Brignull. 2011. Dark Patterns: Deception vs. Honesty in UI Design. Interaction Design, Usability 338 (2011).
[10] Harry Brignull. 2013. Dark Patterns: inside the interfaces designed to trick you. The Verge (2013).
[11] Harry Brignull, Marc Miquel, Jeremy Rosenberg, and James Offer. 2015. Dark Patterns - User Interfaces Designed to Trick People.
[12] Peter Buwert. 2018. Examining the Professional Codes of Design Organisations. In Proceedings of the Design Research Society. https://doi.org/10.21606/dma.2017.493
[13] P F Carspecken. 1996. Critical ethnography in educational research: A theoretical and practical guide. Routledge, New York.
[14] Michael Chromik, Malin Eiband, Sarah Theres Völkel, and Daniel Buschek. 2019. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems. In IUI Workshops.
[15] Devin Coldewey. 2018. Students confront the unethical side of tech in ‘Designing for Evil’ course. TechCrunch (May 2018). http://techcrunch.com/2018/05/29/students-confront-the-unethical-side-of-tech-in-designing-for-evil-course/
[16] Alan S Cowen and Dacher Keltner. 2017. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proc. Natl. Acad. Sci. U. S. A. 114, 38 (Sept. 2017), E7900–E7909. https://doi.org/10.1073/pnas.1702247114
[17] Christoph Sebastian Deterding, Jaakko Stenros, and Markus Montola. 2020. Against “Dark Game Design Patterns”. In DiGRA ’20 - Abstract Proceedings of the 2020 DiGRA International Conference.
[18] Linda Di Geronimo, Larissa Braz, Enrico Fregnan, Fabio Palomba, and Alberto Bacchelli. 2020. UI Dark Patterns and Where to Find Them: A Study on Mobile Applications and User Perception. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376600
[19] B J Fogg. 2003. Persuasive Technology: Using Computers to Change What We Think and Do. 1–282 pages. https://doi.org/10.1016/B978-1-55860-643-2.X5000-8
[20] Lothar Fritsch. 2017. Privacy dark patterns in identity management. In Open Identity Summit (OID), 5–6 October 2017, Karlstad, Sweden. Gesellschaft für Informatik, 93–104.
[21] Ken Garland. 1964. First things first manifesto. http://www.designishistory.com/1960/first-things-first/
[22] D W Gotterbarn, Bo Brinkman, Catherine Flick, Michael S Kirkpatrick, Keith Miller, Kate Vazansky, and Marty J Wolf. 2018. ACM code of ethics and professional conduct. (2018). https://dora.dmu.ac.uk/handle/2086/16422
[23] Paul Grassl, Hanna Schraffenberger, Frederik Zuiderveen Borgesius, and Moniek Buijzen. 2020. Dark and bright patterns in cookie consent requests. (July 2020). https://doi.org/10.31234/osf.io/gqs5h
[24] Colin M Gray and Shruthi Sai Chivukula. 2019. Ethical Mediation in UX Practice. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM Press. https://doi.org/10.1145/3290605.3300408
[25] Colin M Gray, Shruthi Sai Chivukula, and Ahreum Lee. 2020. What Kind of Work Do “Asshole Designers” Create? Describing Properties of Ethical Concern on Reddit. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (Eindhoven, Netherlands) (DIS ’20). Association for Computing Machinery, New York, NY, USA, 61–73. https://doi.org/10.1145/3357236.3395486
[26] Colin M Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L Toombs. 2018. The Dark (Patterns) Side of UX Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 534:1–534:14. https://doi.org/10.1145/3173574.3174108

[27] Colin M Gray, Cristiana Santos, Nataliia Bielova, Michael Toth, and Damian Clifford. 2020. Dark Patterns and the Legal Requirements of Consent Banners: An Interaction Criticism Perspective. arXiv:2009.10194 [cs.HC] http://arxiv.org/abs/2009.10194
[28] Saul Greenberg, Sebastian Boring, Jo Vermeulen, and Jakub Dostal. 2014. Dark Patterns in Proxemic Interactions: A Critical Perspective. In Proceedings of the 2014 Conference on Designing Interactive Systems (Vancouver, BC, Canada) (DIS ’14). ACM, New York, NY, USA, 523–532. https://doi.org/10.1145/2598510.2598541
[29] Makena Kelly. 2019. Big Tech’s ‘dark patterns’ could be outlawed under new Senate bill. https://www.theverge.com/2019/4/9/18302199/big-tech-dark-patterns-senate-bill-detour-act-facebook-google-amazon-twitter. Accessed: 2020-9-19.
[30] C Lacey and C Caudwell. 2019. Cuteness as a ‘Dark Pattern’ in Home Robots. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 374–381. https://doi.org/10.1109/HRI.2019.8673274
[31] Jamie Luguri and Lior Strahilevitz. 2019. Shining a Light on Dark Patterns. (Aug. 2019). https://doi.org/10.2139/ssrn.3431205
[32] Diana MacDonald. 2019. Anti-patterns and dark patterns. In Practical UI Patterns for Design Systems: Fast-Track Interaction Design for a Seamless User Experience, Diana MacDonald (Ed.). Apress, Berkeley, CA, 193–221. https://doi.org/10.1007/978-1-4842-4938-3_5
[33] Maximilian Maier and Rikard Harr. 2020. Dark Design Patterns: An End-User Perspective. Human Technology 16, 2 (2020), 170–199. https://doi.org/10.17011/ht/urn.202008245641
[34] Arunesh Mathur, Gunes Acar, Michael J Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. 2019. Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. (July 2019). arXiv:1907.07032 [cs.HC]
[35] Alexander Mirnig and Manfred Tscheligi. 2017. (Don’t) Join the Dark Side: An Initial Analysis and Classification of Regular, Anti-, and Dark Patterns. In PATTERNS 2017: Proceedings of the 9th International Conference on Pervasive Patterns and Applications. 65–71.
[36] Carol Moser. 2020. Impulse Buying: Designing for Self-Control with E-commerce. Ph.D. Dissertation. University of Michigan.
[37] Arvind Narayanan, Arunesh Mathur, Marshini Chetty, and Mihir Kshirsagar. 2020. Dark Patterns: Past, Present, and Future. Queue 18, 2 (April 2020), 67–92. https://doi.org/10.1145/3400899.3400901
[38] Chris Nodder. 2013. Evil by Design: Interaction Design to Lead Us into Temptation. John Wiley & Sons, Inc., Indianapolis, IN. 303 pages. https://market.android.com/details?id=book-46Wl1G9yJUoC
[39] Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, and Lalana Kagal. 2020. Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376321
[40] Brandon Read. 2018. TurboTax: a critical analysis & UX design teardown - UX Collective. https://uxdesign.cc/turbotax-design-1a37356adc61. Accessed: 2020-10-10.
[41] Katie Shilton. 2013. Values Levers: Building Ethics into Design. Science, Technology & Human Values 38, 3 (May 2013), 374–397. https://doi.org/10.1177/0162243912436985
[42] Katie Shilton. 2018. Engaging Values Despite Neutrality: Challenges and Approaches to Values Reflection during the Design of Internet Infrastructure. Science, Technology & Human Values 43, 2 (March 2018). https://doi.org/10.1177/0162243917714869
[43] Than Htut Soe, Oda Elise Nordberg, Frode Guribye, and Marija Slavkovik. 2020. Circumvention by design—dark patterns in cookie consents for online news outlets. (June 2020). arXiv:2006.13985 [cs.HC] http://arxiv.org/abs/2006.13985
[44] Marc Steen. 2015. Upon Opening the Black Box and Finding It Full: Exploring the Ethics in Design Practices. Science, Technology & Human Values 40, 3 (May 2015), 389–420. https://doi.org/10.1177/0162243914547645
[45] Daniel Susser, Beate Roessler, and Helen Nissenbaum. 2019. Technology, autonomy, and manipulation. Internet Policy Review 8, 2 (2019). https://doi.org/10.14763/2019.2.1410
[46] Benjamin Tollady. 2016. Under the influence: Dark patterns and the power of persuasive design. https://uxmastery.com/dark-patterns-and-the-power-of-persuasive-design/
[47] Teun A Van Dijk. 2006. Discourse and manipulation. Discourse & Society 17, 3 (2006), 359–383.
[48] Alyssa Vance. 2016. Dark Patterns by the Boston Globe. https://rationalconspiracy.com/2016/04/24/dark-patterns-by-the-boston-globe/
[49] Ari Ezra Waldman. 2020. Cognitive biases, dark patterns, and the ’privacy paradox’. Current Opinion in Psychology 31 (Feb. 2020), 105–109. https://doi.org/10.1016/j.copsyc.2019.08.025
[50] David Watson, Lee Anna Clark, and Auke Tellegen. 1988. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology 54, 6 (1988), 1063–1070. https://doi.org/10.1037/0022-3514.54.6.1063
[51] Markus Weinmann, Christoph Schneider, and Jan Vom Brocke. 2016. Digital Nudging. Business & Information Systems Engineering 58, 6 (Dec. 2016), 433–436. https://doi.org/10.1007/s12599-016-0453-1
[52] T Martin Wilkinson. 2013. Nudging and manipulation. Political Studies 61, 2 (2013), 341–355.
[53] Henry Wong. 2020. Inside the deceptive design world of dark patterns. https://www.designweek.co.uk/issues/9-15-march-2020/dark-patterns-design/. Accessed: 2020-3-10.
[54] Richmond Y Wong and Deirdre K Mulligan. 2019. Bringing Design to the Privacy Table: Broadening “Design” in “Privacy by Design” Through the Lens of HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM Press, New York, NY. https://doi.org/10.1145/3290605.3300492

[55] Richmond Y Wong, Deirdre K Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (Dec. 2017), Article 111. https://doi.org/10.1145/3134746
[56] José P Zagal, Staffan Björk, and Chris Lewis. 2013. Dark Patterns in the Design of Games. In Foundations of Digital Games 2013.
[57] Shoshana Zuboff. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.