LocalWiki: A Review of Genre Ecology Instability across Classes of Participants

by

Michael R. Trice, M.A.

A Dissertation

In

Technical Communication and Rhetoric

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY

Approved

Dr. Brian Still Chair of Committee

Dr. Rich Rice

Dr. Liza Potts

Mark Sheridan Dean of the Graduate School

August, 2019

Copyright 2019, Michael R. Trice

ACKNOWLEDGMENTS

How far back does one go to write acknowledgements of this magnitude? It would be far less than hyperbole to list every mentor and teacher since grade school—and family, well, I come from a large family. Let me simply start by saying that I know and recognize the impossible network of people, opportunity, and affordance that allowed this task to be completed.

More specifically, let me start with my committee. Dr. Still's patience and assertiveness have no doubt been the driving external forces for completion of this dissertation. More importantly, however, his introduction of the formal application of Usability Studies was eye-opening in 2010. The ability to blend qualitative and quantitative approaches in considering the power of networks as a matter of genre and agency filled so many gaps in my early fumbling to blend philosophy and rigorous research. His commitment to rigor and his understanding that sometimes you make the right choices for your family first have been instrumental in shaping my last, well, nine years. I want to thank Dr. Rice for his frequent interest and check-ins, but also for his meticulous care in calling out the weaknesses of vital arguments within the dissertation.

I also want to thank Dr. Potts for the pressure to finish this task and for reminding me that one cannot race too far forward without resolving what rests in the past. I also want to thank Dr. Cargile Cook for support in framing this dissertation and for initial feedback on some of the most crucial chapters.

I want to thank the entire TCR department at Texas Tech for allowing me to take my interest in theory and demonstrating how to ground it within methods, and for providing a home to those of us who had long spent time in industry to return to the academic institution in a way that made the most of our professional experience. I most certainly want to thank all my fellow students in the TCR program for all their support, most especially Charity Tran, Dr. Andrea Beaudin, Dr. Jeremy Huston, Dr. Jeannie Bennet, Dr. Chris Andrews, and—what's his name; the bald one—oh, yes, Dr. Peter England. I also want to thank my current home of WRAP at MIT. The support of Dr. Lane, Dr. Karatsolis, and Dr. Stickgold-Sarah (that is weird to type) was beyond instrumental to both finishing this dissertation and helping me grow as a scholar over the last six years. I should also thank the Hambo Institute for keeping me sane and entertained. The same for all the tables that came before them. I also want to thank the LocalWiki founders for their time, patience, and assistance. Thank you, Mike Ivanov and Philip Neustrom.

More than anything I owe a huge debt to my family. All four kids paid a particular price so that I could finish this task, and I thank them for their support—even when the older three sometimes forget this task hasn't been completed yet. I, of course, more than anything else, thank Shannon. The work she has done and the support she has provided go well beyond what could possibly have been expected. This step would not have been possible without her love and grace.

Finally, I want to thank my moms and sister. Sherry Trice and Sharon Huddleston-Cheatham, I hope I've done you both proud. Tracy, I will always love and miss you, and whenever it seemed too hard, I remembered your strength and simply tried to have half of that.


TABLE OF CONTENTS

ACKNOWLEDGMENTS ...... ii
ABSTRACT ...... viii
LIST OF TABLES ...... x
LIST OF FIGURES ...... xi
I. INTRODUCTION ...... 1
1.1 From DavisWiki to LocalWiki to LocalWiki Denton ...... 3
1.2. Why a LocalWiki? ...... 6
1.3 Participant Resistance ...... 9
1.4 Central Questions ...... 10
1.5 The Approach ...... 12
1.6 A Word on the Timing of Data Collection ...... 14
1.7 Overview of Chapters ...... 14
1.7.1 Chapter 2: Literature Review ...... 15
1.7.2 Chapter 3: Methods ...... 15
1.7.3 Chapter 4: Interview Results ...... 16
1.7.4 Chapter 5: Usability Testing Results ...... 16
1.7.5 Chapter 6: Analysis of Participant Perceptions and Behaviors ...... 17
1.7.6 Chapter 7: Discussion ...... 17
II. REVIEW OF THE LITERATURE ...... 19

2.1 Platform Governance and Participatory Culture ...... 20
2.2 Technical Communication and the ...... 27
2.3 Platform and Participant as Genre Actors ...... 32
2.4 Toward a Method of Tools, Convention, and People ...... 36
2.4.1 Interviews ...... 37
2.4.2 Usability ...... 37
2.4.3 Mapping ...... 39
III. METHODS ...... 43

3.1 Examining the Parallel Role of Interviews and Usability in Examining the LocalWiki Denton ...... 46
3.2 Call for Participants ...... 49
3.2.1 Participants ...... 49
3.2.2 Selection ...... 51
3.2.3 Order of Participation ...... 52
3.3 Phase One: Interviews ...... 53
3.3.1 Question Design ...... 53
3.3.2 Interview Process ...... 55
3.3.3 Interview Analysis ...... 55
3.4 Phase Two: Usability Testing ...... 56
3.4.1 Why Use SUS ...... 57
3.4.2 How Many Participants ...... 59
3.4.3 Testing Process ...... 61
3.4.4 Evaluation Process ...... 62
IV. TURNING INTERVIEWS INTO TASKS ...... 66

4.1 Questions ...... 67
4.1.1 New Participant Questions ...... 67
4.1.2 Experienced Participant Questions ...... 68
4.1.3 Design Participant Questions ...... 68
4.2 Population ...... 70
4.2.1 New Participants ...... 71
4.2.2 Experienced Participants ...... 71
4.2.3 Design Participants ...... 71
4.3 Results ...... 72
4.3.1 New Participants ...... 72
4.3.2 Experienced Participants ...... 73
4.3.3 Design Participants ...... 75
4.4 Building the Tasks ...... 77
4.4.1 Task One: Find the Hours for Open Mic Night at The Garage [New] ...... 77
4.4.2 Task Two: Change the Date Found on “The Garage” Page [Experienced] ...... 79
4.4.3 Task Three: Add a Link from “The Garage” Page to any Other Page [Experienced] ...... 80
4.4.4 Task Four: Find the Businesses on The Square [New] ...... 81
4.4.5 Task Five: Add a New Page for a Location to the Wiki (Can be a Fake Location) [Design] ...... 82
4.4.6 Task Six: Address Any Concerns You Have with the Cranky Goose Page [Design] ...... 83
V. USABILITY TESTING RESULTS ...... 86
5.1 Defining the Groups ...... 88

5.2 Refining the Results ...... 90
5.3 Defining the Tasks ...... 92
5.3.1 Task One: Find the Hours for Open Mic Night at the Garage [New] ...... 93
5.3.2 Task Two: Change the Date Found on The Garage Page [Experienced] ...... 93
5.3.3 Task Three: Add a Link from The Garage Page to any Other Page [Experienced] ...... 94
5.3.4 Task Four: Find the Businesses on The Square [New] ...... 95
5.3.5 Task Five: Add a New Page for a Location to the Wiki (Can be a Fake Location) [Design] ...... 96
5.3.6 Task Six: Address Any Concerns You Have with the Cranky Goose Page [Design] ...... 97
5.4 Performance Overview by Participant Groups ...... 98
5.4.1 Task Performance ...... 98
5.4.2 SUS Scores ...... 104
5.4.3 Post-task Surveys ...... 105
5.5 Performance in Context ...... 110
5.5.1 Qualitative Metrics for Task One ...... 110
5.5.2 Qualitative Metrics for Task Two ...... 113
5.5.3 Qualitative Metrics for Task Three ...... 115
5.5.4 Qualitative Metrics for Task Four ...... 118
5.5.5 Qualitative Metrics for Task Five ...... 119
5.5.6 Qualitative Metrics for Task Six ...... 122
VI. AN ANALYSIS OF THE DENTON LOCALWIKI ...... 125
6.1 Review of Participant Classes ...... 125
6.1.1 New Participants ...... 126
6.1.2 Review of Experienced Participants ...... 131
6.1.3 Design Participants ...... 136
6.1.4 Reviewing the Trends ...... 140
6.2 Reviewing the Tasks ...... 142
6.2.1 Task One: Find the Hours for Open Mic Night at the Garage ...... 143
6.2.2 Task Two: Change the Date Found on The Garage Page ...... 144
6.2.3 Task Three: Add a Link from The Garage Page to any Other Page ...... 145
6.2.4 Task Four: Find the Businesses on The Square ...... 146
6.2.5 Task Five: Add a New Page for a Location to the Wiki (Can be a Fake Location) ...... 148
6.2.6 Task Six: Address Any Concerns You Have with the Cranky Goose Page ...... 150

6.2.7 System Review ...... 153
6.3 Returning to the Research Question ...... 155
6.3.1 New Participant Maps ...... 156
6.3.2 Experienced Participant Maps ...... 159
6.3.3 Design Participant Maps ...... 161
6.3.4 Overall Participant Map ...... 163
VII. THE PREDICTABLE INSTABILITY OF GENRE ...... 167
7.1 In what ways do participants describe the platform environment and expectations for that environment? ...... 168
7.2 In what ways do participants’ actions align with their expectations for the platform and its environment? In what ways do they not align? ...... 171
7.3 In what ways do expectations and alignment vary by the classes of participants? ...... 173
7.4 Shifting Genre in the LocalWiki ...... 177
7.5 Governance and the LocalWiki ...... 179
7.6 Usability Studies and ANT as a Way to Investigate Platform Governance ...... 180
REFERENCES ...... 182

APPENDICES

A. OUTREACH SURVEY ...... 192

B. LOCALWIKI DENTON TEST PLAN ...... 194

C. INTERVIEW QUESTIONS ...... 198


ABSTRACT

This dissertation examines the ways in which a single online platform might exist in multiple states based upon the perceptions and experience of participants acting within the platform. Specifically, the study asks whether participants' conceptions of a platform allow a single digital platform to exist in a state of divergent genres dependent upon those participants' level of experience. For the purposes of this study, the experience levels explored are new participants, experienced participants, and participants with design knowledge, where design knowledge is defined as experience with PHP and MySQL. The study then maps participant networks to illuminate the agents and relationships they create. In addition, the study maps a network for the platform itself in an attempt to capture the platform's interaction with these participant classes.

The study interviewed 15 participants of varying experience (new, experienced, design) and then recruited members from these classes to perform tasks within a specific platform, the LocalWiki Denton. The expectations expressed in interviews were compared to the participants' task experience. From there, a series of Actor Network Theory (ANT) maps was developed to illuminate how each class perceived the network, along with a series showing how each class experienced the network via the usability tasks. Comparisons between experience classes of participants, and between expectation and experience, were then performed to determine the ways in which agency, purpose, and situational contingency are reflected in how participants perceived and interacted with the platform.

The study found that experienced participants make the best argument for a single genre that meets the expectations of the participant in both interview and performance within the platform studied. The study also reveals significant variance in how new and design participants describe a system versus how they interact with it. The mapping of these networks exposed numerous reasons for this variance, including the presence of "instability actors" that persisted through perception and interaction for many participants. "Instability actors" here refers to agents that do not exist within the platform but upon whom a particular participant class bases its contingency plans in both perception and experience. In the study, the clearest example of this phenomenon involved new participants predicting the existence of moderators within the wiki and making choices in task process based upon the existence of moderators even in the absence of evidence that such moderators existed. The study concludes by discussing how varying degrees of genre stability within a platform affect rising concerns about platform governance and moderation in online communities.


LIST OF TABLES

3.1. How participants were selected, broken down by outreach method ...... 52
3.2. Size of each group and effectiveness for capturing major issues ...... 60
3.3. Categories for the usability study and the metrics used to evaluate each category ...... 64
4.1. New Participant Interview Themes ...... 73
4.2. Experienced Participant Interview Themes ...... 75
4.3. Design Participant Interview Themes ...... 77
5.1. Survey Demographics ...... 89
5.2. Usability Test Demographics ...... 90
5.3. Time-on-Task in Minutes for All Classifications ...... 100
5.4. Rates of Task Failure, Success, and Success with Difficulty ...... 101
5.5. SUS Scores for Each Participant ...... 105
5.6. Search Terms Used in Task One ...... 111
5.7. Participants Who Noted Missing Information in Task One ...... 112
5.8. Participant Workflow for Task One ...... 113
5.9. Errors in Task Two ...... 114
5.10. Workflow for Task Two ...... 115
5.11. Errors in Task Three ...... 116
5.12. Workflow for Task Three ...... 117
5.13. Workflow for Task Four ...... 118
5.14. Features Used in Task Five ...... 120
5.15. Workflow for Task Five ...... 121
5.16. Features Edited During Task Six ...... 123
5.17. Workflow for Task Six ...... 124
6.1. Common Concepts for New Participants I ...... 129
6.2. Common Concepts for New Participants II ...... 130
6.3. Common Concepts for Expert Participants I ...... 134
6.4. Common Concepts for Expert Participants II ...... 135
6.5. Common Concepts for Design Participants I ...... 138
6.6. Common Concepts for Design Participants II ...... 139

LIST OF FIGURES

1.1. A screenshot of the original DavisWiki...... 4

1.2. A screenshot of LocalWiki’s LocalWiki Denton...... 5

4.1. Screenshot of Task 1. The highlighted text reads: “Open Mic Monday every week.” ...... 78

4.2. Screenshot of Task 2. The highlighted text indicates the table entry for date founded...... 79

4.3. Screenshot of Task 3. The link button can be seen as the first icon to the right of the styles dropdown menu...... 80

4.4. Screenshot of Task 4. The Square page is shown with List of Businesses header highlighted...... 81

4.5. Screenshot of Task 5. The figure shows the template options when creating a new page...... 83

4.6. Screenshot of Task 6. The highlighted text reads: “Don’t go hear just awful.” ...... 84

5.1. Screenshot of Task 1. The highlighted text reads: “Open Mic Monday every week.” ...... 93

5.2. Screenshot of Task 2. The highlighted text indicates the table entry for date founded...... 94

5.3. Screenshot of Task 3. The link button can be seen as the first icon to the right of the styles dropdown menu...... 95

5.4. Screenshot of Task 4. The Square page is shown with List of Businesses header highlighted...... 96

5.5. Screenshot of Task 5. The figure shows the template options when creating a new page...... 97

5.6. Screenshot of Task 6. The highlighted text reads: “Don’t go hear just awful.” ...... 98

5.7. Max Time between Inputs. This table shows the average maximum wait time per task...... 102


5.8. Total Mouse Clicks. This figure shows the average total mouse clicks across tasks...... 103

5.9. SUS Score. This graph charts SUS scores for all participants...... 104

5.10. Survey Results for Task 1. This displays average survey results for finding open mic night hours...... 106

5.11. Survey Results for Task 2. This displays average survey results for editing a table...... 107

5.12. Survey Results for Task 3. This displays average survey results for adding a link...... 107

5.13. Survey Results for Task 4. This displays average survey results for finding businesses on the Square...... 108

5.14. Survey Results for Task 5. This displays average survey results for creating a new page...... 109

5.15. Survey Results for Task 6. This displays average survey results for fixing a biased page...... 110

6.1. How New Participants Expected the Wiki to Look...... 157

6.2. How New Participants Experienced the Wiki...... 158

6.3. How Experienced Participants Perceived the Wiki...... 160

6.4. How Experienced Participants Experienced the Wiki...... 161

6.5. How Design Participants Perceived the Wiki...... 162

6.6. How Design Participants Experienced the Wiki...... 163

6.7. How Participants Experienced the Wiki...... 164

6.8. Instability agents in the wiki...... 165


CHAPTER 1

INTRODUCTION

The role of platforms in digital texts has increasingly become a subject of focus over the last 15 years. While as recently as 2004 Jenkins and Thorburn argued that concerns about radicalization in online spaces were overblown due to the continuing primacy of broadcast information, it would take only four years for Jonathan Zittrain (2008) to counter with his concept of netizenship. For Zittrain, Wikipedia's volunteer community spoke to a certain ideal blend of civic responsibility and non-governmental community governance that could inspire the future of the Internet away from corporate-governed spaces like MySpace, Facebook, and Twitter. Similarly, Coleman and Blumler (2009) would make their first argument for a BBC-like approach of a publicly funded, independent civic space, where citizens' fees could create a non-governmental organization devoted to a transparent, accountable space for civic discussion online. Gillespie (2010) also challenges our concept of platform, arguing that social media platforms like Twitter, YouTube, and Facebook gained enormous influence by subsuming and confusing what a platform means—whether digital, political, corporate, or civil. The debate about the conflation of the civil and the corporate in online platforms not only continues but has widened in scope and intensity (Potts, 2013; Tufekci, 2016; Gillespie, 2018; Faris et al., 2017; Edwards & Gelms, 2018; Sano-Franchini, 2018).

The field of Technical Communication has also focused on the rhetorical issues of platforms. The focus for TC has included both the public spaces discussed above (Jones, 2009; Potts, 2013; Ferro & Zachry, 2016; Vie, 2008) as well as business and local organizational interests in how platforms invoke civics and phronesis within those organizations (Hart-Davidson et al., 2008; Spinuzzi, 2008). The role of ethics in moderation, organization, and participant interaction on these platforms, and in Technical Communication broadly, has also become an area of increased interest (Haas & Eble, 2018; Jones, 2017; Agboka, 2014; Rose & Walton, 2015), though other fields have also focused heavily upon this intersection of platform moderation and participant accountability (Phillips, 2015; Tufekci, 2016; Chess & Shaw, 2015; Mortensen, 2016; Massanari, 2017). While I expand upon each of these interests within this dissertation, it is clear that tensions around participants and platforms are immediate and complicated in nature. They arise across types of platforms and focus on a range of issues, including civic participation, ethical moderation, community values, knowledge-making, and rhetorical interactions. In particular, these rhetorical interactions range across participant-platform, participant-participant, designer-participant, and designer-platform relationships; some would even argue platform-platform (Rivers & Söderlund, 2015).

In this dissertation I examine one instance of an online wiki in order to look at how participants shape moderation within that platform and how participants are shaped by the platform and their perceptions of it. What makes the selected wiki of particular interest is that it lacks formal moderation beyond the innate constraints of the software itself. The lack of clear human oversight, rules, and guidance affords an opportunity to see how participants envisioned and generated moderation and its ethic without explicit instruction. In this case, the platform is a community wiki for a small college town that served as the initial testbed for a wiki system that would be further implemented across the globe. In the next section, I explain some of the history of the Denton LocalWiki as well as the history of the LocalWiki platform itself.

1.1 From DavisWiki to LocalWiki to LocalWiki Denton

In 2004, a community wiki launched in Davis, California. The wiki, DavisWiki.org, was founded only three years after the launch of Wikipedia, and it would become the largest city-based wiki, earning local awards for service and generating more than 15,000 pages over its first ten years (LocalWiki, NA). The initial success of this project led its two founders to seek funding for the LocalWiki project. LocalWiki would come to serve as a platform for cities across the globe to create wikis similar to the DavisWiki (see Figure 1.1), though the LocalWiki founders desired to offer a wiki platform designed from the ground up to increase usability and promote a friendly interface (McGann, 2010).


Figure 1.1. A screenshot of the original DavisWiki.

In 2011, LocalWiki's first community went online in Denton, Texas under the direction of two local community members (Balderas, 2011). However, by 2012 both of these organizers had left Denton for other opportunities, leaving the LocalWiki Denton (see Figure 1.2) without any central guiding force beyond its current participants and those who might discover the site on their own. Such circumstances offered a fascinating opportunity to examine how participants within digital, civic platforms utilize these platforms without a formal moderating structure.


Figure 1.2. A screenshot of LocalWiki’s LocalWiki Denton.

LocalWiki launched promoting a specific vision for its community media project. The sites would allow local communities to organize knowledge on their own and decide what the parameters of that knowledge might be. Even though LocalWiki Denton started with organized meetups for the initial seeding of the wiki, this structure would fall away as the site went live and the organizers moved on from the college town. The shift in offline organization left LocalWiki Denton with a flat structure with regard to its governance and organization: there were no organizers, socially or within the platform, planning meetings to shape the rules or culture of the site.

Wikipedia is largely a managed process tied to key principles and overseen by tiered administrators, an Arbitration Committee, and an elected Board of Directors; while the content is publicly generated and all who wish have a say in making policies, the governance is not wholly flat (Wikipedia, NA). The same is not the case for LocalWiki. While developers managed the local sites at a technical level, governance and site content were left entirely in the hands of each community. Again, in the case of LocalWiki Denton, this meant no formal governance once the original organizers moved on.

1.2. Why a LocalWiki?

A primary goal of LocalWiki was to address gaps in Wikipedia as a knowledge base. While Wikipedia contains millions of pages, the governance of the site sets key limits on the type of knowledge contained within the wiki. These constraints include, but are not limited to, the following "five pillars" (Wikipedia, NA):

• Wikipedia is an encyclopedia—Wikipedia exists within a precise and clearly defined genre space. Genre as first principle explicitly defines the purpose and social context of the site. As discussed in Chapter 2, this clear purpose has also reflected upon the genre expectations of other wikis.

• Wikipedia is written from a neutral point of view—neutrality is the primary guiding style of Wikipedia. It drives its source requirements, notability requirements, and much of the editorial debate on the site.

• Wikipedia is free content that anyone can use, edit, and distribute—the openness of Wikipedia is another feature that has shaped public views of wikis.

• Wikipedia's editors should treat each other with respect—the inclusion of etiquette as a pillar speaks to why Zittrain viewed Wikipedians as one of the first fully formed netizenships.

• Wikipedia has no firm rules—Wikipedia has rules, but those rules and guidelines are decided upon by the community. A strong bureaucracy guides the shaping of these rules, though the site has nothing approaching a constitution save the five pillars.

In addition to the five pillars, Wikipedia relies upon a three-pronged content test: neutral point of view, verifiability, and no original research (Wikipedia, NA). Neutral point of view is discussed above, but verifiability and no original research bear directly upon the purpose of LocalWiki as a response to Wikipedia. Verifiability means that information is attributed to a reliable source, including mainstream newspapers, academic texts, magazines, and books from respected publishing houses. No original research means that Wikipedia allows neither primary sources nor interpretations of primary sources except those from verifiable secondary sources. These three content guidelines create a corollary concern referred to as notability on Wikipedia—notability states that a topic only warrants its own page if the topic can be sourced to verifiable and reliable secondary sources (Wikipedia, NA). Topics or information that cannot meet this standard must be excluded from Wikipedia.

These guidelines serve Wikipedia's global encyclopedic mission relatively well, but they also represent significant barriers preventing a wide body of knowledge from entering the site. The possible types of knowledge excluded by these practices are far from incidental:

• Communities too small or poor to provide their own reliable sources in the form of news coverage.

• First-hand accounts of any type, but particularly of issues without global notability.

• Reviews and opinions on any matter, including those of global importance.

That Wikipedia operates with such a strict rule set raises important questions about what it means to be a public wiki in an age dominated by Wikipedia and how participants of wikis that host different forms of knowledge operate within those wiki environments. How do these participants articulate their goals and concerns about civic wikis that are not Wikipedia? And how do those goals and concerns shape their user experience within the public wiki?

Such questions take on greater importance because wikis function differently than most other social media. While Twitter, Facebook, Reddit, and Disqus serve as conversation spaces, the wiki is at its core a knowledge base meant, in one form or another, to acknowledge that information has transitioned into knowledge and to store it for categorical retrieval—categorical retrieval (topics of knowledge) being a purpose social media in general performs inefficiently versus temporal retrieval (timelines and events).

The dissertation explores and explains important differences between wikis and social media as social acts of communication, as well as why fundamental genre differences place a particular burden upon our need to more closely examine public wikis outside of Wikipedia in order to understand how smaller knowledge bases are formed, curated, and utilized by their participant communities. Beyond the exploration of public wikis, the dissertation explores a method for analyzing how participants interact with online platforms by applying usability methods to mapping participant networks and comparing how different types of participants experience different networks within the same platform. The comparison of these maps is then explored to determine how such maps might be used to better shape our understanding of platform, participant, and designer agency within online systems.

1.3 Participant Resistance

One theme explored within this study is that participants often bring to many online systems a set of goals and methods distinct from what the designers intend. Rather than viewing these goals as something the platform must address, this study investigates ways in which participants resist the system to fit their own needs and how the platform resists and aids participants in doing so. The issue finds an acute setting in the LocalWiki Denton since the LocalWiki founders intended for their civic wikis to operate with offline organizational groups and leaders. However, all public systems face these issues of resistance from their participant communities. Wikipedia frequently deals with parody editing (Chaurasia, 2017). Twitter's troll problems are well known (Bahadur, 2015). Reddit experienced a user blackout over site changes in 2015 (Couts, 2015). Even 4chan's founder, Christopher "moot" Poole, moved on when managing illicit conduct on the site, where nearly anything goes, became too much to manage (Kushner, 2015). Again and again, participant social resistance has driven the narrative around the social web.

Yet participant resistance and hacks need not be exclusively negative nor destructive in nature. The LocalWiki Denton wouldn't exist without its participants' ability to somehow soldier on without visible governance. Thus, the ability to acknowledge the importance of participant design goals in these systems is vital to understanding how and when participant resistance should be accommodated within the design of the system. Understanding such trends creates a greater level of audience awareness and potentially offers insights into how best to create content management systems.

1.4 Central Questions

As stated earlier, the dissertation examines ways in which interaction between participants and platform informs the experience of participants, particularly as that experience relates to governance and community perceptions of online content systems. When connected to a community ethos like Zittrain's netizenship or even Negroponte's netiquette, Technical Communication and Rhetoric might well describe the intersection as phronesis. Thus, the dissertation looks at how participants and platforms construct a sense of wise behavior and community ethos in their interaction with and resistance to one another. One might even call this a study in social usability—the manner in which platforms allow participants to complete community rather than technical tasks and how platforms set expectations for those community tasks. To explore these issues, the dissertation asks three questions:

1. In what ways do participants describe the platform environment and expectations for that environment?

2. In what ways do participants' actions align with their expectations for the platform and its environment? In what ways do they not align?

3. In what ways do expectations and alignment vary by the classes of participants?

To further inform these questions, the study examines participant performance in completing a set of tasks within the wiki across the three participant classes: new, experienced, and design participants. The purpose of this distinction is to offer both a view of how platform knowledge shifts community expectations and phronesis and a way to complicate the nature of what constitutes a participant within a platform.

While there are many important ways to explore the demographics of participants, experience with the tasks at hand in the community is one of the most foundational (Hackos & Redish, 1998; Spinuzzi, 2003; Zhu et al., 2013). Thus, as a starting point, experience was selected for subcategorization in this dissertation.

From this set of questions and classification of participants, the study suggests a manner in which we can better understand and explore flat communities across content systems. This might include any number of loosely aligned collectives found regularly on online platforms: customers, activists, fandoms, citizens, geographic communities, and so forth. How these audiences might create their own customs and ethics in response to a specific platform warrants closer attention by platform designers and those attempting to reach these groups, as well as by the groups themselves. It might also potentially help explain how and why some of these groups act in aggressive or destructive ways. For example, do all social media mobs contain the same essential understanding of their platform agency and network, or do they react based upon customized expectations of how the platform operates and the best way to operate within it? These further explorations are among the paths of inquiry the dissertation hopes to open for other researchers.

1.5 The Approach

The study walks through a four-pronged mixed-methods approach to analyzing participant actions in a public wiki, with the goal of defining how participants articulate their purpose and methods for using an open civic system as a genre and how those purposes and methods match the capabilities of the participants within the platform when measured for participant performance. The following section briefly outlines the approach’s review of the literature, process for interviews, evaluation of usability tasks, and creation of actor-network theory (ANT) maps.

The first step in this process is to outline what the field of Technical Communication understands about wikis and participant performance in online platforms. As mentioned above, this revolves primarily around establishing a framework for defining the wiki as a platform and as a genre within the well-established frameworks for discussing rhetorical genre in Technical Communication (Williams, 2003; Ball, 2012; Sherlock, 2009; Spinuzzi, 2004). Once this review of wiki as platform and genre is complete, elements of public participation (Potts, 2014; Ferro & Zachry, 2014) and civic discourse (Castells, 2010; Coleman & Blumler, 2009) are layered upon this framework to provide a more robust theoretical lens by which to review participation within the civic wiki.

The second step consists of interviewing new, experienced, and design-level (those with programming experience) participants about their expectations and approaches within a civic wiki. The approach allowed participants to define the goals and methods by which they approached the wiki system, a vital means of exploring definition building within the community itself without biasing expectations (King & Horrocks, 2010). These interviews are then used to create six tasks, two from each participant group, to evaluate the effectiveness of participants in performing the tasks they identified as key to their use of the civic wiki.

The third stage is to evaluate participant performance based on those six tasks designed from the interview responses: two tasks designed from each group’s responses. The process includes established usability task analysis methods from Dumas and Redish (1993) and from Hackos and Redish (1998). It also looks to build upon Usability Studies that have begun to ask about the role of community ethos in platform performance (Hart-Davidson et al., 2008) and those looking for a more robust, or ecological/systemic, view of participant usability issues within systems (Still, 2010).

Finally, the dissertation takes the interview expectations and the performance data from the usability tasks to map the perceived and performed experience of participants within the wiki platform. In this way, the dissertation builds upon past explorations of network mapping in Technical Communication (Potts & Jones, 2009; Spinuzzi, Zachry, & Hart-Davidson, 2007; Frith, 2014). Specifically, the dissertation explores how actor-network mapping can illustrate the perceived network versus the performed network between groups of participants in order to determine how this informs participant use of a platform and the platform’s reaction to this use by participants.

1.6 A Word on the Timing of Data Collection

Data collection for this dissertation occurred in the summer of 2013. Both interviews and usability task analysis occurred during that time frame. Clearly, much has changed in the landscape of digital collaboration since then. The literature review in Chapter 2 addresses many of these developments, including increased partisan activism in social media, the rise of bot networks and hostile state actors, and growing concerns over moderation across social networks. That said, these issues are additive to the ecosystem described here, as Wikipedia and LocalWiki have both continued to grow since the study was done. Thus, the study offers a reflective moment in time that can help bridge the period between the rise of wikis in the aughts and the modern digital collaboration ecosystem.

1.7 Overview of Chapters

In this section, I outline the chapters that constitute the remainder of the dissertation. The chapters include: Literature Review, Methods, Interview Results, Usability Testing Results, Analysis of Participant Perceptions and Behaviors, Discussion, and Conclusions. The descriptions below serve to offer a brief summary of key themes in each section and how they serve the goals of the dissertation.


1.7.1 Chapter 2: Literature Review

In Chapter 2, I explore relevant literature in Technical Communication, Composition and Rhetoric, and Media Studies as it relates to platform governance, the ethics of participant interaction with platforms, the legacy of online platform evaluation (and wiki evaluation in particular), and how a method might be developed to explore these issues in a way that centers the participants while acknowledging the power and agency of the platform itself. The review serves to connect threads regarding community moderation, system usability, platforms as genre, and ethical governance as a cross-disciplinary issue that can inform important questions in Technical Communication and Rhetoric, especially with regard to how we consider platforms as genre and how participant interaction within platforms complicates these genres when we begin to ask what social action these participants might perform separate from the designed intent of the platforms.

1.7.2 Chapter 3: Methods

In Chapter 3, I explain the usability-focused mixed-methods approach I take to examining the LocalWiki Denton. I explain the interview structure and the usability testing methods applied to the study. Additionally, the participant selection process and the specific usability tasks chosen are explored in detail.

In sum, 15 participants were selected for initial interviews: 5 new participants, 5 experienced participants, and 5 designer participants. From these 15 interviews, a list of expectations about platform function and purpose was derived. I used these expectations to design 6 tasks (two associated with each type of participant). Then a second round of 15 participants (for a total of 30 participants) was selected to perform the usability tasks. These participants were again divided into the same three types. The data from the usability tasks became the performance data used to compare against participant expectation. Finally, maps were created of each participant type’s performance and perception for comparison.

1.7.3 Chapter 4: Interview Results

In Chapter 4, I present the results of the interviews from all 15 participants. I then present the interview data by type of participant. The chapter also explains how these results helped inform the selection of usability tasks. The results are broken into four main categories: purpose, participant actions, problems, and platform functionality. These categories were derived from a series of questions tailored to each type of participant and relevant to their experience with LocalWiki Denton. For instance, new participants were asked more generalized questions about what they expect from a community wiki, while experienced participants were asked about their specific expectations of the LocalWiki Denton. The chapter then relates how the interviews informed the design of the six tasks used in usability testing.

1.7.4 Chapter 5: Usability Testing Results

In Chapter 5, I present the results of the six usability tasks. Results are presented across each task as well as by the participant types: new, experienced, and designer. The chapter presents the results across a variety of usability metrics, both qualitative and quantitative. The results include participant satisfaction scores, failure rates, errors, participant paths, and problem-solving techniques. Secondary metrics meant to inform these primary results are also included: number of mouse clicks and time between input.

1.7.5 Chapter 6: Analysis of Participant Perceptions and Behaviors

In Chapter 6, I analyze the results from the previous two chapters in order to map both participant perceptions of their experience in the platform and how the usability tasks demonstrate they performed that experience. Eight maps are created: two for each type of participant (expectation and experience), while a final pair of maps is created across the joint experience of all participants. These maps illustrate the extent to which participants’ perception and experience varied, and what specific elements resulted in the variation. For instance, participants varied in their perceived expectation of whether moderators and/or trolls might exist within their anticipated network. Thus, the maps helped demonstrate how a participant type’s belief in a moderator influenced how they approached tasks within the wiki.

1.7.6 Chapter 7: Discussion

In Chapter 7, I explore the application of the maps in Chapter 6 and what they offer for understanding specific issues in platform moderation and participation. In particular, I examine how they might assist in viewing the interactions between participants and platforms as rhetorically constructed and non-constant, even when design and functionality might be seen as constant from the viewpoint of designers. I also explore “instability actors” revealed by the maps, such as perceptions of non-existent moderators and trolls. These instability actors highlight how participants envision agency and audience beyond the physical reality of the platform, which in turn demonstrates how types of participants build phronesis based as much upon social expectations as platform functionality. Finally, I explore limitations of the approach employed in this study and suggestions for future research.


CHAPTER 2

REVIEW OF THE LITERATURE

In this chapter, I review pertinent concepts in Communication Studies and Technical Communication as they relate to the interaction of platform governance, the history of wikis, how participant interaction informs platform function as genre, and how usability and network mapping as methods might inform these areas. The goal of this review is to explain the larger context for this study and begin to position the importance of evaluating a public knowledge base, such as LocalWiki, as a means to wrestle with the larger issues of platform moderation and governance in the public arena. Specifically, the chapter acknowledges the vast previous research in wikis and digital platforms while highlighting the need for an understanding of digital governance within collaborative media that accounts for the divergent genre/purpose expectations of all participants. It is advancing how we see these divergent purposes manifest that drives the dissertation. To accomplish this contextualization, the chapter considers the following areas:

• Platform Governance and Participatory Culture

• Wikis as a Digital Collaborative Platform

• Platform and Participants as Genre Actors

• Usability, Participant Experience, and Evolving Agencies as Theory and Method


The progression of topics offers an overview of the key theories that relate to issues surrounding knowledge-making and platform governance while also placing the particular platform observed at the center of the discussion. The overview also serves to highlight the limited framing around governance and wikis as public platforms after the rise of social media. First, I explore the complicated space around governance and community ethics in online platforms. Then I explore the literature explicitly defining the wiki as a platform, with a focus on how Technical Communication has viewed the wiki as a genre, including its platform functionality, participants, and purpose. Once the literature explaining the complicated network of actors involved in wikis is established, I review how genres as networks have been examined previously in Technical Communication. Finally, I explore some groundwork principles in practical methodology around usability and genre with the goal of highlighting how these methods can better inform platform governance by revealing what participants perceive within a platform versus what exists within the platform and how that informs their behavior.

2.1 Platform Governance and Participatory Culture

Over 15 years ago, Henry Jenkins and David Thorburn (2003) opened their edited collection on Democracy and New Media by noting that predictions of a digital revolution seemed stymied by the steel-like grasp mass media in the form of television and film held upon audiences (p. 13). By contrast, while they acknowledged those on the far political left and right often discussed a digital revolution, a wider audience for such a radical shift did not exist online. Five years later, Jonathan Zittrain (2008) would point to the incredible success of Wikipedia, launched in 2001, as a possible representation of an evolving sense of participatory digital civic identity, a netizenship. This concept of netizen, Zittrain explained, derived from how “tools and conventions facilitate a notion of ‘netizenship:’ belonging to an Internet project that includes other people, rather than relating to the Internet as a deterministic information location and transmission tool or as a cash-and-carry service offered by a separate vendor responsible for its content” (p. 142). The importance of Zittrain’s emphasis upon people, conventions, and tools as equal parts of this netizenship is something that might draw some familiarity with even earlier descriptions of an arising internet ethic.

More than a decade earlier, in 1995, Negroponte highlighted the developing netiquette of Internet communication in juxtaposition with the developing “street smarts” of those learning by play on the internet. Negroponte’s view of street-smart internet participants had something of an aspirational tint to it in 1995—and that is in keeping with Zittrain’s admiration for Wikipedia—seeing these street smarts as a generational marker between those who had grown up experiencing the Internet and those who had come by it as a tool of labor and adulthood. Currently the scholarship covering online community street smarts has taken a distinctly critical turn in the works of scholars like Whitney Phillips (2015), Zeynep Tufekci (2017), Joseph Reagle (2015), Siva Vaidhyanathan (2018), and Tarleton Gillespie (2018), who saw the rules arising in network communities to be less the rigorous mission-driven values of Wikipedia and more the attention-driven values of online activists and trolling communities.


For example, after initially supportive work on the role of social media in supporting activists during the Arab Spring (2009), Tufekci became more critical by 2017 in exploring how sites like Google and Facebook had taken advantage of network externalities not only to acquire a staggering 1.5 billion participants, easily surpassing any single traditional mass media provider, but also to pose significant threats to the ability of even nation-states to control information (p. 138). While Tufekci framed her analysis in relationship to the Arab Spring, the same lack of governmental oversight that fed revolution in Egypt also contributed to the Russian ability to organize protests and counter-protests within the United States via Facebook in 2016 (Albright, 2017). Gillespie touched on similar concerns regarding the confusion between social media platform power and traditional political platforms (2009). In his exploration of platform rhetorics, Gillespie questioned the effect of confusing the variant meanings of platform with one another. The confusion of a civic platform with a corporate platform drew particular criticism. Gillespie’s concerns echoed some of Zittrain’s concerns about Facebook and Twitter in how the social media platforms positioned a closed corporate ethos as the same as a free public square ethos. All three scholars noted that the corporate platforms’ need to exercise control and satisfy investor concerns ran in opposition to the typical needs of democratic open expression, transparency, and accountability. Zittrain (2008) in particular questioned what type of citizenship would be raised by participants operating within these closed platforms and the loss of innovation outside their constraints.


Yet, outside these closed platforms, scholars such as Whitney Phillips and Adrienne Massanari raised concerns about the nature of Negroponte’s street-smart education occurring in the “open” internet. Chan1 culture, in particular, raised red flags about some variants of the Internet’s emerging ethos. Phillips (2015) noted that trolling culture had arisen in a generative manner out of a “lulz” culture (p. 30). The trolling ethos of those who frequented Chan boards not only sought to cause extreme harm for laughs but also fed upon the rage their harassment caused to keep the community engaged. Massanari (2016) connected this behavior explicitly to Chan harassment campaigns like GamerGate, where deeply personal and intimate rumors were used online to harass a number of female game designers, journalists, and academics into quitting their professions. These concerns about trolling across the Internet have significantly increased over the last five years due to campaigns like GamerGate, QAnon, and the rise of the Alt-Right’s online movement (Gillespie, 2018; Jhaver, Chan, & Bruckman, 2017; Mortensen, 2016; Reagle, 2015).

As these scholars demonstrate, from Wikipedians to Twitter activists to Channers, the ethics arising from the street smarts of the Internet have become diverse. The diversity described also seems entwined with not only the platform, but also the purpose of the platform for particular sets of participants. As Zittrain acknowledged, this engagement has been a confluence of convention, tools, and human participants. The arising variety of Internet ethics points toward the role of what Carolyn Miller (1984) called genre as social action, and it is this confluence of convention, platform, and participants that often interests those in the fields of Technical Communication and Rhetoric. While many scholars in Communication Studies have focused upon governmental and policy issues related to these online platforms, Technical Communication has had similar discussions about the intersections of people, purpose, and platform technology in content management (Hart-Davidson et al., 2008; Andersen, 2014; Hackos, 2016), social media (Balzhiser et al., 2011; Potts, 2013; Vie, 2008; Edwards & Gelms, 2018), and wikis (Jones, 2009; Barton & Cummings, 2008; Mader, 2009; Manion & Selfe, 2012). It is important to briefly connect these areas of study in TCR to the wider concern around platform ethics and netizenship.

1 Chan culture refers to a set of anonymous message boards (such as 4chan and 8chan) where anonymity is strictly enforced and hyper-aggressive behavior is a socially enforced norm. These spaces are widely known not only for their extreme vulgarity but also for racism and organized harassment campaigns on other platforms.

Rude (2009) positioned a central question in Technical Communication: “How do texts (print, digital, multimedia; visual, verbal) and related communication practices mediate knowledge, values, and action in a variety of social and professional contexts?” The question connects to a number of the issues driving the discussion about platform governance because it asks how these platforms come to express the values and activities of their participants. Moreover, it adds the role of mediated knowledge to the discussion of governance in these texts. Potts (2013) takes the knowledge question as a means to investigate ways in which information in social networks becomes knowledge. In discussing how people respond to disasters, for instance, Potts highlights the role of human and non-human actors in forming expansive networks capable of verifying and vetting information into knowledge (p. 96). While the networks Potts discusses tend to be larger and flatter, Hart-Davidson et al. (2008) discuss a similar process in terms of phronesis, arguing that the workflows of an organization contain embedded values for that organization.

Even though Hart-Davidson et al. apply this standard to content management workflows, others have pushed the nature of genre as a way to evaluate governance. Sherlock (2009) identified the activity of player grouping within online gaming as a means for players to organize and integrate a network of genres that serve a central activity—Sherlock explores grouping as a type of genre ecology, a concept explored more in section 2.3 along with activity theory and actor network theory.

Also examining the role of user-generated networks, Jones (2014) has explored how hashtags allow participants to circumvent the typical functionality of social networks like Twitter to create embedded mini-networks with new purposes and activities associated within these hashtag networks. Notably, Jones stated that hashtag networks have a tendency to speak to one another and serve as a method of information switching from one network to another. Jones’ work pre-dates more recent examples of more tightly organized hashtag communities like #BlackLivesMatter and #GamerGate, serving to establish the history of Technical Communication in asking how participants work with and against platform functionality to fulfill Rude’s focus on mediated knowledge, value, and action. In fact, Chess and Shaw (2016) documented how the harassment campaign that formed the #GamerGate hashtag used academic conference hashtags as a way to target academics and push anti-feminist messaging, further stating, “Anyone who had ever written anything that might be accused of being feminist, Marxist, or really anything less than ‘Science’ became a target of GamerGater ire” (p. 27). In short, Technical Communication has often explored Rude’s question as one that either informs governance (such as grouping or hashtag switching) or one that promotes values that govern a community (including phronesis and knowledge-vetting).

The wiki platform has similar governance issues to those of other digital platforms. Barton and Heiman (2012), in a study on wiki use in the Technical Communication classroom, highlight that while wikis enhance collaboration, there exists no proof that they flatten hierarchies—in other words, they rely upon leaders and structures for organization and governance. Cummings and Barton (2008) also note in their history of the wiki as a platform that the defining traits of wikis involve their purpose, relative openness to participation, and their methods of creating knowledge. Again, the descriptions harken back to Rude’s emphasis on mediation (method) and action (purpose), though Barton and Cummings highlight audience explicitly rather than the context of the social action.

In short, the literature points toward a growing, though not new, concern about content governance. These concerns manifest most intensely on digital platforms other than wikis, but research into wikis suggests that they too require organizing structures. This requirement is part of what makes LocalWiki’s lack of structure interesting, as it offers a chance to see how different groups of participants imagine and enact governance within the platform. For Technical Communication, the way participants enact governance can inform how the medium transforms information into knowledge.

2.2 Technical Communication and the Wiki

In this section I review the literature surrounding wikis, with an eye toward work addressing genre and generative production in wikis—referring back to Zittrain’s framing of tools, conventions, and participants. I also review the role Technical Communication and Rhetoric has played in this evolving discussion.

A vast body of work has addressed the wiki in business (Majchrzak et al., 2013; Kankanhalli et al., 2005; Ferro & Zachry, 2014; McDaniel & Daer, 2016), the classroom (Barton & Cummings, 2008; Walsh, 2010; Manion & Selfe, 2012; Larusson & Alterman, 2009), and on Wikipedia (Decarie, 2012; Coles & West, 2016; Jadin et al., 2013). Beyond Wikipedia, the most commonly examined style of non-classroom, non-business wiki tends to be gaming (Mason, 2013; Sherlock, 2009). Thus, four broad classifications of wikis exist in the research: corporate, classroom, games/fandom, and Wikipedia/encyclopedic.

Attempts at more specific genre definitions of the wiki exist, but they have remained somewhat rare given the focus upon the wiki as an object of study in classroom composition within Technical Communication. Poole and Grudin (2010) identified three genres of corporate enterprise wikis: single contributor, group or project, and company-wide (pedia). The genre taxonomy of Poole and Grudin focused primarily on who used the wiki (contributor as audience) rather than its stated purpose, though a variety of purposes were recorded. Ferro and Zachry (2014) offered a taxonomy specifically in response to Poole and Grudin that moved away from textual genres toward what Ferro and Zachry labeled service genres. In the case of service genres, Ferro and Zachry grouped all wikis together as spaces “that can be read by individuals and edited/supplemented collaboratively by contributors” (p. 11). The broad definition of a wiki as service fits generally with definitions provided by Mader (2009) and Barton & Cummings (2009). However, Barton and Heiman (2012) expand the service and text definition of wikis by reframing wikis as communities within workspaces, thereby integrating the tool, the participants, and what the authors describe as “a situated civic discourse” (p. 51). It is worth pointing out how this again connects to a concept similar to Zittrain’s netizenship concern with tools, conventions, and participants.

Barton and Heiman’s discussion of a wiki as a situated civic discourse echoes Spinuzzi’s (2003) call for a similar space within workplace culture to promote the deliberative action of workers. While Barton and Heiman do engage with Spinuzzi directly in their writing, it is at the intersection of distributed rather than deliberative work; however, the idea of civic deliberation as a key toward the deeper social activity behind wikis is worth keeping in mind. In fact, a need to define the specific social action behind wikis is a central piece of what is missing in the understanding of the wiki as genre. One theme that can be drawn from this literature is that wiki exigence is frequently defined as civic, collaborative, and/or deliberative. It is a starting point for understanding the purpose and space of wikis as a tool with a nexus of purpose and audience within a social community.

As mentioned earlier, in explaining how wikis might best be adapted to the classroom, Barton and Heiman note that wiki research has not demonstrated that wikis flatten hierarchy. In fact, wikis more often are closely defined by the organizational structure within which they exist. The community’s exhaustive rules shape Wikipedia and its governing principles. Mader (2009) used structural organizational differences in audience and purpose to separate corporate wikis from Wikipedia. Barton and Heiman follow this line of thought to suggest Technical Communication classes should use wikis’ “organizationally situated civic discourse” (p. 51). Sherlock’s examination of grouping as an organizing principle also addressed the importance of wikis within gamers’ genre ecologies. The wiki has seen similar progression within Technical Communication, as overviewed earlier. In fact, Sherlock (2009) framed his examination of the WoWWiki as related to Spinuzzi’s examination of activity system breakdown and how workers/participants respond to those breakdowns. For Sherlock, part of the WoWWiki’s exigence rested in how it allowed players to innovate solutions in response to creating “habits, knowledge, and genres” (p. 277). While Sherlock focused on a wider ecosystem than one wiki, he classified the wiki genre as one defined by player innovation as workaround in the face of an incomplete game experience. Thus, how a wiki is governed, how its civic nature is organized, is key to understanding the specific communicative act the wiki performs. Or, in other words, researchers have regularly returned to the priorities of organizing values within wikis as a means to explain the wiki’s genre purpose and the expected conventions of its participants.

To more thoroughly grapple with what it means for a wiki to be driven by its participants’ conventions and values, an understanding of what constitutes audience within a wiki must also be reached. Many wiki studies define the wiki as a collaborative space (Mader, 2009; Barton & Heiman, 2012; Manion & Selfe, 2012;

Clark & Stewart, 2010). The wiki as collaboration has been discussed as peer assessment in classrooms (Manion & Selfe, 2012), business project management

(Clark & Stewart, 2010), hybrids of classroom and project management (Walsh,

2010), and other forms of public, civic collaboration (Ferro & Zachry, 2014). Such focus upon the collaborative functionality of wikis often leaves the primary use of the wiki as a knowledge base less considered. While studies have shown participants are more likely to gather information from a wiki than contribute to it (Ferro & Zachry,

2014), the collaborative functionality often remains the focus of wiki studies in composition and Technical Communication. The emphasis appears vital to distinguish the utility of wikis in composition and writing studies, but it also might obfuscate the more common role of the genre as a knowledge creation tool, specifically a means to define concepts and gather facts around topics.

A great deal has been written about the nature of “lurking” in participatory communities (Van Mierlo, 2014; Sun, Rau, & Ma, 2014; Muller, 2012; Collins &

Nerlich, 2015). In fact, what was once largely an anecdotal law of participation regarding the 90:9:1 rule of online communities has seen increased empirical support


(Van Mierlo, 2014). The general rule of digital collaboration states that 1% of participants provide content, 9% edit/manage content, and 90% read content. In multiple open health networks, “superusers” were found to contribute 74% of the content, contributors 24%, and lurkers 1%. Studies of hashtag activism have found similar breakdowns between superusers, contributors, and lurkers (Trice, 2015). Such dynamics in open collaborative spaces can be lost in classroom and business studies where governance of the group shifts the dynamic expected in the public square.

However, while “lurking” might not be seen as acceptable in a classroom or workplace, such behavior is not inherently wrong and should not be dismissed when it occurs in public spaces.
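The arithmetic of the 90:9:1 rule described above can be made concrete with a brief sketch. The community size here is purely hypothetical, chosen only to illustrate how the rule of thumb partitions a participant base:

```python
# Illustrative only: a hypothetical community of 10,000 members partitioned
# by the 90:9:1 rule of thumb for online participation.
members = 10_000

lurkers = members * 90 // 100   # read content
editors = members * 9 // 100    # edit/manage content
creators = members * 1 // 100   # provide content

print(lurkers, editors, creators)  # 9000 900 100
```

Under this rule of thumb, the visible activity of a wiki reflects only a small fraction of its actual audience, which is precisely why lurker behavior deserves attention in public civic spaces.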

Lurker behavior thus invites three lessons in considering public civic spaces:

• they have decidedly different social dynamics for participants than classroom and corporate digital collaboration spaces;

• they have a silent majority whose main function is consumption, not contribution; and

• they have diverse reasons for choosing consumption over contribution that may not even relate in any way to the usability of the collaborative environment.

These points highlight that participation within a public wiki is a varied experience constituted by more than one audience and that the public wiki fulfills different purposes for lurkers and contributors.


In summation, the literature on wikis has highlighted four key categories: classroom, business, games/fandom, and Wikipedia. The field has described wikis as textual genres, service genres, and civic genres where the emphasis upon participation versus purpose has frequently been the dividing line between definitions. Finally, the field has paid close attention to how the nature of participants’ goals influences governance and the distinct roles of contributors versus lurkers within wikis. The complicated network of interactions between participants, conventions, and purpose helps explain why the wiki has offered a compelling space for examination within the field, particularly as it relates to genre. For this reason, it is worth taking a step back into theory to see how such a complicated network of actors and outcomes can be evaluated within the field.

2.3 Platform and Participant as Genre Actors

Technical Communication in the late 1990s formed an important nexus between genres and user-centered design that continues to inform both the methods and methodologies of the current field. Carolyn Miller’s (1984) view of genre as social action began this process by pushing genres from function to social context, placing more weight upon the social exigency that gave rise to a genre. Translation of

Bakhtin’s (1986) genre theories enriched this tradition by providing means to discuss the dialogic give and take between audience and speaker as utterances in negotiating the social acts surrounding genres. Shortly after these genre theories took root in composition and Technical Communication, so too did the rise of user-centered methods in Technical Communication research related to usability (Dumas & Redish,


1993; Hackos & Redish, 1998). These once parallel but increasingly intersecting trends of genre theories and user-centered methods gave rise to many methodologies, from Spinuzzi’s workplace activity systems (2003) to Potts’s social media ANT maps (2011) to Hart-Davidson et al.’s (2007) argument that Technical

Communication can be seen as a phronesis in its need to manage the organizational principles supported by content management as an institutional social act. Thus, substantial portions of the modern methodology of Technical

Communication originate in how various genre theories are reconciled against user-centered and usability-based methods.

Spinuzzi (2012) articulates activity theory as a means to understand inter- organizational collaborations, highlighting the value of understanding the “bounded hubs” in which these collaborations occur. Few metaphors better fit the collaborative production of wikis than bounded hubs of inter-organizational collaboration, though

Spinuzzi uses the approach to discuss physical co-locations within workplace environments. Wikis are tied directly to activity theory by Walsh (2010), who uses AT to discuss actions of students, instructors, and clients to negotiate outcomes. Walsh highlights the value of this lens to label types of outcomes within the service course classroom. The value of utilizing activity theory is that activity is occurring and can be clearly seen and linked to outcomes of collaboration. For Spinuzzi (2013), activity occurs at three levels: macro (culture), meso (goal), and micro (habit). He additionally clarifies these tiers as why, what, and how. The macro level operates at the value or cultural level, the meso is the human level of what is happening, and the micro level

addresses unconscious actions or habits that Spinuzzi claims explain how things are done. Sherlock (2009) notes that Spinuzzi’s tiered activity system attempts to account for genre at all three stages, highlighting that genre is not static but recognizable (as per

Schryer’s “stabilized-for-now” view), and Spinuzzi connects his sociocultural approach strongly with Bakhtinian genre theory and the impact of culture on establishing genres. Most importantly, Spinuzzi (2003; 2013) writes that genre activities must be traced at all three levels for a complete understanding of the genre ecology at play within an organization.

Activity Theory is not the only method of evaluating complex genre systems.

Potts and Jones (2011) utilized both AT and Actor Network Theory (ANT) in analyzing Twitter activity. More specifically, Potts and Jones applied ANT to map components of Twitter and two of its support tools: TweetDeck and Brizzly. The researchers then used what Spinuzzi refers to as third-generation AT to map relationships as either actions or operations. The incorporation of AT’s view of the active action and the passive operation offers an interesting rhetorical value in examining the structure of a system because it highlights the attention paid to particular functions, granting what Lanham (2006) introduced as the at/through mechanism for discussing the part of rhetorical engagement that is noticed and the part we read through. The rhetorical blending of Lanham with ANT and AT in such a manner offers significant opportunity to examine genre features within these systems across types of audiences.


Potts (2013), however, has also used ANT to map knowledge networks, specifically in her disaster response work. ANT, following the work of Bruno Latour, focuses on mapping networks of human and non-human actors to illustrate how groups form and shift in association with one another to illuminate actants—or the networks that perform an activity (Latour, 2005). That ANT focuses at the level of action rather than larger societal concerns is a recognized and accepted critique of the approach (Law & Hassard, 1999). Yet, Latour’s own connection of the flexibility of actants with the language of narrative and literature has kept the approach appealing, especially in connection to genre studies. That multiple names can overlap for an actor allows us to map the interconnecting levels at which a thing can be done: tool, individual, organizational.

As Spinuzzi (2015) notes, the symmetry of treating all actors as equal does not diminish our ability to revisit the network with other structures. Thus, while ANT allows us to see the equal roles of a reply button in a wiki, the participant entering the reply, and the culture that expresses the conventions of why a reply is needed, it does not prevent us from revisiting each of the layers and actors in turn to review what has transpired.

AT and ANT approaches enable us to review workflows and networks in useful ways. They highlight relationships between platform and participant agency while illuminating the multitude of levels at which actors function and overlap.

Additionally, by equating the value of actors, they allow us a means to visit actors we might otherwise dismiss out of hand. This ability to not dismiss actors easily is

especially valuable when attempting to map the interactions of a flat organization where participants might hold unexpected ideas about the actors and activities of their network. ANT in particular helps by showing what a localized space looks like for a specific purpose. This localization offers an opportunity to take a big idea like “digital platform moderation” and see what it looks like in a narrow case removed from global and cultural weights. By removing these global expectations, we empower the local participants within the network to own their space and potentially make visible viewpoints that might otherwise be dismissed.

2.4 Toward a Method of Tools, Convention, and People

Ultimately, what is relevant to this dissertation is a method capable of identifying and evaluating the effects of convention, tools, and people within online knowledge spaces, particularly as a means to explore ways in which governance arises from the interaction of participants and platforms. Usability Studies offers a method to meet these goals by encouraging a close examination of what participants see, say, and do

(Still, 2010) within a platform as a way to evaluate the platform’s attendance to participant purpose. In addition, usability offers a method to note how participants resist and work around the constraints of the platform (Spinuzzi, 2003) and how participants incorporate community virtue into their application of a platform (Hart-

Davidson et al., 2008). While this material is explained in far more depth in the next chapter, here I briefly review the three central methods applied in the dissertation: interviews, Usability task analysis, and ANT mapping.


2.4.1 Interviews

Interviews constitute a vital element of Usability Studies (Barnum, 2002; Dumas &

Redish, 1993; Hackos & Redish, 1998). Since the earliest days of usability as a method in Technical Communication, interviews have offered a process to understand ways in which participants see their context and describe the actions they take to perform in that context (Still, 2009; Spinuzzi, 2003). Additionally, interviews have been used regularly to understand how participants within a platform explain the values of their platform (Jhaver et al., 2018) and how participants define the nature of their collaboration (Spinuzzi, 2012). Beyond the field of Usability Studies, interviews have been recognized as a way to empower the subjects of a study by allowing them to self-define their context, needs, and goals (Foddy, 1993; King & Horrocks, 2010). It is the empowering nature of interviews as a tool for allowing participants to self-define their purpose, conventions, and context that makes the interview process worth highlighting as separate from Usability Studies alone.

2.4.2 Usability

Usability as a methodology is well-established within the field of Technical

Communication (Still, 2009; Spinuzzi, 2003; Hart-Davidson et al., 2008; Hackos &

Redish, 1998; Redish, 2010; Rivers & Soderlund, 2016). The method has been applied to workspaces, content management, social media, and a host of other contexts. In this section, I review some of the key practical concerns related to the method.

Usability Studies typically involves a series of tasks performed by participants to test that a system’s workflows are usable by participants as intended (Hackos &


Redish, 1998). Participants can (and do) bring their own goals to a system, but usability testing typically favors the designers’ intent (Barnum, 2002). The favoring of the designer over the participant is one reason activity systems (Spinuzzi, 2003) and

ANT mapping (Potts, 2013) have been used to illustrate participant behavior, resistance, and manipulation within organizations and networks.

In addition to participant performance within a system, Usability Studies can be used to explore participant satisfaction via the System Usability Scale (SUS). Debate exists about what SUS measures when it comes to participant responses. SUS scores most reliably measure perceptions of usability (satisfaction) rather than identifying causes of usability issues (Lewis & Sauro, 2009). In fact, it is often only in exceptional variance from the norm that SUS can offer firm readings on participant concerns (Sauro, 2011). Participants also tend to inflate satisfaction scores in post-test scenarios as a people-pleasing action (Barnum & Palmer, 2011), though comparative

SUS scores remain valid. While some studies have questioned the extent of SUS inflation (Kortum & Bangor, 2013), most acknowledge that the validity of SUS rests firmly in its ability to offer a “quick and dirty” (Brooke, 1996) evaluation across a base of participants or across websites (Barnum & Palmer, 2011; Berkman &

Karahoca, 2016)—or even divergent delivery systems for the same goal, such as voice versus text (Brooke, 1996).
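For readers unfamiliar with how a single SUS score is produced, the standard scoring procedure from Brooke’s instrument can be sketched briefly. The function name below is mine, but the arithmetic follows the published scale: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the summed contributions are scaled by 2.5 onto a 0-100 range:

```python
def sus_score(responses):
    """Score ten SUS Likert responses (each 1-5) on the standard 0-100 scale.

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are multiplied by 2.5.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each from 1 to 5")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A uniformly neutral respondent (all 3s) scores exactly 50.
print(sus_score([3] * 10))  # 50.0
```

Because the scale collapses ten responses into one number, the resulting score supports the kind of quick comparative reading described above, but not diagnosis of which feature caused a low rating.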

Another debate within Usability Studies is the number of participants for a valid study. Nielsen (2000) holds that five participants is sufficient to find most issues within a system, though Nielsen also clarifies that the use of five participants is meant

to be iterative over time. Faulkner (2003) established that 15 participants were sufficient to capture 97% of issues in a single pass (as a mean, with lows of 90% captured), though a five-participant group could capture 80% of issues in a single pass

(with a low group of 55% of issues found). Six and Macefield (2016) highlight that larger groups are preferred for statistical and quantitative comparisons, but the smaller

8-12 range is sufficient for finding the majority of qualitative issues. Indeed, qualitative studies in eye-tracking might use as many as 27-34 participants to verify meaningful scan paths (Eraslan et al., 2016). However, 5 to 15 participants remains acceptable for more qualitative approaches around discovery of issues (Still & Koeber,

2010; Cooke, 2010). The ability to map issues in a platform with a relatively small sample size offers one of Usability Studies’ major appeals.
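The discovery percentages discussed above are broadly consistent with a simple cumulative model often attributed to Nielsen and Landauer: if each participant independently reveals a given issue with probability p (commonly estimated around 0.31 in that literature), the expected proportion of issues found by n participants is 1 - (1 - p)^n. A brief sketch, with the caveat that the default p is an assumption drawn from that literature rather than a fixed constant:

```python
def discovery_rate(n, p=0.31):
    """Expected proportion of usability issues found by n participants,
    assuming each participant independently reveals a given issue with
    probability p (the cumulative model 1 - (1 - p)^n)."""
    return 1 - (1 - p) ** n

# With p = 0.31, five participants surface roughly 84% of issues and
# fifteen participants well over 99%, in line with the ranges cited above.
print(round(discovery_rate(5), 2), round(discovery_rate(15), 3))
```

The model also explains why small iterative rounds are favored: the marginal yield of each additional participant in a single pass falls off quickly.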

The field of Usability Studies offers a method to verify participant perception and goals versus actual performance within a platform. The task analysis creates a check on the interviews of the participants to see how they fulfill their stated goals and express their stated values within a non-speculative environment.

2.4.3 Mapping

The field of Technical Communication offers a wide variety of methods for mapping networks. As previously discussed, ANT maps (Potts, 2013; Read & Swarts, 2015) offer a means to describe the network of actors, human and non-human, that assists in completing a particular activity. Frith (2014) has discussed the use of social network analysis as a means to describe nodes and their weak and strong links within social media and texts. Network analysis (Read & Swarts, 2015) can also be used

to discuss how stable actors coordinate for a predicted outcome. Usability Studies also offers the means to map and visualize activities in more task-centered methods, such as workflows and heat maps.

Read and Swarts (2015) highlight ways in which ANT affords the opportunity to follow an activity as it transforms across contexts, allowing for considerable flexibility, though they additionally note that network analysis allows for a narrower scope that can aid in understanding how transformations occur. Essentially,

ANT allows us to visualize networked actors in a way that reveals relationship contingency that might otherwise be overlooked. Law and Hassard (1999) described this as a joint dissatisfaction with the way site-based analysis overlooked social construction, even as the evaluation of norms often seemed too abstracted to properly analyze local occurrences. Thus, the network creates an activity outside or between either context within ANT. For Potts (2013), this presents an opportunity to explore the process of knowledge-making.

Specifically, drawing upon Calhoun, Potts (2009) argues that ANT offers a means to visualize the work of networks in moving from data to information to knowledge. The transformations that concern Potts are about the manner of knowledge creation, whereas Read and Swarts (2015) focus upon ways in which ANT exposes functions of texts, knowledge work, and the relative instability of the environments of knowledge work, exposing how location and its relationships influence the development of a project.


Anders Blok (2010) explains the value of ANT mapping as a means to map networks of actors in ways that illustrate the tensions between the global and the local. He highlights how ANT “shares the Foucauldian concern with maps as expressive of power relations, [but] the two theories part ways when it comes to

‘totalizing’ power” (p. 899). ANT does not assume any overall creator, or architect, per Blok; it describes what a network looks like in a moment as a matter of actors working together to perform an activity. In this way, ANT pushes back against ideas of stability that have informed genre theory under Miller and Schryer for some time.

Blok goes on to explain that this tension between the local and the global suggests that multiple “globalities” exist at one time, negating hierarchy. The same spatial concern Blok discusses also applies to other hierarchies. Like the Local-Global tension, ANT mapping can recognize the New-Experienced-Design hierarchy as more a multiverse of potential networks. It can then do the same for the imagined-experienced network. Tommaso Venturini (2009), in fact, emphasizes that ANT researchers should follow the actors, then the relationships, then the “cosmos”. For

Venturini, ANT does offer a sense of stability in this “cosmos” phase, as he claims all actors seek stable networks. Stability then arises in the specific set of actors achieving a network, thus suggesting that perhaps there is stability for each set of actors that creates new, experienced, and design networks. This stability, when applied to Technical

Communication, Rhetoric, and Composition, might indicate that a wiki is not one genre but a set of separate stable genres dependent upon the actors engaged in its activity. If

those actors form a stable enough network, then they may meet Miller and Schryer’s definition of a reliable genre.

By offering a means to map actors for all of these networks as equal representations of an activity, such as contributing and building knowledge within a

LocalWiki, ANT mapping opens the possibility that the imagined wiki genre of the new participant exists at the same time as the intended experience of the designer. What

ANT mapping cannot do is evaluate the outcomes of these networks, nor say that an imagined new-participant network offers the same level of performance as the design participants’ network. All it can do is expose the agents and relationships that constitute one or the other.

What these mapping opportunities offer is a method to approach the social action and rhetorical opportunities of communication in a way that includes participants, platforms, and the knowledge work of each. Additionally, such mapping enables a justification to flatten across human actors, setting aside experience as a degree of distinction, when considering new, experienced, and design participants operating within a digital system. Given these approaches, in the following chapter, I outline the methods of interviewing, task analysis, and mapping that help illustrate a mechanism for conjoining this approach as a means to answer the questions in this dissertation.


CHAPTER 3

METHODS

The dissertation utilizes a mixed-methods approach grounded primarily in the tradition of Usability Studies in the Technical Communication field but applied to the broader concerns of user experience within online platforms. The goal of the study is to demonstrate how results from participant interviews and usability task analysis can be combined to map complex participant workflows. More specifically, the study examines three questions:

1. In what ways do participants describe the platform environment and expectations for that environment?

2. In what ways do participants’ actions align with their expectations for the platform and its environment? In what ways do they not align?

3. In what ways do expectations and alignment vary by the classes of participants?

The mixed-method approach is intended to respond to Still’s (2009; 2010) challenges that usability move toward a broader analysis of participant context and ecosystems, as well as Potts’ (2013) challenge that we consider the user experience of knowledge work in open source communities. The approach attempts to answer these calls by exploring what multiple sets of participants imagine/speculate a system to be within their interviews versus how they perform within the applied system in a set of usability tasks designed from the interviews. In Chapter 6, both the speculative and

applied systems are mapped using the results from the interviews and the task analysis for each class of participant.

To make this argument useful for the possibility of mapping speculative and applied systems, the study drew data from three types of participants (new, experienced, and design) using two methods: interviews and task analysis. Three types of participants are important to note because part of the value of this study is to evaluate the extent to which speculative and applied systems diverge or agree across classes of participants. Such differences can point to important trends in how adaptable functionality, moderation, and literacy might need to be to accommodate the needs and expectations of all participants.

The present chapter outlines the specific manner in which usability was applied in the study by dividing the approach into three key steps: interviews, task analysis, and workflow mapping. The interview section covers how questions were generated, participant selection, interview process, and coding of responses. The results of the interviews and coding are included in Chapter 4. The task analysis section in Chapter

5 explains what tasks arose from the interviews, participant selection, process of user testing, and what data was examined. Finally, the workflow mapping explains how participant data was used to build workflows and maps from the task analysis in

Chapter 6.

The core methods included recruiting 30 participants (15 for interviews and 15 for usability testing), designing the interview questions, conducting the interviews, creating participant tasks from the responses, administering the usability tests, and

analyzing the usability tests from a qualitative and quantitative perspective to answer the core questions proposed in this dissertation. In the case of both the interviews and usability tests, the 15 participants broke into three sets: five novice participants, five experienced participants, and five design participants. As explained in Chapter 2, a group of five participants offers sufficient reliability for identifying 50-80% of usability issues, and 15 participants offers over 95% reliability when all groups are combined

(with an outlier low of 90% of issues). Again, these percentages apply to the discovery of issues and not to deeper comparative issues requiring statistical significance. The participant classes were selected based upon common behaviors within wikis and forums: readers/lurkers, active posters/contributors, and those with programming and design experience. The profiles of these participant types were as follows:

o New Participants included individuals who had never used the Denton LocalWiki before in any capacity and did not have coding experience. They had used some other wiki previously.

o Experienced Participants included participants who had used the Denton LocalWiki before, but who did not have coding experience.

o Design Participants included individuals who had used the Denton LocalWiki and had experience with PHP or MySQL.


3.1 The Parallel Role of Interviews and Usability in Examining the LocalWiki Denton

While surveys and interviews are a key part of Usability Studies (Barnum, 2002;

Dumas & Redish, 1993; Hackos & Redish, 1998), this study employed interviews in a particular way to evaluate social usability in the Denton LocalWiki. An advantage of interviews is that they allow participants to construct their own definitions of what the system should be and the goals they expect from the system (Foddy, 1993).

By allowing each set of participants to construct their own array of purposes and context, the interviews allowed the participants to establish their specific definition for the wiki genre in which they operated. Due to both the lack of a clearly defined genre for public wikis and the lack of explicit governance within the LocalWiki Denton, these participant-defined traits seemed especially appropriate to the study.

These definitions then provided a means to generate tasks from the participant base: both new and experienced participants as well as the designers that typically establish such tasks. Allowing participants to define their tasks outside of the environment of task performance allowed the study to establish tasks specific to each user base as social expectation rather than system expectation, so those tasks could arguably represent a specific exigence for that base and not simply the desired outcomes of the designers or limitations of the system. The reason for looking at participant expectations beyond system allowance was to better understand how participants incorporated expectations and social goals into the system’s actual limitations.


For example, consider the backchannel wiki trait reviewed in the literature, wherein most wikis offer some form of running discussion commentary about each page and/or about governance more generally. If a participant expected a place of communication with other participants to discuss wiki content and could not find one, would they create one or settle within the perceived limitations of the system? The choice across participants would help define the role and importance of the backchannel to particular sets of wiki participants.

In looking for genre traits as emergent properties in social usability, it was vital to allow participants an opportunity to create emergent definitions. However, the goal could not be to capture all emergent goals of all participants. A sample of 15 participants was chosen to link with established usability protocols of multiple five-participant iterations in usability testing to ensure the majority of issues became apparent (Faulkner, 2003; Nielsen, 2000). Interview process specifics can be found in

Section 3.3.

Task design was performed only after analyzing the interviews from the three core groups: new participants, experienced participants, and designers. Two of these participant types map closely to Barnum’s (2002) roles in usability testing (p. 90). The new participants in this study are the same as Barnum’s novice participants, while experienced participants mapped closely to Barnum’s competent participants. However, design participants were a bit different. These most closely resemble Barnum’s expert participants, but also possess the skills needed to understand the underlying structure of the system at the code level.


Frequently, task analysis means evaluating participants based on the tasks designers or system implementers intended for participants to perform and creating tasks from the top down (Hackos & Redish, 1998). While participants can bring their own goals to the system, these goals are usually derived from the designers’ intent

(Barnum, 2002). However, such a process seemed a poor fit for an open source project like the Denton LocalWiki that lacked a traditional structure of management or governance. Even a balanced approach of placing participant goals and designer goals on par with one another seemed too restrictive given the community focus of the system where participants frequently generate system goals of their own. As stated in

Chapter 1, by the time this study had begun, the Denton LocalWiki effectively had no supervision or organizing body of any kind. Thus, tasks could not be created in the traditional top-down or power-balanced manner. In fact, the emergent needs of participants were arguably not just the primary goals of the system but the only goals of the system by this point. Just as importantly, though, allowing community members to define the tasks spoke more honestly to the social usability goals of the Denton

LocalWiki project and the pursuit of discrete genre definitions.

These interviews also helped with coding categorization of the participants by allowing closely defined categories to self-define issues. The process allowed me to code the participant categories of new-, experienced-, and design-participant based solely on system exposure demographics (novices were those without exposure to the

Denton LocalWiki, experienced participants had exposure but no advanced coding skills, and design participants had both exposure to the wiki and advanced coding

skills). Allowing the tasks to emerge from the interviews of these closely coded groups allowed me to avoid making too many assumptions about participant goals in advance.

The development and analysis of the interviews and tasks are explained in their respective sections below. Before describing the process of interviews and task analysis, it is important to describe the method of solicitation and selection for both those interviewed and those who performed the usability tests.

3.2 Call for Participants

The study consisted of two key phases with 15 participants at each phase: phase one was the interview discovery phase, and phase two was the participant testing phase.

3.2.1 Participants

In both phases, 15 participants were selected: five new participants, five experienced participants, and five design participants. As explained above, I defined each participant type by two factors: coding experience and experience with the LocalWiki Denton. In making these selections, I was guided by Richard Lanham's style/substance spectrum from The Economics of Attention. In the text, Lanham highlights how signals and perceivers both function on an at/through spectrum. For signals, Lanham argues that certain media are less expressive and meant to be looked through; these are high-content, low-awareness signals. Other media are meant to be looked at due to their nature as art or rhetorical objects. When dealing with a communication system as complicated as a wiki, it would make sense that different functionality might exist at different places within the at/through spectrum. Thus, how participants engage with different functions within the wiki can highlight placement along this spectrum and highlight the attention these elements invoke within sets of participants.

Lanham also places perception along an at/through spectrum. Because of this, how participants read wiki content can also inform how aware they might be of the delivery system versus the content. In this way, we might expect inexperienced participants to associate wikis more heavily with content and outcomes, whereas more experienced participants might define them by functionality and constraints as well as content and outcomes. The balance between these two types of perception can tell us about the level of sophistication in a participant's awareness of the system. If this type of categorization held true within classes of participants, it would also help outline how genre traits varied based upon whether participants defined exigence through outcomes, functionality, or both.

The goal of participant selection was to find a cross-section of participants who could look at/through the wiki in different manners. New participants would look at the wiki as they attempted to understand its use. Experienced participants would presumably be more inclined to look through the wiki as they attempted to complete tasks. Design participants would be the most likely to switch between the two: familiar enough to look through the wiki, but also holding sufficient technical experience to stop and occasionally look at the construction of the wiki. The focus on MySQL and PHP experience is because they are the most common languages used for wiki web app development and wiki database management, respectively. The goal was to find participants with a skill set that suggested they could look at the wiki structure as well as through it, thus marking a particular level of literacy from Lanham's perspective as discussed above.

3.2.2 Selection

Selection for interviews and task analysis testing occurred separately but followed the same approach. I solicited interview participants first and task analysis participants a month later. In both cases, I relied upon a number of avenues to solicit participants:

• I emailed wiki participants who listed email addresses in their public profiles.

• I posted a call for participation to the local Facebook group for participants and for editors.

• I posted requests to local universities in the Denton community.

Many new participants came from the third option, while the first two were most effective at contacting experienced participants and design participants, as shown in Table 3.1. Those interested in participating at each phase completed a survey (Appendix A) and were classified as new, experienced, or design participants based on the survey. I then approached respondents on a first-come basis. Those who participated at any phase received a $10 Amazon gift card for participation.


Table 3.1. How participants were selected, broken down by outreach method

Interviews       New  Expert  Design  Total
Direct Email      0     2      4       6
Social Media      0     1      0       1
Local Outreach    5     2      1       8

Usability Tests  New  Expert  Design  Total
Direct Email      2     4      5      11
Social Media      0     0      0       0
Local Outreach    3     1      0       4

3.2.3 Order of Participation

All 15 interviews in phase one were completed, transcribed, and analyzed prior to the beginning of task analysis in phase two. Since two tasks were created from the interviews of each participant group, the hard break between phases was a necessity. About three weeks passed from the completion of interviews to the start of user testing.


3.3 Phase One: Interviews

Interviews provided the first step in qualitatively defining key aspects of the LocalWiki Denton system. Because interviews have a long history of empowering participants by giving them the room to define the problem and focus of a study (Foddy, 1993; King & Horrocks, 2010), interview questions (see Appendix C) were used to help each set of participants define their experience with the wiki, what parts of the wiki they could articulate, what functionalities they could articulate, and what they saw the purpose of the wiki to be.

Mapping these responses helped to identify elements that each role saw as essential when creating the usability tests for the second part of the study. The responses were also compared to participant performance in the tasks to evaluate how expectations mapped to task interpretation.

3.3.1 Question Design

Open-ended questions were used to maximize participant input. These types of questions help create organic definitions from the user base (Foddy, 1993). The questions focused on how participants conceived of a wiki and what tasks they expected writers and readers of a wiki to perform. The interviews invited responses on purpose, functionality, and content, but stopped short of directing any specific categories within those three areas. There were no questions about specific wiki traits as defined in Chapter 2. Rather, the questions sought to see whether groups generated elements fitting these traits based on the broader categories of purpose, functionality, and content.


The complete list of questions across all participant types was:

• What makes a good LocalWiki entry?

• What is the key to successfully making a good edit on LocalWiki?

• What do you feel is the purpose of a local wiki?

• What makes a good wiki entry?

• What features do users use most on a LocalWiki?

• What features do you think are most important on a wiki?

• What features do you think are least important?

• What problems would you expect on a local wiki site?

• How do you resolve conflicts?

• How do users collaborate on a LocalWiki?

• How do users resolve conflicts?

• What do users do on a local wiki site?

• Which features do you feel work best?

• Which features need the most improvement?

The collection above represents the total set of questions across sets of participants.

The specific questions for each user group are covered in Chapter 4 along with the responses and analysis of interviews. As demonstrated above, the interview process was highly structured. The method ensured that the interview process would remain consistent across multiple methods of delivery: email, chat, and phone. Since not all respondents could be available for synchronous interviews, the structured format offered a reasonable tradeoff of accuracy over breadth. Had the interviews, rather than the task analysis, been the primary result of the study, the structured responses would have been less optimal.


3.3.2 Interview Process

Most interviews were conducted by sending participants questions via email. This was done to elicit a wider range of responses across all sets of participants. Since there was no workplace or physical location where participants congregated, arranging meetings proved prohibitively difficult, as participants working on a voluntary basis felt little incentive to arrange sit-down meetings. Additionally, the email method substantially lowered the bar for participation, which spoke directly to the desire to include individuals who might more traditionally be considered lurkers rather than heavily invested members of a wiki community. Three participants provided interviews via phone or Skype sessions. These interviews yielded more data on average, which was considered when coding the interviews during analysis.

3.3.3 Interview Analysis

Interviews were coded by looking for activities and functions within the interview. Each mention was then graded as negative, positive, or neutral in participant context. Two people independently coded each interview, and the results were then negotiated to reconcile any disagreements. The final coded segments were clustered into types, and two tasks for each group (new, experienced, and design) were then designed from these clusters. For example, the first task is locating information on a specific page with partially missing information. The task was designed from new participants' strong belief that a wiki's purpose is to provide information and to be read.


The coding scheme relied on descriptive topic coding that looked for functions and attributes describing the wiki and process coding that looked for gerunds. Both processes were informed by Saldaña's (2016) review of qualitative coding practices: namely, descriptive topic coding in round one and process coding for actions in round two. The third round of coding examined the sentence-level responses for each function/attribute and process to code whether the participant assigned a positive, negative, or neutral response to the coded term. The goal here was to help inform task formation by loosely identifying goals and obstacles anticipated by the interviewees.
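A minimal sketch of how such coded segments might be tallied to support clustering is shown below. The record structure and names are illustrative only, not the study's actual coding instrument.

```python
from collections import Counter
from typing import Dict, List, NamedTuple, Tuple


class CodedSegment(NamedTuple):
    """One coded interview segment from the three-round scheme."""
    group: str    # "new", "experienced", or "design"
    code: str     # function/attribute (round one) or gerund (round two)
    valence: str  # "positive", "negative", or "neutral" (round three)


def tally_codes(segments: List[CodedSegment]) -> Dict[Tuple[str, str], Counter]:
    """Cluster coded segments by (group, code) and count valences,
    loosely identifying the goals and obstacles each group anticipates."""
    tally: Dict[Tuple[str, str], Counter] = {}
    for seg in segments:
        tally.setdefault((seg.group, seg.code), Counter())[seg.valence] += 1
    return tally
```

A cluster with many positive mentions of a function across one group would then mark that function as a candidate anchor for one of that group's two tasks.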

As can be seen above, certain questions were dedicated to this problem/purpose model as well in case coding could not effectively identify examples.

Once the interviews were coded and categorized for each set of participants, two key tasks were derived from each participant set based on the responses and the goals of this study. The results of that process are described in Chapter 4.

3.4 Phase Two: Usability Testing

The study used Morae software to record pre-test surveys and post-task surveys, observe six tasks, and record qualitative and quantitative metrics as described in the Test Plan (Appendix B). The six tasks (detailed in Chapters 4 and 5) were created based on the participant responses across all three roles from the interviews. The goal of the tasks was to determine whether significant differences in usability could be traced through time on task, clicks, failure rates, and descriptive System Usability Scale (SUS) scores (Bangor et al., 2009). In addition, the study used two qualitative measures to evaluate participant performance: a) a task analysis grid to mark performance measurements for errors in problem-solving choices (Dumas & Redish, pp. 184-186), and b) a mapping of the pathway of user workflow (Hackos & Redish, pp. 264-265) through tasks to determine the ways in which participants navigated the wiki and used its various elements to solve the tasks. The six tasks are listed below, while the justification for their selection constitutes the majority of Chapter 4:

1. Find the Hours for Open Mic Night at The Garage

2. Change the Date Found on “The Garage” Page

3. Add a Link from “The Garage” Page to any Other Page

4. Find the Businesses on The Square

5. Add a New Page for a Location to the Wiki (Can be a Fake Location)

6. Address Any Concerns You Have with the Cranky Goose Page

The combination of metrics allowed comparison of how different classes of participants performed at tasks preferred by each of the three classes. Qualitatively, the study reaches reliability by meeting usability standards for numbers of participants and by cross-checking interview data with performance data. Validity is reached by following standard usability processes as listed in the test plan in Appendix B.

3.4.1 Why Use SUS

SUS scores are most reliable at measuring perceptions of usability (satisfaction) rather than identifying causes of usability issues (Lewis & Sauro, 2009). While questions exist about what SUS measures with regard to the technology itself and to what extent, it remains widely used in usability, with recent studies looking at mobile apps (Kaya et al., 2019), assistive technology (Friessen, 2017), and medical equipment (Soares, 2018). SUS offers a key means to capture participant perceptions of a system, empowering participants much as interviews allow them to self-define the issues that matter most to them. Indeed, this study applies SUS to measure extreme perceptions of usability. In particular, the study sought to highlight SUS scores that averaged more than 85% or less than 50% as substantially deviant from the 70% average expected (Sauro, 2011). One key reason is to see whether participants perceive a system as usable if the applied system differs substantially from the speculative system. For instance, if a participant perceives that a moderator should perform key functions, but that participant cannot find functionality that supports moderator intervention, does this affect the perception of usability? Or does the participant simply adapt expectations on the fly? In this manner, the study is not concerned with identifying a usability issue but only with identifying whether the participant perceived a positive or negative experience based on unmet expectations for that class of participant.

It is worth mentioning that researchers have noted limitations in SUS's ability to gauge absolute satisfaction. Barnum and Palmer (2011) note that participants tend to inflate satisfaction scores in posttest scenarios as a people-pleasing action, even though comparative SUS scores can maintain validity. While some studies have questioned the extent of SUS inflation (Kortum & Bangor, 2013), most acknowledge that the validity of SUS rests firmly in its ability to offer a "quick and dirty" (Brooke, 1996) evaluation across a base of participants or across websites (Barnum & Palmer, 2011; Berkman & Karahoca, 2016), or even across divergent delivery systems for the same goal, such as voice versus text (Brooke, 1996).

In the case of this study, the goal was purely comparative across classes of participants and across tasks. The purpose of the SUS was to see whether classes were biased one way or another toward the LocalWiki Denton in perception and to judge whether that perception matched application within the LocalWiki Denton environment based on performance. As stated earlier, the study sought to highlight SUS scores over 85% or under 50% as significantly positive or negative. Quantitative and qualitative performance metrics were then used to see if SUS perceptions helped illuminate either why certain groups performed better or why participants established the workflows they did for each task.
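The standard SUS computation (Brooke, 1996) behind these scores can be sketched as follows. The scoring formula itself is well established; the `flag_extreme` helper simply encodes this study's 85/50 cutoffs and is illustrative, not part of the SUS instrument.

```python
from typing import List


def sus_score(responses: List[int]) -> float:
    """Compute a System Usability Scale score from the ten 5-point Likert
    responses. Odd-numbered items contribute (response - 1); even-numbered
    items contribute (5 - response); the sum is scaled by 2.5 to yield a
    0-100 score (Brooke, 1996)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i=0 is item 1 (odd)
    return total * 2.5


def flag_extreme(score: float) -> str:
    """Flag scores the study treats as substantially deviant from the
    expected 70 average (Sauro, 2011): above 85 or below 50."""
    if score > 85:
        return "substantially positive"
    if score < 50:
        return "substantially negative"
    return "within expected range"
```

For example, a uniformly neutral response set (all 3s) yields a score of 50, sitting exactly at the study's lower threshold.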

3.4.2 How Many Participants

The goal of 5 participants for each role was specifically chosen to provide reasonable accuracy for the usability testing by ensuring a baseline of 15 participants and a reliable evaluation of major issues for each class of participant (Faulkner, 2003). Sets of 5 participants have been deemed sufficient to catch about 85% of major issues for that participant group, and the larger set of 15 participants can capture about 99% of major issues (Faulkner, 2003). The use of 5 participants does have some division in the field: Nielsen suggests iterative tests of 5 participants to accumulate 15 participants over time, whereas Faulkner suggests a collection of participants in the same testing period to hit the level of reliability required (85% to 99%). In this case, participants were tested on the same instance of the wiki, resulting in 15 total participants, but only 5 participants per set. This made the subsets less reliable than the total group, but still capable of capturing 85% of participant errors for the set. The use of subsets is also well established in Usability Studies as a means to capture a wider variety of issues (Barnum, 2003). The sets for this study are explained in Table 3.2.
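The 85% and 99% figures are closely approximated by the standard problem-discovery model, 1 - (1 - p)^n, with a per-participant detection rate of roughly 0.31, the value commonly cited from Nielsen and Landauer's work. A quick sketch:

```python
def discovery_rate(n_participants: int, p_detect: float = 0.31) -> float:
    """Expected share of major usability problems found by n participants,
    using the 1 - (1 - p)^n problem-discovery model. The default p of 0.31
    is the commonly cited per-participant detection rate."""
    return 1 - (1 - p_detect) ** n_participants
```

Under this model, a set of 5 participants finds about 84% of major problems and the full group of 15 finds over 99%, consistent with the figures cited above.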

It is worth noting that the usability testing means the form of literacy privileged in this study is that literacy capable of being created through the wiki’s participant interface, versus coding literacy or social organizational literacy. Thus, the study looked primarily at the content held within the database fields and how that is generated, arranged, and consumed within the public user interface of the wiki system.

In considering interaction at the content level, one advantage of so many combined participants is that it grants a more thorough mapping of the entire participant ecosystem via workflows, and thus a better understanding of all literacies in comparison to expectations. That is to say that while more errors can be caught with more participants, so too can more choices and non-error variances be captured. The variance increases the range of workflows observed and helps better define workflows as an organic class within an anarchic community.

Table 3.2. Size of each group and effectiveness for capturing major issues.

Set         Number of sets  Members per set  Effectiveness
Individual        15               1         Only useful as observational data for exceptions
Group              3               5         85% of major problems for group
Whole              1              15         99% of major problems for population


3.4.3 Testing Process

Once the interview data was analyzed, six tasks were created: two from each class of user. Six tasks were selected because that was the most possible within a reasonable timeframe of 40 minutes per participant. Participants were pulled from the same population as the interviews, though not necessarily the same individuals. However, given the small population size, some crossover did occur.

Usability testing was observed and analyzed via Morae. The tests included a pre-survey, the six tasks listed in 3.4, short surveys after each task, and a brief retrospective recall session after the test to allow for a more natural task environment (Haak & Jong, 2003). Due to complications around privacy issues, participants themselves were not recorded on video beyond the session screencasts. However, the actions of the participants were captured in screencasts of their performance with the system.

Notes were also taken during observation and during retrospective recall.

The test used the live installation of the LocalWiki Denton site. Testing occurred in the field at libraries, classrooms, and personal spaces. The testing in personal spaces was done via Google Hangouts as a remote moderated test. This was seen as a minor change, as synchronous remote testing has repeatedly been shown to be as effective as in-person usability testing (Tullis et al., 2002; Andreasen et al., 2007).

Survey scores were based on a Likert scale and used primarily to determine satisfaction when a participant undertook a task outside of that participant's role, especially looking at variances beyond 15% from the expected 70% average. In addition, core metrics of task time, failure rate, and clicks were measured. Workflow maps were also generated based on participant choices, and key markers for the optional features used were evaluated. Features the study coded for included use of templates, commenting, body text editing, search, maps, tags, and tables. In addition, the study marked when participants demonstrated the need to go offsite to find incomplete information. These markers arose jointly from an examination of the functionality of the LocalWiki Denton and from the participant interviews conducted.

In summary, the following items were recorded for evaluation in the usability portion of this study:

• User workflow (path and choices)

• Participant option functionality choices

• Time on task

• Failure rates

• Clicks

• Retrospective recall responses

• Screencasts of user interaction with system

• SUS scores

• Observer notes

3.4.4 Evaluation Process

Since the focus of the study is on one system, the results are very much about the design structure of the LocalWiki system and how participants and designers function within that system. The methods are customized to best fit an open-source, non-directed system of civic engagement. That said, some key concepts of usability should still apply.

Mirel (2008) lists three key issues in evaluating the usability of complex systems:

• Participants are confident in work that is inaccurate.

• Participants misapply tool capabilities in ways that produce suboptimal results.

• Participants do not deem a system valuable and therefore stop using it.

All of these issues, especially the last, are major issues for sites like LocalWiki that lack central designers to maintain the site and rely upon participants learning the system well enough to keep it updated and appealing.

Redish (2010) expands upon this point to suggest that complexity is rhetorical, reminding us that what we think of as complex may not be complex to the audience, and vice versa. Such awareness of the rhetorical nature of complexity once again highlights the importance within this study of Lanham's (2006) oscillation of signals and perceivers moving between at/through as a means to compare participant groups. Thus, the study sought to avoid making assumptions about optimal solutions for any given task in order to see which group actually performed a task better.

The comparisons between groups took on four categories (see Table 3.3): performance metrics (time on task, failure rates, clicks, etc.), satisfaction metrics (SUS), workflow development (path through task, use of functions, and retrospective recall), and engagement with optional system functions (editing, commenting, tags, and templates). For each of these four categories there are 19 points of comparison to examine: individual performance (15), group performance (3), and overall performance (1).

Table 3.3. Categories for the usability study and the metrics used to evaluate each category.

Performance    Satisfaction          Workflow          System Transparency
Time on task   SUS                   Task path         Optional functionality used
Failure rates  Retrospective recall  Choices made      Retrospective recall responses
Clicks         Observer notes        Choices rejected  Screencast
Failure rates                                          Screencast

The overall performance provides a baseline for evaluating the competency of each of the three groups, while looking at individual participants can highlight certain observational variances that might be lost in aggregation. This process allowed the study to answer its three core questions by defining literacy as an outgrowth of these four categories (performance, satisfaction, workflow, and system transparency) that map to Mirel's concerns. The study then compared these results between groups without privileging any given group, both to address Redish's concerns about the rhetorical nature of usability and to evaluate social usability in the system by prioritizing multiple sets of participant goals.

Finally, the study mapped participant workflows through the tasks and across groups. The goal here was to create both maps of the participants' workflows and actor-network maps for each group to help illustrate whether the wiki itself shifted as a space (genre and networked system) based upon the category of participants. This final step relies upon the work of Potts and Jones (2011) in order to expand upon questions about how usability can better map participant ecology (Still, 2010) and how content management workflows influence participant exigence (Hart-Davidson et al., 2007).

These maps of how participants construct the wiki space will also offer more focused analysis when examining how usability can explore speculative and applied systems created by different classes of participants in Chapter 6. Prior to that, Chapters 4 and 5 will explore the results and analysis of the interviews and task analysis.


CHAPTER 4

TURNING INTERVIEWS INTO TASKS

This chapter introduces and overviews the 15 interviews conducted in the early stages of the study. The sequence of examination moves from interview questions to interview population to interview results to a review of tasks designed from the interviews. The analysis of the interview responses concludes with an explanation of how the tasks for the task analysis covered in Chapter 5 were designed.

Two concerns guided the interview process above all. First, the study needed a method that would empower wiki participants to offer a means to articulate definitions and values about the wiki’s purpose and construction. This issue was particularly important for new participants, who rarely have an opportunity to articulate design and purpose outcomes within Usability Studies focusing on task analysis.

Second, the interviews also needed to result in answers that could be applied to the main questions of this study. Because the study had concrete initial questions, interviews were not intended to offer a completely free-wheeling and generative set of responses from the interviewees. The goal was to present a specific set of categories that needed definitions and values assigned to them and to allow each group of interviewees to fill in the specific definitions and values as they were able.

For example, new participants within the wiki would not know its capabilities or functionality in any way; thus, questions needed to be guided in a way that would address the actual test environment while still allowing these participants to challenge the goals of that environment as much as possible. While experienced and design participants were more familiar with the environment, the questions needed to offer this same flexibility of definition to those types as well.

4.1 Questions

The list of questions for each population sought to elicit the actions and content that each group thought most appropriate for a local community wiki. The variant experiences of the groups meant slightly different questions for each. New participants were asked what they might expect from a local wiki, experienced participants were asked what they had experienced on the LocalWiki Denton, and design participants were asked questions of both types to take full advantage of their knowledge of the system. The questions for each group are listed below.

4.1.1 New Participant Questions

1. What do you feel is the purpose of a local wiki?

2. What would/do you do on a local wiki site?

3. What problems would you expect on a local wiki site?

4. What features do you think are most important on a wiki?

a. What features do you think are least important?

b. What are your favorite features on any wiki you have used? Why?

c. What are your least favorite features? Why?

d. How do you collaborate on wiki?

e. How do you resolve conflicts?

f. What makes a good wiki entry?


5. Do you have any experience with programming or maintaining a database?

a. If so, what is your background?

b. Have you programmed for a wiki or maintained a wiki database?

6. How would you define your level of experience with a wiki?

7. Do you have anything else to add?

4.1.2 Experienced Participant Questions

1. What do you feel is the purpose of a local wiki?

2. What would/do you do on a local wiki site?

3. What problems would you expect on a local wiki site?

4. What features do you use most on LocalWiki Denton?

a. What features do you use least?

b. What are your favorite features? Why?

c. What are your least favorite features? Why?

d. How do you collaborate on LocalWiki Denton?

e. How do you resolve conflicts?

f. What makes a good LocalWiki Denton entry?

5. How did you find LocalWiki Denton?

6. Do you have any experience with programming or maintaining a database?

7. How would you define your level of experience with a wiki?

8. Do you have anything else to add?

4.1.3 Design Participant Questions

1. What do you feel is the purpose of a local wiki?


2. What do users do on a local wiki site?

3. What problems would you expect on a local wiki site?

4. What features do users use most on a LocalWiki?

a. What features do they use least?

b. What are your favorite features? Why?

c. What are your least favorite features? Why?

d. How do users collaborate on a LocalWiki?

e. How do users resolve conflicts?

f. What makes a good LocalWiki entry?

5. How do users find LocalWiki Denton?

6. Do you have any experience with programming or maintaining a database?

a. If so, what is your background?

b. Have you programmed for a wiki or maintained a wiki database?

7. How would you define your level of experience with a wiki?

8. What are the key features of the LocalWiki system?

9. How does LocalWiki help users create a community space?

10. Which features are critical for users to use LocalWiki?

11. Which features do you feel work best?

12. Which need the most improvement?

13. What is the most important aspect of LocalWiki for a user?

14. What is the key to successfully making a good edit on LocalWiki?

15. Do you have anything else to add?


4.2 Population

The initial outreach for interviews resulted in 30 completed surveys for participation. Of those 30, 25 participants expressed interest in serving as interviewees. Of those surveyed who wished to participate, 5 qualified for evaluation as design participants, 10 as new participants, and 9 as experienced participants. This population was achieved after four months of outreach via email, social media, and posted notices in libraries and local universities.

The only population that proved problematic was the design participant demographic. While 5 people demonstrated qualified interest, only 3 eventually participated. This resulted in a compromise for the design participant population in which two of the actual designers of LocalWiki were asked to participate as part of the group. While they technically met the requirements for the population, their experience with the system also exceeded that of the other design participants in key ways.

In addition, the final population of interviewees skewed heavily male, with women constituting only 3 of 15 participants. Aggressive outreach to increase the number of women was attempted but unfortunately fell short. These numbers did improve during the second, task analysis phase (as is discussed in Chapter 5), but only marginally, to 5 of 15. The key reason for this disparity is that no women who responded to the outreach qualified as design participants. The discussion in Chapter 7 will address how outreach might improve upon this key area in future studies.


4.2.1 New Participants

The new participant population consisted of 2 women and 3 men. None possessed any experience with LocalWiki Denton or any other incarnation of the LocalWiki system. That said, all 5 had previous wiki experience, ranging from Wikipedia to game wikis. All 5 had visited a wiki multiple times over the past three months, but none had edited a wiki. Not only did the group have no experience with LocalWiki Denton, but their profile of visiting wikis without editing them also suggested a lurker/reader style of behavior within the wikis they had used previously.

4.2.2 Experienced Participants

The experienced participant population consisted of 4 men and 1 woman. All had been using LocalWiki Denton for at least a month, and each had visited the site within the last three months. Only 2 of the 5 reported having edited LocalWiki Denton, though all had read information from the site. None had any programming or coding experience. All 5 stated they also used Wikipedia, though none reported having edited Wikipedia.

4.2.3 Design Participants

The design participant population consisted of 5 men. Of the 3 design participants who completed the survey, all had experience with the DentonWiki and with key technical aspects of wiki design, such as PHP or MySQL. Each had also visited the site multiple times in the last three months. Additionally, all three had edited a wiki previously. Each had edited Wikipedia, and 2 of the 3 had edited the DentonWiki. Of the 5 total design participants, all had editing experience, all had edited Wikipedia, and 4 of 5 had edited LocalWiki Denton.

4.3 Results

The interview process occurred over three months of contacting survey participants and arranging interviews via email, phone, or chat. Once interviews were collected, the responses were evaluated for descriptions of purpose, problems, and functionality. In addition, each response for purpose, problem, and functionality was identified as positive, negative, or neutral in nature. The coding was done in three parts: I coded one round, a paid second coder then coded from my codebook, and we negotiated any differences. This section reports the results of that coding process.

4.3.1 New Participants

New participants primarily saw the purpose of a LocalWiki as a means to provide information. What information should be provided was largely left vague, though they tended to identify places, events, and localness as key features. In addition, new participants overwhelmingly identified search as the key activity of the wiki, with only one participant mentioning adding content as something you do on the site. They also felt maliciously inaccurate information was the main concern for a local wiki, describing such possibilities in terms of exaggeration, trolling, and joke articles.

New participants offered a broad selection of likes and dislikes about functionality, but a handful of commonalities appeared. Ease of navigation and internal links were frequent preferences of new participants. Additionally, new participants felt that what made a good wiki entry was accurate and concise information.


As for how disagreements over content in the wiki might be settled, new participants appealed to the role of moderators as necessary for resolving disagreements.

Table 4.1. New Participant Interview Themes

Participant One
  Purpose: To get direct, factual local information
  Participant Actions: Search for information, businesses, restaurants
  Problems: Embellished facts
  Functionality: Easy navigation, organized information, ability to flag misinformation

Participant Two
  Purpose: To inform about places (eat, shop), parks
  Participant Actions: Add locations
  Problems: Conflicting opinions, joke articles
  Functionality: Easy navigation, linking articles, search bar, need to improve articles

Participant Three
  Purpose: Provide searchable database for local events, traditions, places of interest
  Participant Actions: Search, read articles relevant to community
  Problems: Inaccurate info or trolling articles
  Functionality: Featured articles, advanced search options, related articles, hyperlinks, external links/sources

Participant Four
  Purpose: Inform them about local things
  Participant Actions: Search specific info
  Problems: None
  Functionality: Article outline after introduction, organization of information, easy to localize

Participant Five
  Purpose: Provide information for people new to or visiting town
  Participant Actions: What events are happening
  Problems: Site bombing (negative posting after bad experience at a location)
  Functionality: Current info from locals

4.3.2 Experienced Participants

Like new participants, experienced participants saw the wiki as providing local information. However, whereas new participants referred to information broadly (except for places and events), experienced participants had slightly more specific takes on what the information should be. Experienced participants commonly referred to organizations, businesses, community, history, and events. These categories, while still quite broad, showed a narrowing from ideas like “information” and “things.”

However, experienced participants still mainly saw the purpose of the wiki as one of “finding.” While again listing more types of information to find than new participants, experienced participants did not expand on the nature of the activities one performs on a wiki.

As for the problems that a wiki faces, experienced participants still found accuracy to be the key concern. However, malicious acts were not a key concern. Rather, experienced participants focused on whether information was out of date, sufficiently detailed, or properly local. In this case, more practical concerns about accuracy appeared to override any fears of trolling or fake articles.

Experienced participants tended to list specific pages that they liked on the site when asked about preferred functionality. These were often nexus or navigation pages: recent changes pages, “Things to Do in Denton” page, and “The Square” (which links to dozens of businesses and organizations). Problematic functionality came down to two actions: the interactive map and adding a new page. Both were seen as infrequently used.

Experienced participants expressed a surprising lack of engagement around collaboration and editing. Only one felt confident talking about the editing process or what made for good edits. That said, experienced participants had strong opinions about the need for a good wiki article to have sufficient content, including images, robust text, and links to internal and external pages.


Table 4.2. Experienced Participant Interview Themes

Participant One
  Purpose: Provide information of local interest
  Participant Actions: Look up local information (organizations, clubs, schools)
  Problems: Determining what is local
  Functionality: Edit pages

Participant Two
  Purpose: Connect people, inform about community
  Participant Actions: Look up events, activities, food/drink specials
  Problems: Out of date info, inaccurate info
  Functionality: Links, “Things to Do” page

Participant Three
  Purpose: Inform locals of events, businesses, organizations
  Participant Actions: Find out about organizations, restaurants, events; contribute by correcting pages
  Problems: Vandalism, outdated information
  Functionality: Specific pages (Fry Street, The Square, Denton Laws Made Easy), “Recent changes” page

Participant Four
  Purpose: Make info about an area open to people unfamiliar with it; outline places, events, history
  Participant Actions: Discover new places to visit, events to attend, understand culture
  Problems: Lack of content
  Functionality: Articles to read, embedded links to articles

Participant Five
  Purpose: Media source by the people, avoids bias as much as possible, accurate portrayal of city from perspective of people
  Participant Actions: Read, edit, have fun
  Problems: Educate users on use, get people to edit content
  Functionality: “Recent changes” page

4.3.3 Design Participants

Across both types of design participants, the purpose and activity of a local wiki expanded dramatically. More references to documentation, collaboration, and preservation occurred throughout the interviews. When it came to purpose, sharing information exceeded finding information as a goal. The ability to create a specific kind of knowledge arose from the interviews. Similarly, when it came to discussing “what users do” on the site, the answers moved from finding to building, adding, collaborating, uploading, and other content creation actions. It is worth noting that all design participants happened to list recent wiki editing experience in their surveys, so these responses likely capture that attitude as well.

Design participants noted concerns over accuracy due to group editing, but they also highlighted a more general awareness of participation and organizational concerns. Key questions arose about how a wiki gathers enough contributors to remain viable, as well as about the tendency of any communal project to produce some level of disagreement over content.

When it came to functionality, design participants agreed that links and editing were necessary. That said, design participants were quite split on likes and dislikes within the system. Category tags, the interactive map, and formatting icons were divisive features, often with one participant championing the functionality and another naming it as least liked or used. What unified the responses was a deep listing of the site's functionality even as opinions about discrete elements remained mixed.


Table 4.3. Design Participant Interview Themes

Participant One
  Purpose: Way for people to write their own history
  Participant Actions: Add information, photos
  Problems: Open editing: false information, lies, insults
  Functionality: Interactive map, page editor, tag feature

Participant Two
  Purpose: Community to describe itself
  Participant Actions: Contribute, explore
  Problems: Motivating users
  Functionality: Map, links, edit/create page

Participant Three
  Purpose: Record information
  Participant Actions: Write about what they think is important
  Problems: Participation
  Functionality: Links, formatting menu, templates, create page

Participant Four
  Purpose: Upload what they know, find out about community
  Participant Actions: Connect with community, read, learn, upload
  Problems: Disagreements
  Functionality: Editing a page, adding links, uploading photos, login options, tags

Participant Five
  Purpose: Access knowledge about an area
  Participant Actions: Build, educate, collaborate
  Problems: Deleting information, adding false information
  Functionality: Easy log-ins, editing functionality

4.4 Building the Tasks

Based on the results above, six tasks were designed to capture the concerns of each group. The tasks and the reasoning behind each are explained in the following sections. Note that some of the preliminary results regarding how the tasks were generated, along with a more truncated discussion of new and experienced participant responses, were discussed in a previous article (Trice, 2016).

4.4.1 Task One: Find the Hours for Open Mic Night at The Garage [New]

The opening task for the usability test examines the ability to find a specific event in the LocalWiki Denton. However, the task is complicated because it asks for information that is incomplete in the wiki. The Garage page (Figure 4.1) states the open mic night is Monday but does not give a specific time. This leaves room for participants to make some choices about how to use limited information in the site. Do they accept the limited information or seek additional details outside the wiki?

Figure 4.1. Screenshot of Task 1. The highlighted text reads: “Open Mic Monday every week.”

The task arose primarily as a response to the new participant data from surveys. When asked the purpose of a local wiki, new participants stated the wiki should provide and inform, and they also stated participants should search, read, and get information. There was no feedback about participant contributions, so tasks centered on obtaining information about businesses and events were designed. Task 1 and Task 4 both reflect this goal in different ways. However, since accuracy of information was a concern across all groups, this task incorporates partial information to see what effect it has on participants.


4.4.2 Task Two: Change the Date Found on “The Garage” Page [Experienced]

This task embodies basic editing of the wiki. The participant is asked to edit The Garage page (Figure 4.2) by changing the generic date found on the page. The table for the page has the default entry of “Date, i.e. 1st January, 1900.” The body text of the page also provides a rough year for the last time the bar changed ownership, though the task does not ask participants to be accurate.

Figure 4.2. Screenshot of Task 2. The highlighted text indicates the table entry for date founded.

This task sits at the far edge of experienced use, at the boundary between experienced and design participants. Only two experienced survey respondents said they edited the LocalWiki Denton on a regular basis, and only one had done so recently. However, experienced participants regularly pointed out the need for robust information. Design participants responded more favorably to editing as a regular part of using the wiki.


However, experienced participants did name links and editing as key features for the wiki. This task, then, measures whether there are design issues that keep experienced participants from making the contributions they describe as needed but seem reluctant to undertake.

4.4.3 Task Three: Add a Link from “The Garage” Page to any Other Page [Experienced]

This task asks participants to create a hyperlink on The Garage page (Figure 4.3). The task does not specify how the link should be created, and the wiki has multiple options. The participants experimented quite a bit with the task, as I discuss later.

Figure 4.3. Screenshot of Task 3. The link button can be seen as the first icon to the right of the styles dropdown menu.

Four of the five experienced participants responded to survey questions with responses that touched on the importance of links to navigation and categorization. In addition, new participants highlighted the importance of ease of navigation even if they didn’t specify how that navigation might work. Thus, this task was added as an experienced participant task. Since both experienced participant tasks were editing tasks, I opted to group them together. The link task being the more complicated of the two, I had it immediately follow Task 2 so that participants could rely on what they had just learned while editing.

4.4.4 Task Four: Find the Businesses on The Square [New]

This is a follow-up task to find a category of pages based upon a shared location (Figure 4.4). Two pages contain this information, and so the task also allowed a way to measure how content duplication might shape participant navigation in the wiki. Beyond the two content pages that held this information, the wiki map system could be used to complete this task.

Figure 4.4. Screenshot of Task 4. The Square page is shown with the List of Businesses header highlighted.

The three most common types of content that new and experienced participants expected in the wiki were places, history, and events. The Square is the historic town square of Denton and contains a variety of commercial businesses and cultural venues. As the key commercial, cultural, and governmental area, it seemed a good follow-up search after Task 1’s search task. The complicating factor for this task arose from the multiple places the information could be found. There was also the possibility that the wiki map function could be used as part of the task; however, two of five experienced participants listed the map as the feature they used least on the wiki. So the survey responses suggested it would be worth having a task that evaluated participant navigation choices to see whether any participants included the map in their workflow.

4.4.5 Task Five: Add a New Page for a Location to the Wiki (Can Be a Fake Location) [Design]

This task asks participants to add a new page to the wiki. Adding a page (Figure 4.5) can be a simple task or a complicated task based on what a participant chooses to add. The basic process can be completed in a matter of seconds. More than whether participants completed the task, the question became how they completed the task. Did they add templates? Did they edit the table? Did they document the creation in the comment history? This allowed the task to truly represent a design-level view by asking how many elements each participant incorporated into the page creation process.


Figure 4.5. Screenshot of Task 5. The figure shows the template options when creating a new page.

The creation of new content offered the logical advance from editing an entry. Design participants also had marked differences in what they thought the purpose of a local wiki was supposed to be. They introduced terms like “record, broadcast, and write” as opposed to the more consumption-focused terms of new and experienced participants. Every design participant also listed contributions as part of “What do you do on a local wiki?” Formatting, tags, and categories were also listed as key components by designers, and the most appropriate place to test familiarity with these elements is in the page creation process.

4.4.6 Task Six: Address Any Concerns You Have with the Cranky Goose Page [Design]

This task involved a fake page seeded into the wiki. The Cranky Goose (Figure 4.6) offered a description of a local business with poor grammar and obvious bias. The hostility of the post was intended to trigger concerns about trolling and vandalism. Rather than explain what concerns a participant should have, the task allowed participants to analyze the page and make what corrections they saw fit.

Figure 4.6. Screenshot of Task 6. The highlighted text reads: “Don’t go hear just awful.”

This task had a high conceptual element to it. Four of the new participants surveyed and two of the experienced participants listed inaccurate or hostile content as a primary concern for what could go wrong with a local wiki. Design participants did not see this as an issue, stating that edits would keep such behavior in check. When asked how conflicts over controversial content could be resolved, new participants listed moderators as the solution, while experienced and design participants either stated they had not experienced such conflicts or that conflicts would be resolved by collaborative and serial editing. Thus, this task sought to see what different participant types would do when presented with hostile activity in the wiki.


CHAPTER 5

USABILITY TESTING RESULTS

This chapter outlines the raw findings and basic trends from the study that will be examined in detail in Chapter 6. These results address five key areas: demographic data from the task analysis participants, descriptions of rejected data, how the survey data was used to inform task design, an overview of overall performance in the usability tests, and a qualitative description of emergent task trends and workflows beyond the performance numbers. To help parse the results, several metrics are shown as collective averages for the four groups evaluated (overall, new, experienced, and design participants). These averages are meant to offer an easy way to form initial impressions of group performance before I look more deeply at quantitative and qualitative verification in the analysis. That said, some areas are broken down to include individual participant data where appropriate for the purposes of depth and accuracy. Chapter 6 also delves more deeply into individual participant performance, and the raw data for individual participants is available in Appendix A. The averages displayed in this chapter summarize testing that will be analyzed in Chapter 6; these summarized trends are more informative than simply listing all performance data at the participant level, given that this study is meant to compare classifications of participants even though individual participants might deviate substantially in behavior within a given classification.


In brief, the results reflect the methodology described in Chapter 3. The results in this chapter are gathered from 15 usability tests designed from those surveys. The body of participants is not identical between the two groups, so I provide demographic data for both survey participants and usability test participants in section 5.1. The results do not include the five usability tests that I discarded for reasons explained in section 5.2. It is worth explaining at this stage that those tests were discarded prior to any analysis and synthesis of data, so they did not influence the collection, coding, or analysis of data.

Additionally, it is worth revisiting my key questions for this study to help illustrate why the trends that follow are the ones worth reviewing. The questions in this study are:

1. In what ways do participants describe the platform environment and expectations for that environment?

2. In what ways do participants’ actions align with their expectations for the platform and its environment? In what ways do they not align?

3. In what ways do expectations and alignment vary by the classes of participants?

The analysis will look at how common workflow trends matched the survey data to help define the wiki as a genre in the terms discussed in Chapter 2. However, a deep reading of what participants are attempting to do must inform these performance metrics. New, experienced, and design participants often interpret the steps needed to complete a task quite differently. Many times this was a result of a different critical awareness on the part of experienced and design participants that made them more likely to inflate time on task and click rates by engaging a more complex understanding of the assigned task. However, critical blind spots are not universal to one group. As the results for failure rates in Task 1 indicate (Table 5.7), new participants can also express critical awareness lacking in experienced and design participants, though this is a rarer phenomenon. It is possible that different participants experience the social act of a wiki in different ways, adjusting behavior to match purposes specific to that class of participants’ needs. One element the analysis will address is whether any such split of purpose warrants considering the wiki genre as something split, or layered, based upon participant types.

5.1 Defining the Groups

This section looks at the demographic data collected for survey participants and usability test participants. While the survey results have been reported already in Chapter 4, it is worth exploring some of the demographic trends for both populations side-by-side.

I have binned the data into the four pre-defined groups for both survey participants and test participants: total participants, new participants, experienced participants, and design participants. A general description of the demographic data is provided here, but much of the analysis regarding demographics is saved for Chapter 6.

The demographic data for the surveys focuses on gender and technical experience. The primary goal of the survey was to reach members of each group to help build the task definitions while testing gender representation in the groups. I tracked gender in order to balance the population as much as possible given the known issues with female participation in wikis and wiki studies (Thom-Santelli, Cosley, & Gay, 2009; Lam et al., 2011). Table 5.1 shows the demographic data for the survey results.

Table 5.1. Survey Demographics.

              Men  Women  Used LocalWiki  Used in     Edited in
                          Denton          Last Month  Last Month
OVERALL        12    3         10             7           3
NEW             3    2          0             0           0
EXPERIENCED     4    1          5             3           0
DESIGN          5    0          5             4           3

              Used      Used in     Edited in
              Any Wiki  Last Month  Last Month
OVERALL          15         14          4
NEW               5          5          0
EXPERIENCED       5          4          1
DESIGN            5          5          3

The lack of female respondents in the survey phase resulted in more prolonged and extensive outreach for usability testing. Beyond the gender ratio, I would note again the high rate of wiki use and the low rate of wiki editing experience, particularly in the “Any Wiki” categories.

By comparison, Table 5.2 shows the demographic data for the usability tests.

At this point, the focus changed from past wiki activity to community involvement. The reason for this shift was to verify that those involved in the tests were active community members. Thus, the emphasis evolved from wiki expertise to community expertise. However, I still confirmed that all new participants had not used the LocalWiki and that all experienced and design participants had used the wiki.


Table 5.2. Usability Test Demographics.

                             Age
              Men  Women  18-25  26-35  36-45
OVERALL        10    5      11     2      2
NEW             2    3       4     0      1
EXPERIENCED     3    2       4     1      0
DESIGN          5    0       3     1      1

              I live   I work   I go to    I go to     I go to
              in       in       events in  businesses  school in
              Denton   Denton   Denton     in Denton   Denton
OVERALL          13       12       13          10         11
NEW               5        5        5           5          4
EXPERIENCED       5        3        3           1          3
DESIGN            5        4        5           4          3

While the gender balance was far from ideal, I did balance gender representation at the new and experienced participant levels. Unfortunately, after three months of outreach, no female design participants participated in the study. Outreach to all participants of the wiki across email, Facebook recruitment, and campus solicitation resulted in no responses from any woman who fit the design participant category. While it is possible this population does exist in the LocalWiki Denton community, no such participants self-identified during outreach.

5.2 Refining the Results

In all, 15 surveys were used, and 15 of the 20 usability tests conducted were used. Five usability tests were discarded for three separate reasons. This section will briefly overview those reasons.


Two usability tests were completed and discarded because of the participants’ excessive familiarity with the CMS. At the beginning of this study, I believed designers would be the most difficult demographic to identify and test. However, the outreach methods used (Facebook groups, peer contacts, and university outreach, as discussed in Chapters 3 and 4) resulted in an abundance of design participants. Thus, the usability tests of the LocalWiki creators were not needed for the final data set. Since I tested these participants early in the process, I did have their data and the tests were valid, but I opted to use Denton natives who fit the design participant demographic instead, as these native participants seemed better suited to capture a local perspective on the LocalWiki Denton system than the system creators, who resided outside the community. This is a marked inversion of the interviews, where the designer feedback gave some priority to the creators’ discussion of the system.

Three experienced participant tests were rejected for different reasons. Two tests were rejected for separate complications arising from remote testing. One experienced participant performed a test using Windows Remote Desktop prior to the move to Google Hangout Remote Desktop. This test was tossed to ensure ease of validation for all remote access tests, even though the literature finding synchronous remote testing equivalent to in-person testing is highly favorable (Andreasen, Nielsen, Schroder, & Stage, 2007; Tullis et al., 2002). Still, since validating one test that used a different remote access technology was problematic, I opted to simply run a new test that universalized Google Hangout for remote testing rather than attempt to validate a one-off use of Windows Remote Desktop.

The other experienced participant test discarded during remote testing was due to poor performance of Google Hangout Remote Desktop during the testing session. For reasons that could not be completely identified, the program lagged significantly during that study, and the usability test was abandoned after only two tasks.

Finally, one experienced participant test was tossed because the fake page for Task 6 had not been reset properly prior to the test. Since the page contained information different from what everyone else saw, I discarded the test. The reason for this decision will become clearer in the qualitative analysis, but the core of the matter is that the participant did not have an opportunity to choose the same forms of expression as other participants when interpreting ‘vandalized’ content.

5.3 Defining the Tasks

This section outlines the six tasks arranged from the survey data. It is divided into a description of each task along with a picture of the central work area. While the tasks followed the methodology outlined previously (two tasks for new participants, two for experienced participants, and two for design participants), most tasks are informed by survey results from at least two groups. For example, how new participants and design participants differed in observations about collaboration and trolling informed tasks at both levels. Thus, the responses for each group were not put in absolute silos but evaluated as part of an overall discourse of participants segmented by experience levels.


5.3.1 Task One: Find the Hours for Open Mic Night at the Garage [New]

The opening task for the usability test examines the ability to find a specific event in the LocalWiki Denton (Figure 5.1). However, the task is complicated because it asks for information that is incomplete in the wiki. The Garage page states the open mic night is Monday but does not give a specific time. This leaves room for participants to make some choices about how to use limited information in the site. Do they accept the limited information or seek additional details outside the wiki?

Figure 5.1. Screenshot of Task 1. The highlighted text reads: “Open Mic Monday every week.”

5.3.2 Task Two: Change the Date Found on The Garage Page [Experienced]

This task (Figure 5.2) embodies basic editing of the wiki. The participant is asked to edit The Garage page by changing the generic date found on the page. The table for the page has the default entry of “Date, i.e. 1st January, 1900.” The body text of the page also provides a rough year for the last time the bar changed ownership, though the task does not ask participants to be accurate.

Figure 5.2. Screenshot of Task 2. The highlighted text indicates the table entry for date founded.

5.3.3 Task Three: Add a Link from The Garage Page to any Other Page [Experienced]

The task (Figure 5.3) asks participants to create a hyperlink on The Garage page. The task does not specify how the link should be created, and the wiki has multiple options. The participants experimented quite a bit with the task, as I discuss in Chapter 6. However, it also proved one of the most universally difficult tasks to manage, due largely to the UI of the editing tool.


Figure 5.3. Screenshot of Task 3. The link button can be seen as the first icon to the right of the styles dropdown menu.

5.3.4 Task Four: Find the Businesses on The Square [New]

This is a follow-up task to find a category of pages based upon a shared location (Figure 5.4). Two pages contain this information, and so the task also allowed a way to measure how content duplication might shape participant navigation in the wiki. Beyond the two content pages that held this information, the wiki map system could be used to complete this task.


Figure 5.4. Screenshot of Task 4. The Square page is shown with List of Businesses header highlighted.

5.3.5 Task Five: Add a New Page for a Location to the Wiki (Can be a Fake Location) [Design]

This task asks participants to add a new page to the wiki (Figure 5.5). Adding a page can be a simple or a complicated task depending on what a participant chooses to add. The basic process can be completed in a matter of seconds. More than whether participants completed the task, the question became how they completed the task. Did they add templates? Did they edit the table? Did they document the creation in the comment history? This allowed the task to truly represent a design-level view by asking how many elements each participant incorporated into the page creation process.


Figure 5.5. Screenshot of Task 5. The figure shows the template options when creating a new page.

5.3.6 Task Six: Address Any Concerns You Have with the Cranky Goose Page [Design]

This task involved a fake page seeded into the wiki. The Cranky Goose (Figure 5.6) offered a description of a local business with poor grammar and obvious bias. The hostility of the post was intended to trigger concerns about trolling and vandalism. Rather than explain what concerns a participant should have, the task allowed the participants to analyze the page and make what corrections they saw fit.


Figure 5.6. Screenshot of Task 6. The highlighted text reads: “Don’t go hear just awful.”

5.4 Performance Overview by Participant Groups

Now that I have defined the tasks, this section outlines the quantitative performance of the four groups with regard to individual task performance, SUS scores, and post-task surveys. A qualitative analysis follows this section. Again, the four groups evaluated include overall performance, new participant performance, experienced participant performance, and design participant performance. Data from all six tasks is included below.

5.4.1 Task Performance

The quantitative task performance for each of the six tasks involved time-on-task, success rate, maximum time between input, and total input (mouse clicks). This section breaks down performance into the averages for all four groups across tasks.


Individual participants will be examined more in the analysis section, and individual results for all metrics are available in Appendix A. Standard deviation and significance will also be explained in detail in Chapter 6.

Time-on-task (Table 5.3) offers an overview of relative ease for each of the tasks. That said, as I will discuss in Chapter 6, confusion versus exploration can be tricky to decipher. A task can become less “easy” because a participant opts to do more or to explore the available options more. When time-on-task spikes, the results below can reflect growing confusion, growing exploration, or more detailed upkeep, and these options are not mutually exclusive. As with most of the results in this section, it is important to view them as pieces of the puzzle rather than standalone answers. Note that time-on-task, a simplified version of failure rates, and some of the qualitative issues of Task 1 and Task 6 were discussed in an early publication (Trice, 2016).


Table 5.3. Time-on-Task in Minutes for All Classifications.

               Task 1  Task 2  Task 3  Task 4  Task 5  Task 6
D1             0.75    1.2     1.11    1.8     2.46    2.67
D2             0.47    0.38    2       0.34    4.07    1.67
D3             0.91    0.3     0.47    0.68    2.77    2.65
D4             2.48    4.1     2.87    1.01    1.29    2.42
D5             0.55    1.26    2.12    0.51    1.11    1.73
Average        1.032   1.45    1.71    0.87    2.34    2.23
E1             0.75    0.41    0.53    0.49    0.68    2.62
E2             0.66    2.38    2.81    1.18    9.71    2.18
E3             0.9     1.15    6.24    0.65    3.34    2.53
E4             1.2     1.81    2.31    2.99    2.36    4.48
E5             0.85    1.56    1.81    0.94    1.67    3.76
Average        0.87    1.46    2.74    1.25    3.55    3.11
N1             0.96    1.64    1.52    0.67    0.75    2.04
N2             4.1     5.76    11.52   0.66    10.21   2.81
N3             1.6     1.12    0.56    0.79    0.98    2.2
N4             1.04    1.14    0.75    0.51    0.66    1.03
N5             1.03    1.13    1.42    0.64    3.24    0.96
Average        1.75    2.16    3.15    0.65    3.17    1.81
Minimum        0.47    0.3     0.47    0.34    0.66    0.96
Maximum        4.1     6.09    11.52   2.99    10.21   7.81
Mean           1.22    1.71    2.54    0.92    3.02    2.72
Standard Dev.  0.94    1.53    2.87    0.67    3.02    1.67
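As a check on the summary rows, the per-column mean and sample standard deviation can be reproduced with Python's statistics module. A minimal sketch using the Task 1 column from Table 5.3 (the variable names are my own):

```python
import statistics

# Task 1 time-on-task values (minutes) for all 15 participants, from Table 5.3
task1 = [0.75, 0.47, 0.91, 2.48, 0.55,  # Design
         0.75, 0.66, 0.90, 1.20, 0.85,  # Experienced
         0.96, 4.10, 1.60, 1.04, 1.03]  # New

mean = statistics.mean(task1)  # overall mean across all classifications
sd = statistics.stdev(task1)   # sample standard deviation

print(round(mean, 2), round(sd, 2))  # → 1.22 0.94, matching the table
```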

While more time will be spent in Chapter 6 analyzing this information, note from the task descriptions that the more open-ended Tasks 4-6 produce more complicated results, especially with experienced and design participants often taking more time than new participants. Most groups average less than two minutes per task across the tasks, with new participants taking more time on average. While the sample size and variance in participant performance within groups makes statistical significance unobtainable, the descriptive comparisons of time-on-task are broadly in line with what should be expected as experience in a platform increases.

Table 5.4 demonstrates the number of task failures across groups and tasks. Task failure was an infrequent issue in this study outside of Task 1, though task rebounding was more common (potential failure during the task that was corrected after system feedback indicated error, marked as completed with difficulty). Errors were also tracked and will be examined more closely in the workflow descriptions.

Table 5.4. Rates of Task Failure, Success, and Success with Difficulty (S = success, S- = success with difficulty, F = failure).

     T1  T2  T3  T4  T5  T6
N1   S   S   F   S   S   F
N2   S   S-  S-  S   S-  S
N3   F   S   F   S   S   F
N4   F   S   S-  S   S   F
N5   S   S   S-  S   S-  S
D1   F   S   S   S-  S   S
D2   F   S   S-  S   S   S
D3   F   S   S   S   F   S
D4   F   S   S   S   S   S
D5   F   S   S-  S   S   S
E1   F   S   S   S   S   S
E2   F   S-  S   S   S-  S
E3   F   S   F   S   S   S
E4   F   S   S   S   S   S
E5   F   S   S   S   S   S
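Failure counts per task can be tallied directly from the outcome codes; a sketch (the dictionary simply transcribes Table 5.4, with S, S-, and F as in the table):

```python
# Outcome codes per participant (S = success, S- = success with
# difficulty, F = failure), transcribed from Table 5.4.
outcomes = {
    "N1": "S S F S S F", "N2": "S S- S- S S- S", "N3": "F S F S S F",
    "N4": "F S S- S S F", "N5": "S S S- S S- S",
    "D1": "F S S S- S S", "D2": "F S S- S S S", "D3": "F S S S F S",
    "D4": "F S S S S S", "D5": "F S S- S S S",
    "E1": "F S S S S S", "E2": "F S- S S S- S", "E3": "F S F S S S",
    "E4": "F S S S S S", "E5": "F S S S S S",
}

failures = [0] * 6
for codes in outcomes.values():
    for i, code in enumerate(codes.split()):
        if code == "F":
            failures[i] += 1

print(failures)  # failures for Task 1 through Task 6 → [12, 0, 3, 0, 1, 3]
```

The tally makes the pattern in the prose concrete: failure clusters in Task 1, with only scattered failures elsewhere.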

Next, Figure 5.7 shows maximum wait time between inputs during a task. This information helped identify which tasks required increased reflection on the part of the participants by comparing the longest pause during each task for each participant. This metric might begin to point towards a trend of more reflective action on the part of experienced participants in later tasks. The reason for choosing max time, rather than an average of delays during the task, was to highlight prolonged periods of indecision.

Figure 5.7. Max Time between Inputs. This figure shows the average maximum wait time between inputs (in seconds) per task for the Overall, New, Experienced, and Design groups.
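The metric itself is simple to compute from logged input timestamps; a minimal sketch (assuming each task produces a list of input event times in seconds):

```python
def max_gap(timestamps):
    """Longest pause between successive inputs, in the same units
    as the timestamps (here, seconds)."""
    ts = sorted(timestamps)
    return max(later - earlier for earlier, later in zip(ts, ts[1:]))

# e.g. clicks logged at 0s, 2s, 9s, and 10s into a task
print(max_gap([0, 2, 9, 10]))  # → 7
```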

Finally, Figure 5.8 shows total mouse input for each task. This is a key comparative tool alongside movement and time-on-task. Low input and high movement can suggest inaccurate searching, which can be checked against video performance, whereas high mouse input and high time-on-task can suggest higher levels of productivity in expanding task progress or searching. These metrics are not certainties but will be part of the exploration in Chapter 6. The figure also indicates the importance of standard deviation, explored in Chapter 6, when defining group characteristics.

Figure 5.8. Total Mouse Clicks. This figure shows the average total mouse clicks across tasks for the Overall, New, Experienced, Design, and New+ groups.

Figure 5.8 has two new participant columns because of an extreme outlier in New Participant 2. Removing the participant from the statistics has a dramatic effect upon the group average. While New Participant 2 underperformed dramatically throughout, the difference here was substantial due to a very specific strategy choice of the participant discussed later. This strategy choice (using the wiki Participant Guide) complicates matters because it is far from a “wrong” choice for a new participant. That said, I do look at the effect of New Participant 2 on standard deviation in Chapter 6. For now, it is sufficient that the New bar shows the average without the outlier and the New+ bar includes the outlier to illustrate the basic issues in discussing group averages versus individual performance.
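The effect of dropping an outlier on a small-group average can be illustrated with the new participants' Task 3 time-on-task from Table 5.3 (per-participant click counts are not tabulated in this chapter, so the times stand in to make the same point):

```python
import statistics

# Task 3 time-on-task (minutes) for the new participants, from Table 5.3;
# N2 (11.52) is the outlier discussed above.
new_task3 = {"N1": 1.52, "N2": 11.52, "N3": 0.56, "N4": 0.75, "N5": 1.42}

with_outlier = statistics.mean(new_task3.values())
without_outlier = statistics.mean(
    t for p, t in new_task3.items() if p != "N2"
)

print(round(with_outlier, 2), round(without_outlier, 2))  # → 3.15 1.06
```

One participant roughly triples the group mean, which is why Figure 5.8 reports both a New and a New+ column.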


5.4.2 SUS Scores

SUS scores were used to evaluate individual and group satisfaction with the system as a whole. The overall SUS score for each participant (Figure 5.9) is provided below. Once again, New Participant 2 represents a strong outlier, though the system scores well overall. The specific results are recorded below in Table 5.5.

Figure 5.9. SUS Score. This graph charts SUS scores for all participants.


Table 5.5. SUS Scores for each participant.

Participant     SUS Score
Design 1        85
Design 2        80
Design 3        85
Design 4        90
Design 5        92.5
Experience 1    80
Experience 2    67.5
Experience 3    75
Experience 4    100
Experience 5    90
New 1           77.5
New 2           30
New 3           70
New 4           95
New 5           87.5

Overall, the SUS scores reflect slightly lower satisfaction among the new participants, general (though varied) satisfaction among the experienced participants, and high satisfaction among the design participants. While the trend closely follows experience levels, the range of variation in both new and experienced participants is worth noting.
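For reference, an individual SUS score is conventionally derived from ten five-point Likert items: odd-numbered (positively worded) items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch (the response values shown are hypothetical, not participant data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses,
    ordered item 1 through item 10."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible → 100.0
print(sus_score([3] * 10))                         # all neutral → 50.0
```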

5.4.3 Post-task Surveys

The post-task surveys for each task involved four questions:

• Q1: How easy was it to complete this task? (1 Very Difficult, 2 Difficult, 3 Average, 4 Easy, 5 Very Easy)

• Q2: How much easier would this task have been if you had more experience? (1 Not at All, 2 Slightly Easier, 3 Somewhat Easier, 4 Easier, 5 Much Easier)

• Q3: How confident are you that you could complete this task at a later date? (1 Very Doubtful, 2 Doubtful, 3 Average, 4 Confident, 5 Very Confident)

• Q4: How satisfied are you with the site after this task? (1 Very Dissatisfied, 2 Dissatisfied, 3 Average, 4 Satisfied, 5 Very Satisfied)

Task 1 results (Figure 5.10) are primarily interesting because of the high failure rates and high satisfaction scores in experienced and design participants. New participants seemed more sensitive to incomplete information in this task.

Figure 5.10. Survey Results for Task 1. This displays average survey results for finding open mic night hours.

Task 2 results (Figure 5.11) show a bit more variance than Task 1 in participant reflection, especially with regard to anticipated improvement in the future (Q2).


Figure 5.11. Survey Results for Task 2. This displays average survey results for editing a table.

Task 3 results (Figure 5.12) show further drops in participant confidence, and a matching rise in most groups’ expectation of better future results. Design participants lag in expecting better results, however.

Figure 5.12. Survey Results for Task 3. This displays average survey results for adding a link.


Task 4 results (Figure 5.13) show the strongest overall results in satisfaction, possibly indicating strong learning trends from Task 1 or increased comfort due to complete information.

Figure 5.13. Survey Results for Task 4. This displays average survey results for finding businesses on the Square.

Task 5 results (Figure 5.14) show mixed results from a complicated task, but one where everyone managed to succeed. The relative dip in repeatability (Q3) compared to other tasks is thus a bit surprising.


Figure 5.14. Survey Results for Task 5. This displays average survey results for creating a new page.

Task 6 results (Figure 5.15) show an extremely wide gap between new participant satisfaction and that of the other two groups. New participants articulated a distinct discomfort with their performance on this task (Q4).


Figure 5.15. Survey Results for Task 6. This displays average survey results for fixing a biased page.

5.5 Performance in Context

In addition to the metrics above, I tracked qualitative data about the participant experience. This data was partially specialized to each task but also included mapping routes throughout each task. This section first covers the qualitative data for each participant. The qualitative data is based on the categories described in the previous chapter, and the importance of each category will be covered in the following sections as well. This qualitative data serves to add richer context to the numbers cited above, in particular by highlighting how quantitative performance was shaped by participant choices in navigating the wiki layout versus issues with the user interface.

5.5.1 Qualitative Metrics for Task One

As Task One was a straightforward hunting exercise, I tracked four key pieces of information in observing the participants:

• How did they seek the information?

• If they used the search bar to find information, then what searches did they enter?

• Did they recognize that the information in the wiki was incomplete?

• What was their page path through the wiki?

The first question is straightforward: all participants relied upon the search bar to find the requested information. However, searches did vary slightly. As Table 5.6 indicates, new participants were more likely to attempt more than one search, though not exclusively so. It is also worth noting that all participants but one effectively locked on to using the bar’s name as a key search term during this task.

Table 5.6. Search terms used in Task One.

Participant  Search Term(s)
D1           the garage
D2           the garage
D3           the garage
D4           the garage, Open Mic
D5           the garage
E1           Garage
E2           the garage
E3           the garage
E4           the garage
E5           the garage
N1           The garage fry
N2           garage fry street, garage on fry, garage on fry street
N3           open mic The Garage Fry street, The Garage hours Denton Fry Street
N4           open microphone night, the garage
N5           garage fry street


Once participants settled on The Garage page, which mostly occurred in less than a minute as related in Table 5.3, a distinctive pattern emerged between those who accepted the information in the wiki as complete and those who did not. As Table 5.7 shows below, only 5 of 15 participants noticed that the wiki had incomplete information. Of those 5, 4 were new participants. This finding was one of the most striking disparities in the study.

Table 5.7. Participants who noted missing information in Task One.

Participant  Recognized Information as Incomplete?
D1           N
D2           N
D3           N
D4           Y
D5           N
E1           N
E2           N
E3           N
E4           N
E5           N
N1           Y
N2           Y
N3           Y
N4           N
N5           Y

However, recognizing that the information was incomplete did not necessarily lead to task success. Only 3 of the 5 participants who noted the information was incomplete went off-site to gather the new information. All of these participants were new participants.

In Table 5.8, the workflow for Task One remains relatively stable for Experienced and Design participants, moving from the Main Page to a Search function to the Results page to The Garage wiki page. While multiple participants erred (E) in using the search function, only one participant failed to make it to The Garage page before ending the task.

Table 5.8. Participant Workflow for Task One.

Participant  Path
D1  Main>Search Bar>Results>Garage Link>Garage
D2  Main>Search Bar>Results>Garage Link>Garage
D3  Main>Search Bar>Results>Garage Link>Garage
D4  Main>Search Bar>Results>Garage Link>Garage>(E)Search (Open Mic)>Results>Denton Pride>Results
D5  Main>Search Bar>Results>Garage Link>Garage
E1  Main>Search Bar>Results>Garage Link>Garage
E2  Main>Search Bar>Results>Garage Link>Garage
E3  Main>Search Bar>Results>Garage Link>Garage
E4  Main>Search Bar>Results>Garage Link>Garage
E5  Main>Search Bar>Results>Garage Link>Garage
N1  Main>Search Bar>Results>Garage Link>Garage>Garage Website>Open Mic Night
N2  Main>Search>Results>Search>Results>Search>Results>Main Page>(E)Fry Street>The Garage>The Garage Website>Open Mic Page
N3  Main>Search Bar>Results>Garage Link>Garage>(E)Search>Results
N4  Main>Search Bar>Results>(E)Main>Results>Garage Link>Garage
N5  Main>Search Bar>Results>Garage>Garage Website>Open Mic Night
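Workflow strings like those in Table 5.8 can also be compared quantitatively by counting steps and flagged errors; a sketch (`path_stats` is my own helper, and the (E) error-marker convention follows the table):

```python
def path_stats(path):
    """Return (number of steps, number of (E)-flagged errors) for a
    '>'-delimited workflow path, as a rough proxy for navigation effort."""
    steps = path.split(">")
    errors = sum(step.startswith("(E)") for step in steps)
    return len(steps), errors

# Two paths transcribed from Table 5.8
print(path_stats("Main>Search Bar>Results>Garage Link>Garage"))            # → (5, 0)
print(path_stats("Main>Search Bar>Results>(E)Main>Results>Garage Link>Garage"))  # → (7, 1)
```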

5.5.2 Qualitative Metrics for Task Two

In Task Two, I examine two core elements of the task: what errors were committed, and what workflow the participants used. In this case, participants were trying to perform a basic edit by changing a date on the page and saving the change. As Table 5.9 shows, errors were common to most participants, but the errors were relatively minor. Many participants assumed that the save button was at the top of the input area like the other edit icons rather than at the bottom. Additionally, a number of experienced participants did not save the changes made, though in recollective recall this was universally explained as not wanting to make a permanent, incorrect change to the live version of the wiki.

Table 5.9. Errors in Task Two.

Participant  Errors
D1           None
D2           Did not save
D3           Did not save
D4           None
D5           Searched top for save
E1           Did not save
E2           None
E3           None
E4           Went to top first to save
E5           Went to top first to save
N1           Went to top first to save
N2           Went to top first to save
N3           Went to top first to save, clicked edit, then to bottom
N4           Went to top first to save
N5           None

Table 5.10 shows the workflow used by all participants in Task Two. Two participants (E2 and N2) took circuitous routes through the process, and D4 wanted to confirm the change on the business's actual website. That said, across all groups, the participants presented a direct and unified workflow supported by a high task success rate and strong confidence in their ability to edit a page.


Table 5.10. Workflow for Task Two.

Participant  Workflow
D1  Edit Button>Edit Date In Table>Comment>Save
D2  Edit Button>Edit Date In Table
D3  Edit Button>Edit Date In Table
D4  Edit Button>New Tab>Google Garage>Checks Garage Page>Googles (Garage Open Mic)>Edit Date In Table>Comment>Save
D5  Edit Button>Edit Date In Table>Save
E1  Edit Button>Edit Date In Table
E2  Edit Map Button>Back Out>Edit Page>Cancel (Delay In Load)>Edit Page>Change Date>Save
E3  Edit Button>Edit Date In Table>Save
E4  Edit Button>Edit Date In Table>Save
E5  Edit Button>Edit Date In Table>Comment>Save
N1  Edit Button>Edit Date In Table>Save
N2  Garage Website>Main>Create New Page>Main>Bars>The Garage>Edit>Save
N3  Edit Button>Edit Date In Table>Save
N4  Edit Button>Edit Date In Table>Save
N5  Edit Button>Edit Date In Table>Save

Additionally, three participants left comments about their edit in the edit comment section. Two design participants and one experienced participant completed this additional step.

5.5.3 Qualitative Metrics for Task Three

In Task Three the participants were asked to make a more complicated edit by adding a link to another page. As shown in Table 5.11, this resulted in a combination of errors around both adding the link and saving the changes. This time recollective recall showed participants had simply forgotten to save the changes, as they expressed no concerns about adding a new (and proper) link to the page.


Table 5.11. Errors in Task Three.

Participant  Error: Adding Link                                              Error: Save Menu
D1           None                                                            None
D2           Attached File, Not Link                                         Went To Top To Save
D3           None                                                            Did Not Save
D4           Hovers Over 5 Icons Before Finding Link                         None
D5           None                                                            None
E1           Tries 3 Icons                                                   Did Not Save
E2           None                                                            None
E3           Scans Links, Clicks Wrong Button (3 Times), Failed To Add Link  None
E4           None                                                            Did Not Save
E5           None                                                            None
N1           Failed To Add Link                                              None
N2           None                                                            None
N3           Failed To Add Link                                              Did Not Save, Realized Immediately After
N4           Scans Several Icons To Find Link                                Did Not Save
N5           Scans Icons                                                     None

As shown earlier, this task also had a higher failure rate than Task Two, with three participants failing to add links whether or not they saved after completing the task.

Table 5.12 takes a look at the workflows for this task.


Table 5.12. Workflow for Task Three.

Participant  Workflow
D1  Edit>Highlight>Add Link Button>Comment>Save
D2  Edit>Page Source>Copy And Paste Link In Page Source>Review In Edit Mode>Add With Link Button>Save
D3  Edit>Retyped In Existing URL
D4  Search (Bars In Denton)>Results>Bars Page>Search (Mad World Records)>Results>Mad World Records>Edit Mad World>Highlighted Words>Add Link Button (The Garage)>Save
D5  CTRL+Click Lucy Lou Page For New Tab>Copy Lucy Lou URL>Edit>Link Button>Insert URL>Cancel>Edit>Add New Line>Highlight New Line>Link Button>Paste URL>Save>Follow Link
E1  Edit>Link Button>Types Link In Field (Http)
E2  Edit>Typed Url>Deleted>Typed Url>Link Button (Url)>Deleted Typed Url>Save
E3  Edit>Type Text>Right Mouse (No Option)>Delete Text>Examine Other Links>Type URL>Delete>Link Button>Embed Media Button>Embeds Url As Media>Saves>(Fail)
E4  Edit>Type URL>Highlight>Link Button>Ext URL
E5  Edit>Edit Existing Link>Type Text>Highlight>Link Button>Comment>Save
N1  Edit>Copy Link>Paste Link>Copy And Paste Filter>Save (Fail)
N2  Garage>Garage Website>Edit>Link Button>Cancel>Edit>Highlight Cool Beans>Button>Cancel>Back Out Of Edit>Garage Website>Contact Page>Garage Wiki Page>Edit>Link Button>Paste Contact Page>Cancel And Leave Page>Edit>Click On Seed Link>Info>Edit>Main Page>Wiki Guide>Main>Bars>The Garage>Edit>Highlight>Button>Test>Save>Clicks Link>Main Garage Web Page
N3  Edit>Type URL In Table
N4  Edit>Highlight>Link Button
N5  Edit>Type URL>Save>Edit>Highlight And Button>Save

As shown above, the workflows were confused across many of the participants and across all groups. While the Design group all succeeded, three created overly extensive workflows. Similar issues happened in other groups, with the addition of task failures by pasting URLs rather than embedding them or by embedding files rather than links. Across the board, participants struggled to identify the link icon.

5.5.4 Qualitative Metrics for Task Four

Task Four is largely a repeat of Task One except that participants are given a generic target (find businesses on The Square) rather than asked to find a specific page. Multiple pages contain the list of businesses, but I wanted to see who would use the search versus the interactive map and how they would locate the businesses. All participants completed the task, and relatively quickly (see Tables 5.3 and 5.4). However, a few participants took much longer than the average.

Table 5.13. Workflow for Task Four.

Participant  Workflow
D1  Clicks On Map>Attempts To Click Known Location Without Symbols>Search Bar>Results>The Square>Stops At List Of Businesses
D2  Search Bar>Results>The Square>Stops At List Of Businesses
D3  Search Bar>Results>The Square>Stops At List Of Businesses
D4  Search Bar>Results>The Square>Stops At List Of Businesses
D5  Search Bar>Results>The Square>Stops At List Of Businesses
E1  Search Bar>Results>The Square>Stops At List Of Businesses
E2  Search Bar>Results>Sides Of The Square>Stops At List Of Businesses
E3  Search Bar>Results>The Square>Stops At List Of Businesses
E4  Search Bar>Results>The Square>Shopping>Back To The Square>Stops At List Of Businesses
E5  Search Bar>Results>The Square>Stops At List Of Businesses
N1  Search Bar>Results>The Square>Stops At List Of Businesses
N2  Main>Click>Square>End
N3  Search Bar>Results>Square>Shopping>Back Button>Square>Sides Of Square
N4  Main>Search (The Square)>Results>The Square>Ends At Businesses
N5  Main>Search>Results>The Square>Sides Of The Square


One Design participant attempted to use the map but failed to successfully navigate from the map to The Square, eventually falling back upon the Search option. Participant E4 used a successful search but went to a general Shopping page first before trying a second search to find a correct page. Only one participant (N2) noticed that The Square linked from the Main Page and followed it directly without needing to perform a search.

5.5.5 Qualitative Metrics for Task Five

Task Five asked participants to create a new page. The instructions left participants a fair amount of room to experiment if they chose. Table 5.14 demonstrates which features participants used beyond simply creating a page. Four used a specific page template, ten added body text, one left a page comment, three added a table, and one added a map location.


Table 5.14. Features Used in Task Five. Features tracked: Offered Template, Edited Template, Body, Tags, Commented, Table, Map.

N1  X
N2  X
N3
N4  X
N5  X
E1  X
E2  X X X
E3  X X X X
E4  X
E5  X X
D1  X X X X
D2  X X X X
D3
D4  X X
D5  X

The workflow in Table 5.15 helps explain why some participants were offered a template and others were not. If a search turned up a null result for a page, then the option to create a page with a template was offered. If participants attempted to create a page from a new link, a template was not offered. While the task resulted in some of the most extensive trial and error of the study, only one participant failed to create a new page (D3), and that participant simply linked two existing pages together.


Table 5.15. Workflow of Task 5.

Participant  Workflow
D1  Search (Happy Place)>Create Page Button>Place Template>Edits Page (Address, Contact, Website, Body)>Save Changes
D2  Empty Create Page Click/Search>Types Entry And Clicks Button>Creates Page (No Template Offered)>Backs Out To The Square>Goes To The Banter>Edit The Banter>Backs Out>Searches Fortress>Create Page>Clicks Place Template>Edits (Address, Website, Body)>Save>Edit Page>Change Page Name>Save
D3  Edit The Square>Open New Tab>Go To Main Page>Search (Subway)>Results>Go Back To First Tab>Add Subway To Square Businesses>Highlight Add Link (Subway)>Go To Existing Subway Page>Edit Subway Page>Clicked Add Link Without Text (Error)>Highlighted Sonic Addition On Subway Page To Add Link (Sonic)>Save
D4  Empty Create Page Click/Search>Types Entry And Clicks Button>Creates Page (No Template Offered)>Edits Body>Comment>Save
D5  Empty Create Page Click/Search>Types Entry And Clicks Button>Creates Page (No Template Offered)>Edits Body>Comment>Save>Search (New Page)>Goes To New Page
E1  Empty Create Page Click/Search>Types Entry And Clicks Button>Creates Page (No Template Offered)>Edits Body
E2  Search (Tree Climbing)>Create Page Button>Create With No Template>Edits Page (Address, Contact, Website, Body)>Save Changes>Create Map (Slow Load)>Goes To Main Map>Back To Edit Map>Map
E3  Empty Create Page Click/Search>Searches (Terrill Hall)>Results>Create Page (Terrill Hall)>Chooses Place Template>Edits Body, Location, Website>Save
E4  Empty Create Page Click/Search>Types Entry And Clicks Button>Creates Page (No Template Offered)>Edits Body>Save>Search>Results>Moose Burger
E5  Main Page>Search Westeros>Create Westeros>Place Template>Save
N1  Empty Create Page Click/Search>Types Entry And Clicks Button>Creates Page (No Template Offered)>Edits Body>Save
N2  The Square>Edit>Main>Wikiguide>Main>Edit>Cancel>Search>Denton Ballet Academy>Main>Recurring Events>Weekly Events>Main>Edit>Cancel>Denton>Create Link For Federal Civil Defense Administration>Save>Create New Page>Federal Civil Defense Administration>Save
N3  Empty Create>Create Pet Palace>Save
N4  Main>Search>Results>Create Page>End
N5  Sides>Search (Square)>Results>Square>Tag (Cancel)>Edit>Info>Search>Results>Sides>Edit>Back To The Square>Search/Create Page>Create New Page>Sam’s>Edit Body>Save

5.5.6 Qualitative Metrics for Task Six

Routinely, new participants would make one type of edit on Task Six and then end the task (Table 5.16). This stands in stark contrast to the other two groups, where nine participants made at least two types of edits and four participants made three or more types of edits. It is also noteworthy that the most common new participant edit was to add a comment. New participant recollective recalls also stated they would like a comment section for voicing opinions, as discussed above. New participants expressed no concerns about the difficulty of editing but simply chose not to make edits themselves.

Experienced and design participants had no such issue, routinely fixing grammar, removing inappropriate content, and adding new content. While two experienced participants also expressed interest in moderator-style comments, only one allowed this to stop them from making additional changes. Four of the ten experienced and design participants referred to dealing with the opinion expressed on the page as the main concern in Task Six during recollective recall, though some believed the issue was a lack of specificity about the bad service and others felt it was wrong due to bias.


Table 5.16. Features Edited During Task Six. Features tracked: Fixed Grammar, Fixed Table, Deleted Content, Added Comment, Added Content, Added Tag.

N1  X
N2  X X
N3  X
N4  X
N5  X
E1  X X X X
E2  X X
E3  X
E4  X X X X
E5  X X X
D1  X X X X
D2  X X
D3  X X
D4  X X X
D5  X X

As Table 5.17 demonstrates, the workflow for Task Six was fairly consistent across groups. The only significant deviation was in whether participants made use of the comment function. Again, though, the comment function was used not for its intended purpose of making notes about changes but to call attention to issues with the page. Participants who used comments imagined a moderator who would deal with the issues in the page.


Table 5.17. Workflow for Task Six.

Participant  Workflow
D1  Search>Results>Goose>Edit (Hours, Payment, Body)>Comments>Save
D2  Search>Results>Goose>Edit (Hours, Website, Payment, Body)>Save
D3  Search>Results>Goose>New Tab>Google Cranky Goose>Returns To Tab>Checks Info History>Edit (Body)>Save
D4  Search>Results>Goose>Edit (Hours, Payment, Body)>Comments>Save
D5  Search>Results>Goose>Edit (Hours, Payment, Body)>Save
E1  Search>Results>Goose>Edit (Hours, Phone, Website, Payment, Body)>Save
E2  Map>Results (Cranky Goose)>Cranky Goose>Edit (Hours, Grammar, Body)
E3  Search>Results>Cranky Goose>Started To Add Tags (Stopped)>Edit Body>Save
E4  Search>Results>Cranky Goose>Edit (Hours, Payment, Body)>Save
E5  Search>Results>Cranky Goose>Edit (Hours, Established, Payment, Body)>Edit Map>Cancel>Save
N1  Search>Results>Cranky Goose>Info>Edit>Cranky Goose>Edit (No Changes)>Comment>Save
N2  Back Create Page>Back Denton>Back To Main>Search>Results>Cranky Goose>Edit>Comment>Save>Info>Edit (Website)>Save
N3  Search>Results>Cranky Goose>Tag (Cancel)>Info>Edit>Cranky Goose>Edit (No Changes)>Comment>Save
N4  Search>Results>Cranky Goose>Add Tag
N5  Search>Results>Cranky Goose>Edit (Body)>Save

The next chapter takes the elements described here and analyzes what they reveal about literacy and naturalization within the Denton LocalWiki. Specifically, the chapter explores how the perceived platform described in the participant interviews maps onto the experiences observed in the usability tests. This process includes examining the functions and features used in the usability tests, the errors made by participants, and the conflicts between workflows and participant descriptions of the platform.


CHAPTER 6

AN ANALYSIS OF THE DENTON LOCALWIKI In this chapter, I review both how participant groups defined their networks in the interviews and how those participant groups redefined the network when pursuing specific activities within the DentonWiki. In doing so, I seek to highlight both how groups of participants perceive their networks and how movement around specific activities encourages participants to rewrite these networks as they move from anticipation to action. I then compare the perceived networks with the experienced networks to determine differences between tasks and groups to better illustrate a full ecology of participant activity and actors across participants and the platform.

In comparing these areas, I highlight accountability within the platform by acknowledging agents within the network that lead to participant success.

Additionally, I seek to highlight unaccountable elements that lead to failure, as well as agents of instability perceived by participants that do not reflect the platform as designed.

The comparison between accountable and unaccountable agents will provide a means to discuss how effectively different classes of participants oscillate between social and computational goals within the system.

6.1 Review of Participant Classes

The first step in my analysis is to review how the three participant classes described their perceived networks across the interviews for each class. To do this, I used a concept coding process to first highlight the agents and activities mentioned within each group. First, I divided the responses into six high-level categories based upon the questions: purpose, technical features, social features, content, problems, and benefits.

I then looked for shared concepts within the six categories, taking the concepts shared by at least three respondents as important to the perceived activity of the group. This process allowed for a deeper dive into the elements described in Chapter 4.
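The threshold rule described above (a concept counts as a group trend only when at least three respondents in a class mention it) can be sketched as a simple tally. This is only an illustrative sketch; the concept labels and response sets below are hypothetical stand-ins, not actual participant data from the study.

```python
from collections import Counter

# Hypothetical coded responses: each participant maps to the set of
# concepts mentioned in their interview answers for one category.
responses = {
    "N1": {"search", "moderator", "accuracy"},
    "N2": {"search", "navigation", "accuracy"},
    "N3": {"search", "moderator", "local places"},
    "N4": {"navigation", "moderator", "accuracy"},
    "N5": {"search", "local places"},
}

# Count how many respondents mention each concept, then keep only
# concepts shared by at least three respondents as group trends.
counts = Counter(concept for concepts in responses.values() for concept in concepts)
trends = sorted(c for c, n in counts.items() if n >= 3)
print(trends)  # → ['accuracy', 'moderator', 'search']
```

Concepts mentioned by only one or two respondents (here, "navigation" and "local places") fall below the threshold and are excluded from the trend summary.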

6.1.1 New Participants As explained in Chapter 4, new participants largely listed broad concerns that depicted the activities and concerns of consumers of information. They matched the idea of lurkers and readers in many ways by focusing on how one found or searched for information as central actions, as opposed to content creation and management concerns. Table 6.1 overviews the frequency of these responses within the new participant group.

New Participants viewed the purpose of a local wiki as a place to provide consolidated, searchable local information. The idea of information consumption dominated their understanding of the purpose, with concepts of "finding" local businesses, events, and locations. The actions that drove their concerns were primarily about navigation; searching and finding appeared across the responses. Participants described the site as a searchable concentration of this local information.

When it came to technical functionality, search again dominated concerns, with new participants focusing upon navigation and the structure of information via hyperlinks and article formatting. Notably, moderation also showed up frequently within new participants' expectations of functionality. It occurred primarily as an outside agent. While one interviewee referred to the ability "to flag misinformation for review", three others referred to "moderators" as a role within the wiki network.

Social features for New Participants did introduce a variation from the purpose of the site. Where the purpose of the site focused on consumption and navigation, New Participants spoke frequently about the peer-reviewed and collaborative nature of the information on a local wiki. They did not supply explicit actors around these concepts, but they did articulate the actions of collaboration and review as central to the nature of the information on the wiki. Terms used to describe this process included "peer review", "collaboration", and even "fact-checking". No specific methods for this process were discussed, but the terms themselves did arise.

Much like the description of purpose, content was described in two core ways: accurate information and local information. New Participants focused heavily upon the idea that information should be accurate, with the phrases "accurate information" and "factual information" as their primary modifiers for content in the wiki. Types of content were fairly varied, though all agreed it should be local. Not surprisingly, these participants saw the wiki content broadly in terms of local information, with little shared specificity for the types of information they might encounter beyond the local modifier. While various respondents mentioned general concepts like events, places, traditions, and things, there was no larger overlap across new participants in their responses.

The main problems New Participants discussed were also based upon readership, focusing on concerns about malicious information (joke articles, trolls, negative reviews). Those not concerned with malicious information still focused on the quality of information (inaccuracies and embellishment). Participants did not directly describe any malicious actors, but they implied the existence of actors who might create false information intentionally more than unintentionally.

Benefits largely mirrored the concerns of purpose. Participants highlighted the value of local and specialized information. Two participants also discussed the value of accurate and vetted information.


Table 6.1. Common Concepts for New Participants I. Columns: Purpose, Technical Features, Social Features; rows: participants N1–N5 plus a trends row. Trends: Purpose (Actions: Search, Inform; Concepts: Local community, Places); Technical Features (Search, Edit, Moderator, Ease of Navigation); Social Features (Reviewed Information). [Participant quotations not recoverable from extraction.]


Table 6.2. Common Concepts for New Participants II. Columns: Content, Problem, Benefit; rows: participants N1–N5 plus a trends row. Trends: Content (Accurate Information; Local Places, Events, Traditions); Problem (Inaccurate Information, Malicious Actors); Benefit (Accurate Information Accessible to People). [Participant quotations not recoverable from extraction.]


6.1.2 Review of Experienced Participants Experienced participants followed many of the trajectories of the new participants (see Tables 6.3 and 6.4), but they offered more specificity within categories and a wider range of actions. Essentially, experienced participants discussed specific pages and a slightly wider range of actions specific to the wiki. Experienced participants relied upon the verbs of inform and provide, much like new participants, but they expanded to ideas of connection and making more than new participants did. Search also fell out of use as a staple of action for experienced participants. Editing and discovery became more of a focus, and problems moved fully into concerns about a lack of content or outdated content rather than malicious actors.

Experienced participants described the wiki as a place to provide information and inform local residents, much like new participants. They also made more observations about connection to local residents than new participants. Participants referred to the wiki as serving the community and as presenting the perspective of the people in the community. Additionally, experienced participants focused upon activities and events as central to the wiki's purpose.

Not surprisingly, experienced participants expressed more knowledge about the specific pages of the Denton LocalWiki. Three participants referred to specific pages in the wiki, while two participants referred explicitly to the Recent Changes page. Hyperlinks and editing were also referred to by three of the five experienced participants as important technical features. Only one experienced participant highlighted search as a key element.


The social features highlighted by the experienced participants were fairly diverse, though they revolved primarily around acts of discovery and entertainment. Two participants noted that articles might be "hilarious" or that one could use the wiki to "have fun." Discovering new places and finding out "about organizations, restaurants, and events" also appeared, as well as the common aims of providing information and informing people about the community. Ideas around sharing culture and community connection appeared as well.

The type of content that experienced participants expected focused primarily around events and organizations. Three of the five respondents listed events, which were described as "upcoming" or as "things to do"; the nature of the events was a bit vaguer than that of organizations. Two of the five participants highlighted organizations, and those two also included narrower, more specific categories like businesses, clubs, and schools. A third participant who did not specify organizations did include drink and food specials as part of their expected activities, so these participants implicitly referred to businesses.

Four of the five experienced participants concentrated on accuracy of information as a core problem, while one participant commented on the difficulty of people understanding "what it (the wiki) is". Outdated information and incorrect/vandalized information were the most common concerns when it came to accuracy. As opposed to new participants, only one experienced participant raised concerns about malicious actors (vandalism) directly. A second did raise concerns that unrestricted editing might lead to incorrect information, however. Thus, outdated information and inaccurate information were represented equally among the experienced respondents.

Experienced participants primarily saw the benefit of the wiki as being a source of local information. Phrases like “local interest”, “understanding the culture in town”, and “media source by the people” highlighted a direct connection between the population and the information in the wiki as the primary benefit.


Table 6.3. Common Concepts for Expert Participants I. Columns: Purpose, Technical Features, Social Features; rows: participants E1–E5 plus a trends row. Trends: Purpose (Concepts: People, Community, Events, Activities; Actions: To Provide, Connect); Technical Features (Edit, Links, Specific Wiki Pages); Social Features (Explore, Appreciation (fun and cultural)). [Participant quotations not recoverable from extraction.]


Table 6.4. Common Concepts for Expert Participants II. Columns: Content, Problem, Benefit; rows: participants E1–E5 plus a trends row. Trends: Content (Events, Organizations, Places, Local Context); Problem (Missing/Outdated Information); Benefit (Local Information). [Participant quotations not recoverable from extraction.]


6.1.3 Design Participants Design participants (see Tables 6.5 and 6.6) offered far more detail and specificity about the nature of the platform in their responses than the other two groups, but they focused less on specific community examples than experienced participants. Design participants were more likely to discuss the collaborative potential and organizational issues of the wiki than specific community uses. They also discussed a wider range of technical features.

Design participants viewed the wiki as a means for community members to share and access what they know. Design participants always connected the purpose of the wiki to the people who created and shared the information. They also referred to the project as a community, either using the terms "community" or "people" or discussing how the information was "passed down from one person to another." One key evolution was that design participants clearly articulated that the purpose of recording and sharing was for the community itself.

Design participants highlighted a range of technical features. Each design participant highlighted editing or an editing feature (such as "add", "update", or "text formatting") in their responses. Three of the five design participants highlighted the importance of graphical images, a new inclusion versus both new and experienced participants. While design participants mentioned "links", only one mentioned "search", and navigation in general was less of a focus than in the other two groups.

Design participants focused heavily upon collaboration as the core social feature in the wiki. Four of the five mentioned collaboration explicitly, with focuses ranging from collaborative content to consensus-building to asynchronous participation. In addition to collaboration, four of the five design participants highlighted the importance of adding content to the wiki.

Design participants focused upon a range of topics when it came to content.

Three of the five focused on places to visit and two highlighted businesses specifically. Two design participants commented on the importance of accurate, factual information as well.

Design participants primarily focused on organizational problems, not problems with the wiki itself. Three design participants commented on the difficulty of organizing a reliable contributor base. Abuse of access and power was also a concern for two design participants.

Design participants described two key benefits of a local wiki. First, the information itself has value, either due to its populist source or due to its potential for further discovery. Second, the information coming from the people themselves was seen as a primary benefit.


Table 6.5. Common Concepts for Design Participants I. Columns: Purpose, Technical Features, Social Features; rows: participants D1–D5 plus a trends row. Trends: Purpose (Concepts: Information, Community; Actions: Add, Collaborate); Technical Features (Editing, Photos, Collaboration); Social Features (Active, Regular Collaboration). [Participant quotations not recoverable from extraction.]


Table 6.6. Common Concepts for Design Participants II. Columns: Content, Problem, Benefit; rows: participants D1–D5 plus a trends row. Trends: Content (Accurate, Useful Local Information); Problem (Organizing Involvement); Benefit (Empowered Information, Exploration). [Participant quotations not recoverable from extraction.]


6.1.4 Reviewing the Trends I want to take a moment to summarize some of the key trends seen within and across the three groups when considering purpose, social functionality, technical functionality, content, problems, and benefits. I will walk through the six categories and highlight the shared traits and differences seen in the participant groups described above.

For purpose, we see that the information in the wiki is centered by all three groups, but how they interact with that content varies. New participants see the information as a means to inform others and something to be searched; they only vaguely identified the nature of the local information, with no strong trends beyond local places. Experienced participants described the purpose as more closely related to the community members. While experienced participants discussed providing information like new participants did, experienced participants took the extra step to consistently connect that use back to local residents: the wiki exists to provide and make available information to locals. Design participants also connected the purpose to locals, but they centered contribution over reading.

When it came to technical features, new participants focused on navigation and search. Additionally, new participants consistently named moderators as a central feature. Experienced participants also discussed navigation, but they did so in terms of specific pages (Recent Changes, Things To Do, Fry Street) as well as links. Experienced participants also consistently discussed editing and did not discuss moderators. Design participants focused heavily upon editing and contribution, including text formatting, images, and adding links. Design participants did not discuss navigation as heavily and did not discuss moderation.

New participants primarily focused upon review of information as a social feature. Terms like fact-checking, peer review, and getting feedback dominated their responses. Experienced participants had a diverse set of responses that focused upon discovering new information about culture, community, and local interests. They also discussed the wiki as a source of connection and enjoyment. Design participants primarily focused upon functional collaboration, moving away from the implied "fun" of the experienced participants and with more of a focus upon contribution than new participants' concerns about information vetting.

When it came to content, new participants focused upon accuracy and local places as their primary points of interest. Experienced participants did not express the same level of concern about the accuracy of the information; they primarily focused upon the coverage of local events, places, and organizations as the most important content. Design participants focused heavily on the quality of information, but they focused on its utility and interest as well as its accuracy. Design participants largely avoided specific examples of content, stating it depended upon the location/community.

New participants were overwhelmingly focused on accuracy and malicious actions when it came to describing problems. Experienced participants were also concerned about accuracy, but their focus was more diverse than new participant concerns. Concerns about the amount of content, local focus, and out-of-date content were listed, as well as less frequent concerns about malicious actors. Design participants were focused primarily upon organizing an active participant base, though organizational conflict in the form of disagreements and abuse of "power" were mentioned as well. Problems saw one of the most divergent sets of responses overall, moving from malicious actors to current content issues to organizational issues.

New participants saw the benefit of the wiki as being a searchable repository of local information. In this way, benefit aligned quite closely with purpose. Experienced participants had a diverse set of responses when it came to benefit. Primarily they focused simply on the idea of local information, but ideas of culture, humor, and fun began to arise in the responses even if none of them trended throughout the answers. Design participants highlighted the uniqueness of the local information and the fact that it arises from community members. The connection between the community as provider and subject was far more pronounced among the design participants.

6.2 Reviewing the Tasks

In Chapter 5 the review of results separated each task into quantitative and qualitative elements. In this section, I triangulate my analysis of the data by combining those quantitative and qualitative results with the information from the interviews. In this way a layered perspective of each task arises based on how participants described anticipated uses of a wiki, how they quantitatively performed when pursuing those tasks, and how they qualitatively performed as represented by participant workflows.


6.2.1 Task One: Find the Hours for Open Mic Night at the Garage Task One offered some of the most surprising results of the study. New participants took nearly twice as long as the other groups to complete the task (Table 5.3); however, this was largely because only new participants completed the task successfully (Table 5.4). The reason for this failure rate was that the information for the task required following a link off-site from the wiki, and only new participants did not give up their search or accept incomplete information as a successful end to the task.

Importantly, in the post-task survey, all participant groups reported high confidence and satisfaction with the site, their ability to complete the task, and the ease of the task. Participants also scored it low for improvement in repeated tasks, expressing confidence in their mastery. Additionally, all but one new participant vocalized concerns about the information being missing from the page, while only one participant (D4) from the other ten did so. In essence, new participants demonstrated an increased ability to spot missing information and not accept the wiki as complete.

The workflow used to complete the task was uniform across participants (Table 5.8). The process of typing the name of the bar into the search bar and then scanning the page was used by all participants. New participants struggled a bit with clicking on the proper search result to find the page, but they all rectified the process on their own with a second attempt. Only one other participant (D4) committed a search error. Finally, only three new participants completed the entire workflow by following a link to the business' official website in order to find the information.

The results did reflect the new participants' interview concerns about inaccurate information. They trusted the site less, and that resulted in a more complete information workflow. Experienced participants, who expressed concerns about incomplete information, were not as successful. The same is true of design participants, though they did not highlight incomplete information as a core concern.

6.2.2 Task Two: Change the Date Found on The Garage Page Task Two offered an opportunity to test the ease of editing on the site. There was little difference in core performance between groups, though new participants slightly under-performed versus the other groups. Experienced and design participants took almost exactly the same time to complete the task (Table 5.3): 1.46 minutes for design participants and 1.45 minutes for experienced participants. New participants were slightly longer at an average of 2.16 minutes. Groups uniformly had no failures and had comparable maximum times between mouse inputs. New participants did click at substantially higher rates during this task.

Errors were common in this task, primarily because participants expected the save button to be at the top of the editing box and not at the bottom (Table 5.9). A number of design participants also did not save changes, though after the task they explained this error as a reluctance to make permanent changes in a live wiki. Workflows were largely consistent, with only one new participant accidentally creating a new page in error before following the standard path of clicking edit, making the edit, and saving. The post-task survey suggested high satisfaction with the results across groups, though experienced participants did state they felt the task would be somewhat easier (Figure 5.11) to complete in the future, compared to not at all easier for Task One. Design (not at all easier) and new participants (slightly easier) held largely steady in their responses.

All in all, the task demonstrated a capacity to edit a page across all groups at relatively similar efficiencies. New and experienced participants were more likely to feel they would improve in the future and also demonstrated less familiarity with the location of the save button. While experienced participants mentioned the importance of editing in the interviews, the task demonstrated a lack of experience with it, given the errors and a belief, more in line with new participants, that they would improve performance in the future.

6.2.3 Task Three: Add a Link from The Garage Page to any Other Page Task Three asked participants to connect two existing pages via hyperlink. The task built upon the editing in Task Two and addressed a core navigational requirement for the wiki: linked articles. The process proved difficult. Two new participants and one experienced participant failed to complete the task (Table 5.4). The three remaining new participants completed the process as a success with difficulty. In addition, two design participants also completed it as a success with difficulty. Task Three presented one of the two most technically challenging tasks for all participants, along with Task Five. It received the highest difficulty rating of all tasks (Figure 5.12), with new and experienced participants rating it just below average difficulty and design participants rating it just below slightly easy.

Only three participants completed the task without committing an error. Errors fell into two categories (Table 5.11): adding a link and interacting with the save menu.


Eight participants (two design, two experienced, four new) struggled to add the link because they could not identify the icon that represented adding a link. Five participants (two each from the experienced and new groups, one from the design group) backed out of the editing page after adding the link without clicking the save button.

The workflow for participants was chaotic, as most participants struggled to identify the process of editing, then highlighting a term, then clicking the link button. Central to the issue was that one had to enter the editing page before highlighting the word to link, but the text in the editing page and the regular page resembled one another.

Additionally, the link icon was unrecognizable at first to eight participants, as mentioned above. One design participant went so far as to edit the code source rather than use the editing interface.

Given the importance of hyperlinks to design and experienced participants in the interviews, the difficulties here were concerning. New participants also focused heavily upon navigation, but they framed search as a more important tool than links.

The difficulty of the process suggests that new participants’ decision to default to search over links might prove useful given the high learning curve of adding links between pages.

6.2.4 Task Four: Find the Businesses on The Square

Task Four repeated Task One except that it offered a solution within the wiki. The task involved finding the businesses on The Square, information available on The Square page, though participants could also have used the wiki's interactive city map. The core insight here was whether participants used search or navigation to find the answer (The Square page is linked from the landing page of the wiki) and whether the process was easier for participants than Task One.

For design (0.87 minutes) and new (0.64 minutes) participants, Task Four was the fastest task resolved of the entire set (Table 5.3). It also produced the lowest time-on-task across all participants (0.92 minutes) and the lowest standard deviation (0.67).

Experienced participants averaged higher on this task (1.25 minutes) due to one participant taking 2.99 minutes to complete the task. That participant committed an error after reaching The Square page by clicking on a shopping link and then recovered only via a close reading of both the Shopping page and The Square page.
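The pull a single slow participant exerts on a five-person group mean can be sketched with Python's `statistics` module. The individual times below are hypothetical; only the 1.25-minute group mean and the 2.99-minute outlier come from the study's Table 5.3.

```python
from statistics import mean, stdev

# Hypothetical per-participant times (minutes) for the five experienced
# participants on Task Four. Only the group mean (1.25) and the outlier
# (2.99) are reported in the study; the other values are illustrative.
experienced = [0.70, 0.80, 0.85, 0.91, 2.99]

with_outlier = mean(experienced)
without_outlier = mean(experienced[:-1])

print(f"mean with outlier:    {with_outlier:.2f} min")
print(f"mean without outlier: {without_outlier:.2f} min")
print(f"std dev with outlier: {stdev(experienced):.2f} min")
```

The sketch illustrates why a single recovery-from-error episode can raise a small group's average well above the other two groups' times.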

While no individuals failed to navigate to the page, one design participant attempted to use the interactive map to find the page before giving up and using the search function. In all, 14 of the 15 participants found The Square page using the search option, while one new participant found the link on the landing page. Using the link rather than the search resulted in that participant completing the task in 0.66 minutes versus the new participant average of 0.65.

Task Four was in fact the one task where new participants completed the task more quickly than the other two groups, though not significantly. Task Four also rated high on post-task surveys, with new participants rating the task as "very easy" versus the similar task in Task One, which they ranked "easy" (Figure 5.13).

Overall, Task Four demonstrated a strong preference for search as the means of initial navigation across all participants. Even design participants avoided using links or the map navigation in favor of a search. Participants also quickly located categorical information on the wiki page with far more success than when the information was off-site. In comparison to the interviews, the task once again challenged the focus on links as a navigational tool within this particular wiki. While three participants used links to explore, two were led astray by the process before correcting their error. This error path could point toward the possibility of serendipity in exploration, but in this case it was more a barrier to finding the desired content.

6.2.5 Task Five: Add a New Page for a Location to the Wiki (Can be a Fake Location)

In Task Five I requested that the participants add a location to the wiki. The participants were instructed that it could be a fake location and that I would remove the entry after they completed the task if they desired. The goal was to encourage the participants to express themselves without concern for vandalizing the wiki.

Not surprisingly, Task Five was the longest task for all groups, though that was as much a result of participants adding content as of committing errors. That said, design participants set the bar at 2.34 minutes (Table 5.3), experienced participants averaged 3.55 minutes, and new participants averaged 3.17 minutes. The standard deviation was large across the entire group at 3.02 minutes, as two participants (N2 and E2) took 10.21 and 9.71 minutes, respectively. By comparison, three new participants took less than a minute as they simply created the page without adding much content. While participants overall described the task as "easy", it was the second-lowest scoring task of the six in that category (5.16). Only Task Three's "add a link" task rated as harder.


Only one participant (D3) failed the task (Table 5.4), ending it by creating a link to an existing page when he discovered the business he wanted to create already existed. The participant technically completed the proper workflow for creating a new page, but since the name he linked from already existed, it created a link to the existing page (as explored in Task Two). Three participants had successes with difficulty. N2 and N5 both started and canceled the edit process multiple times before discovering a solution, though they came to two different solutions.

Two ways existed to add a page to the Denton LocalWiki. One way was to click on the "search or create new page" icon, which presented a field above the search to create a new page. Participants had been using this icon and had seen this screen multiple times by now due to Task One and Task Four. Searching for a page name that did not exist also presented participants with the "create page" option. All but two participants created new pages using one of these two forms of the search method. The other method was to create a link (as in Task Two) but highlight a name that does not exist in the wiki. Two participants attempted this method, N2 and D3. As explained above, D3 failed to create a link to a new page but did create a link to an existing business. N2, who had the longest time on task at over 10 minutes, was the only participant to successfully create a new page by editing an existing page, creating a new link, and following that link to create a new page. The longer exploration process led N2 into what design and experienced participants described as the more preferred solution: generating new content with links to existing content. Note that no experienced or design participants solved the task in this manner.


In addition to tracking the workflow on Task Five, I also noted what elements were added to the new page by participants (5.14). New participants added the least.

Only two new participants edited the body of the new page (N1 and N5). No other elements were edited by new participants. In the case of experienced participants, four edited the body, one added a table, and one added the page to the interactive map. For design participants, four edited the body, one left a comment on the new page, and two edited a table. Overall, additions were minor, though experienced and design participants edited more elements than new participants.

The task connected back to the interviews in a few key ways. The task demonstrated that adding a new page was substantially harder for participants than editing a page. The task took longer to solve than Task Two's editing request, and satisfaction was lower. Additionally, more participants failed at this task and more succeeded only after recovering from an initial workflow error. Given that contributing new content weighs so heavily in experienced and design participants' concerns about purpose and content, the difference between editing and creating new pages is significant. Additionally, that 13 of 14 successful attempts created isolated pages unlinked to other content suggests a high barrier to creating linked navigation within the wiki for most participants across experience levels (Figure 5.5).

6.2.6 Task Six: Address Any Concerns You Have with the Cranky Goose Page

Task Six offered the participants an opportunity to deal with malicious content. For the task, I created a temporary page for a fake business and included content resembling a negative review of the business. The goal of the task was to see how participants handled malicious content now that they had experience with the editing process from previous tasks. Overall, the task was most remarkable for reversing the trend from Task One. In Task Six, three new participants failed the task by adding a comment on the site requesting that someone else moderate the page. All experienced and design participants addressed the issue either by adding or removing content.

Quantitatively, the task demonstrated that design participants (2.23 minutes) and experienced participants (3.11 minutes) took substantially more time than new participants (1.81 minutes) on average (Table 5.3). Additionally, experienced participants showed a longer average time between inputs (Figure 5.7), as they took much more time to decide upon an action than new and design participants. This result helps demonstrate how decisive new participants were in turning to moderators to fix the problem.

The post-task survey also demonstrated interesting results. New participants, on average, rated this task as the most difficult to complete. They also dropped their evaluation from “satisfied with the site” to “dissatisfied” on average (Figure 5.16).

This marked the only task across all groups where the site was rated below average in satisfaction. New participants explained afterwards that the task raised concerns about whether such content would be addressed. Experienced and design participants, who made some form of change themselves, demonstrated no such drop in satisfaction.

Qualitatively, what stood out were the differences in types of editing across groups (Table 5.16). Three new participants only added a comment for moderation. One new participant flagged the content as biased with a page tag. Only one new participant added content to address the information on the page. By comparison, experienced and design participants engaged in a range of edits.

A majority of design and experienced participants deleted content from the entry (three design and three experienced), added content to the entry (four experienced and two design), and addressed issues in the page’s table (four design and four experienced). Additionally, three experienced and two design participants made grammatical corrections. All in all, the design and experienced participants were considerably more willing to engage with the page across a spectrum of editorial interactions.

As the workflows (Table 5.17) and time on task demonstrate, the new participants did not have any issues finding the page, nor entering the editing environment. In fact, it is important to note that four of the five new participants did click the edit button and enter the environment. What they did not do was make edits. They left comments and saved, considering the task complete after saving. The fifth participant added a tag from outside the editing interface that marked the page as biased.

Based on the interviews, new participants remained committed to their belief in moderators even after they were familiar with the editing functionality of the site.

That said, this faith in moderators appeared to come at a significant cost in satisfaction with the site per the post-task survey. Experienced and design participants, however, enacted the editing responsibility that they indicated as important in their interviews. This editing included not only addressing the malicious content, but also completing missing details (business hours and website), as well as grammar corrections. Again, similar to their interview responses, it seems that experienced and design participants saw the editing as an accuracy issue and felt empowered to address it given their skill set from previous tasks.

6.2.7 System Review

The SUS scores demonstrated a soft correlation across groups. No design participants rated the system under 80, two experienced participants did, and three new participants did, with one new participant offering a rating of 30. Thus, the satisfaction scores were quite high overall, with new participants being the only group with a majority under 80 (Table 5.5).
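For readers unfamiliar with how a single SUS number is produced, the standard scoring procedure can be sketched as follows. The ten responses shown are hypothetical, not data from this study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert
    responses, per the standard scoring rules: odd-numbered items
    contribute (response - 1), even-numbered items contribute
    (5 - response), and the summed total is scaled by 2.5 to 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # index 0 is item 1 (odd)
    return total * 2.5

# Hypothetical response set -- not a participant from this study.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Because each item contributes at most 4 points before scaling, a rating of 80 (the informal threshold used above) requires consistently favorable answers across all ten items.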

While SUS scores align somewhat with experience in the groups, they also align with task failure if Task One is excluded (where most participants believed they had succeeded even when failing). Excluding Task One, no design participant had more than one combined failure or success with difficulty (Table 5.4). Of the experienced participants, E2 (SUS 67.5) had two successes with difficulty and E3 (SUS 75) had one failure. The other three experienced participants completed all tasks successfully. New participants complicate the SUS picture.

Overall, the lower SUS scores among new participants follow the pattern above. N1 (SUS 77.5) had two task failures. N2 (SUS 30) had no task failures but three successes with difficulty. N3 (SUS 70) had two task failures. However, N4 (SUS 95) had one failure and one success with difficulty. N5 (SUS 87.5) had no failures but two successes with difficulty. Thus, the SUS scores for these two new participants break from the pattern for experienced and design participants.

As for task failures, the primary source was expectational failure, where the participants' expectations of the system led them to fail the task.

Task One had the most failures, with seven participants failing to complete the workflow to find the information off-site. Task Six had the second most with three failures, as new participants decided to rely upon non-existent moderators to edit the page. The task with the clearest functional issue was Task Three, with two failures and five successes with difficulty. The difficulty in adding links is a clear issue in a wiki that relies upon flow from one page to the next, and it also highlights part of the dependency on search for navigation. Additionally, creating a new page resulted in one failure and three successes with difficulty. It is worth noting again that participants felt it would be "easier" to add a page link on a second attempt, but only felt it would be "somewhat easier" to add a new page on a second attempt.

Finally, while a total of 11 tasks were completed as successes with difficulty, it is worth noting that system feedback was sufficient to prevent these from becoming failures. These bounce-back attempts, in combination with the optimism for improving results in Task Three, do highlight system functionality that participants felt was learnable, with the notable exception of adding new pages.

Quantitative trends across the tasks largely demonstrated more exploration by new participants. Max time between inputs did not produce much meaningful insight, though it did reveal some interestingly long pauses between inputs for experienced participants in Task Five and Task Six (Figure 5.7). It is possible this is an indication of indecision and/or reflection that differentiates experienced participants from the other two groups. I explore this further in Chapter 7.

6.3 Returning to the Research Question

Now that the interviews and tasks have been initially analyzed, it is worth returning to the central questions of this dissertation. The questions were:

1. In what ways do participants describe the platform environment and expectations for that environment?

2. In what ways do participants' actions align with their expectations for the platform and its environment? In what ways do they not align?

3. In what ways do expectations and alignment vary by the classes of participants?

Many answers arise from the data already presented, both in the interviews and in the performance of the tasks. For example, new participants alone expressed an expectation of a moderation class, and their performance in Task Six largely depended upon this belief. Similarly, while inaccurate information was a concern across all groups, only new participants were skeptical enough of wiki content to complete Task One. Experienced and design participants also stated that link navigation was important, yet all classes preferred search navigation during tasks and, when creating a page, all but two participants created a page without linking it to the rest of the wiki. These issues stand out, but it is difficult to align how they interact with one another. In essence, a more "ecological" way of seeing the system and its participants is needed.

The answer required another step to better illustrate a unified view of expectations versus performance between and across groups. To do this analysis, I return to the ANT conversation in the literature review. What I wanted to achieve was a mechanism of displaying the interview results and the performance results in a manner that could speak to genre expectations within the three groups. This process might then demonstrate how expectations were stable or variable across the groups.

In considering Miller's (1984) work on genre as social action, interviewing the three sets of participants helped highlight the different activities each perceived as occurring within the wiki. This would build upon Bakhtinian (1986) notions of dialogic negotiation between the parties involved. But ANT's desire to flatten the role of actors (Latour, 2005) meant including a fourth perspective: that of the system itself.

Thus, I included a map of how agents emerged when the participant groups directly interacted with the system. Additionally, I looked at how key "instability agents" drove participant behavior. Specifically, I looked at agents that were identified by only one of the three groups but managed to have a significant impact upon task completion. These instability agents could then be seen as important to the system even if they appeared to be a concern of only one of the three groups.

6.3.1 New Participant Maps

In the case of new participants, we can identify a core set of actors from the interviews by including those elements named by at least three respondents. The actors have been categorized into three types: peg figures for social features, tools for technical features, and books for content/knowledge elements. These elements can even be used to express the purposes as described by readers, who primarily see the wiki as a means to inform readers, via searches, about accurate local information, particularly places. It can express the problems: inaccurate information or malicious actors that need to be addressed by moderators. Finally, it can express the benefits: accurate information accessible to people/readers.

[Network map omitted. Nodes: Readers, Reviewers, Moderators, Malicious Actors; Search Function, Navigation Options; Articles, Accurate Information, Inaccurate Information, Local Places; centered on the Wiki for the New.]

Figure 6.1. How New Participants Expected the Wiki to Look.


From this diagram, another diagram can be generated that maps the environment as performed by new participants in the six tasks. First, this adds editors as social functionality (Tasks Two, Three, and Five) and adds editing pages and comments as technical functionality. Additionally, given the success of Task One for new participants, it adds the outside content of the web. Finally, navigation can become link navigation, as new participants regularly followed links from page to page during their workflows.

[Network map omitted. Nodes: Readers, Editors, Moderators, Malicious Actors; Outside Content; Search Function, Comments, Links, Editing Interface; Articles, Accurate Information, Inaccurate Information, Local Places; centered on the Wiki for the New.]

Figure 6.2. How New Participants Experienced the Wiki.


The diagrams demonstrate some key pieces of information about new participants as they engage with the wiki: primarily, that they garner awareness of technical functionality within the wiki fairly quickly, but they let go of assumed social functionality with great hesitancy. In fact, new participants consistently gained technical proficiency and awareness in the study without increasing their sense of social responsibility to take on the role of moderator or to see editor and moderator as the same social role.

6.3.2 Experienced Participant Maps

The same diagram generated for experienced participants demonstrates the increased range in specificity observed before. However, what might be surprising is that experienced participants tended to become more specific in content than in functionality. Experienced participants also acknowledged community members as a social function in the interviews. The map captures the main purpose of connecting the community to events and providing the community local information.


[Network map omitted. Nodes: Readers, Community Members, Editors; Edit Interface, Links; Events, Local Organizations, Articles, Local Places, Local Information, Missing Information; centered on the Wiki for the Experienced.]

Figure 6.3. How Experienced Participants Perceived the Wiki.

Unlike with new participants, the differences between the perceived wiki and the experienced wiki are minor for experienced participants. Only the use of tables in their editing tasks presented any significant addition for experienced participants. While experienced participants struggled to add links when it came to creating pages, they utilized them regularly in navigation, though they preferred search to initiate navigation.


[Network map omitted. Nodes: Readers, Community Members, Editors; Edit Interface, Links, Search, Tables; Events, Local Organizations, Articles, Local Places, Local Information, Missing Information; centered on the Wiki for the Experienced.]

Figure 6.4. How Experienced Participants Experienced the Wiki.

6.3.3 Design Participant Maps

Design participants offered a fairly simple and high-level description of the wiki within their interviews. While the answers were lengthy, the elements of the wiki they addressed were more conceptual than those of the other groups. In fact, design participants referred to only two key technical elements, photos and the editing interface, while largely ignoring navigational issues like search and links. They also highlighted the importance of organizers and creators in a way that was distinct from the other two groups.

[Network map omitted. Nodes: Organizers, Creators, Collaborators; Editing Interface; Accurate Information, Organizations, Events, Community Information, History, Places; centered on the Wiki for Designers.]

Figure 6.5. How Design Participants Perceived the Wiki.

Design participants experienced the wiki in a fashion more similar to experienced participants. They utilized search, links, comments, and tables much like the other two groups. In fact, design participants utilized technical functions in a wider variety than either of the other two groups even though they were less likely to name technical functionality specifically when interviewed about the wiki, preferring to frame explanations in more social and organizational terms.

[Network map omitted. Nodes: Creators, Organizers, Collaborators; Search Function, Comments, Links, Editing Interface, Tables; Organizations, Community Information, History, Events, Places, Accurate Information; centered on the Wiki for Designers.]

Figure 6.6. How Design Participants Experienced the Wiki.

6.3.4 Overall Participant Map

By taking those actors that appeared in two or more versions of the experienced wiki, a map of the overall experience of the wiki can be generated. In doing so, a robust image of the wiki's characteristics forms, with social functions of reader, editor, and community member. The technical functions become comments, search, links, tables, and the editing interface. Six types of content also inform the wiki: inaccurate information, accurate information, local places, local events, local organizations, and community information.

[Network map omitted. Nodes: Readers, Community Members, Editors; Search Function, Comments, Links, Editing Interface, Tables; Local Organizations, Community Information, Accurate Information, Inaccurate Information, Local Places, Local Events; centered on the Wiki for Participants.]

Figure 6.7. How Participants Experienced the Wiki.

Additionally, some of the elements that appeared in interviews for only one group can be mapped into a set of "instability agents". This includes issues like malicious content, malicious actors, creators, organizers, and moderators. The value of highlighting these categories is that they also align well with key system inconsistencies, both positive and negative. Task Six was driven largely by the instability agents of the new participants, but the instability agent of outside content also aided new participants in Task One. At the same time, all participants struggled in the creator role of generating a new page and in linking pages to one another. These functional tasks suggest the need for instability agents like organizers and creators, but the system itself is a key deterrent based upon these findings.
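The selection rule behind Figures 6.7 and 6.8 — actors named in two or more group maps join the overall map, while actors named by only one group become instability agents — can be sketched as a simple set computation. The actor lists below are abbreviated and illustrative; the full sets appear in the figures themselves.

```python
# Abbreviated, illustrative actor sets for each group's experienced wiki;
# the full sets appear in Figures 6.2, 6.4, and 6.6.
maps = {
    "new": {"readers", "editors", "links", "search", "moderators",
            "malicious actors", "outside content"},
    "experienced": {"readers", "editors", "links", "search", "tables",
                    "community members"},
    "design": {"creators", "organizers", "links", "search", "tables",
               "comments", "editors"},
}

def group_count(actor):
    """Count how many of the three group maps name this actor."""
    return sum(actor in actors for actors in maps.values())

all_actors = set().union(*maps.values())
overall = {a for a in all_actors if group_count(a) >= 2}      # Figure 6.7 rule
instability = {a for a in all_actors if group_count(a) == 1}  # Figure 6.8 rule

print(sorted(overall))
print(sorted(instability))
```

The two rules partition the single-group actors away from the shared core, which is why the instability agents in Figure 6.8 never overlap the overall map of Figure 6.7.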

[Network map omitted. Nodes: Moderators, Creators, Organizers, Outside Content, Malicious Content, Malicious Actors; centered on the Ghosts in the Wiki.]

Figure 6.8. Instability agents in the wiki.


In Chapter 7, I will walk through the three central questions of this dissertation using these maps to explain how this analysis answered those questions and to what extent. I will also explain how those answers inform questions about the role of genre in community wikis and how these results can help answer key questions in the field around online governance and system moderation.


CHAPTER 7

THE PREDICTABLE INSTABILITY OF GENRE

In this chapter, I discuss the findings in the context of the three questions that opened this dissertation:

1. In what ways do participants describe the platform environment and expectations for that environment?

2. In what ways do participants' actions align with their expectations for the platform and its environment? In what ways do they not align?

3. In what ways do expectations and alignment vary by the classes of participants?

In doing so, I will address how the findings might inform the central issues raised in the literature review, particularly the role of genre in considering wikis, the issues around platform governance in our current digital media environment, and the value of combining direct usability results with ANT-style mapping to explore the manner in which genre shifts based upon the different social contexts of the groups within a platform.


7.1 In what ways do participants describe the platform environment and expectations for that environment?

A considerable amount of the work in this study is definitional, both in defining the perceptions of different classes of participants and in defining how platforms interact to shape those perceptions when it comes to performance. Relying upon King & Horrocks' (2010) view that interviews offer a form of empowerment for communities to define the objects of their experience, I sought to see how different communities constructed the purpose, audience, and functionality of the LocalWiki as a genre.

Still, it was important to keep Foddy's (1993) concerns about scope in mind. A research interview must have some concept of what it is seeking to understand and know. As the goal was to assist in constructing a view of the LocalWiki as a genre, the questions focused upon elements that informed modern genre theory. This included Miller's (1984) observation that genre exists as communicative action, a response to high-level rhetorical rules that respond to specific social contexts. As with others since Miller, this view of genre was shaped by Schryer's (1993) contribution that genres are stable for a moment in time but also evolving in reaction to social context and expectations. Schryer's view connects with the field's reconnection with Bakhtin, who considered speech genres to be utterances that exist in anticipation of other utterances, with this anticipation generating the social context and rules that inform the dialogic interaction and meaning-making of the utterances. In fact, one of the challenges for considering digital platforms as genre is how much stability is possible and what stability means in such a disruptive media era. The question of who forms the cultural rules that set genre within a social context played a crucial role in the design of the study, opening the establishment of rules to new, experienced, and design participants, as well as to the platform itself. In short, how stable were the rules informing the utterances that informed activity within the LocalWiki?

However, genres as single utterances had limitations. Bazerman (1994) highlighted these limitations by exploring the interconnected nature of legal genres to show how genres tend to work within systems of genres. Indeed, a genre might anticipate a reaction, but that reaction may well be of a different genre: a meeting might lead to the exchange of several emails and conversations that could lead to a memo requesting a formal report. The report itself might be constructed from an assortment of utterances of varied genres. Spinuzzi and Zachry (2000) would place related concerns in a framework of genre ecologies by looking at software documentation. Building upon the terminology of software and networks, they would explore the need to recognize decentralized and unofficial genres as capable of forming stable genre ecologies—networks of genres contingent upon a need to respond. This path would lead to a variety of studies looking at networks and ecologies of documentation and technical work (Sherlock, 2009; Hart-Davidson et al., 2009; Potts, 2009; Mason, 2013). Keeping this history in mind, we can revisit the interviews to answer this question by asking: what are the points in the network that participants use to define LocalWiki, and how are they contingent upon one another?

New participants collectively defined the LocalWiki as a construct (articles, navigation, search) of others (moderators, reviewers, malicious actors) that allowed a participant to search for accurate/inaccurate information (largely places). Centralized activities for new participants revolved explicitly around search and informing, with implicit tendencies that echoed lurker behaviors: they would read and consume material, but they would not contribute. These expectations differed from those of experienced participants, who defined the wiki as an editable platform (specific pages, links, editing interface) of a community (readers, editors, community members) that allowed a community member to connect and provide local information about events, activities, and organizations—though experienced participants expressed concerns about missing information.

Between these two communities we see that providing information remains the central purpose for the LocalWiki, though experienced participants expand that purpose to include the editing of information. Additionally, the threat of misinformation exists within both groups, though with significantly different sources.

New participants react to the existence of malicious actors and the guardianship of moderators, while experienced participants concern themselves more with missing information. This difference in perception of content and fellow actors suggests that the two classes conceive of divergent social contexts even if the exigence of the LocalWiki is shared.

Finally, design participants perceived the LocalWiki much less as an editing platform (editing interface, pictures) than as an organized community project (organizers, creators, collaborators) that allowed the community to collaborate and explore accurate information about history, events, places, and organizations. For design participants, the central problems were participation and organization. The design participants’ definitions operated at a different level than those of new or experienced participants, moving from a focus upon the information to a focus upon the organization of the platform’s community. The most surprising expression of this difference likely manifested in how design participants focused far less upon the technical functionality of the platform in comparison to new and experienced participants. In effect, design participants took a decidedly more social view of the LocalWiki versus the localized instance view of new and experienced participants. This disconnect speaks to why ANT maps helped bridge the divide.

7.2 In what ways do participants’ actions align with their expectations for the platform and its environment? In what ways do they not align?

Sadly, the LocalWiki platform could not be interviewed for this study. To understand how the platform informed the utterances of each constituency, the study required a mechanism for gauging the interaction between participant expectations of the LocalWiki and the platform’s functionality. Fortunately, Usability Studies provided ample direction for this investigation. By running a collection of new, experienced, and design participants through six tasks, the dialogic interaction of each group could be evaluated in conversation with the platform itself.

Beyond the rhetorical theory framing this process, the approach offered an opportunity to examine how participants “hack” a system to fit their purposes. Much like activists on social media, each group had the opportunity to bend the interface and functionality of the system to their own ends.


The most stable conversation existed with experienced participants.

Experienced participants operated within the wiki essentially in the same way that they defined the wiki within interviews. They utilized search more often than they articulated in interviews, and they struggled somewhat with the missing information they had expressed concerns about. Experienced participants, however, provided the strongest argument for a stable genre within the LocalWiki as a whole. They were able to articulate the expectations and conventions of the wiki while delivering upon those expectations throughout most tasks.

New participants diverged the most between interview and experience. The definition of the LocalWiki by new participants was simple and fairly limited across elements of functionality, content, and people. Yet, in actively experiencing the wiki, new participants created a far more complex network. The complexity was not limited to wiki functionality, either. While new participants added functional knowledge about comments, links, and the editing interface, they also expanded their content map in unique ways. The inclusion of outside web content was a quality unique to new participants.

Additionally, new participants maintained the moderator within their experienced network by incorporating commenting functionality as a means to enact their belief in a moderator within the wiki. The persistence of the belief in a moderator offers one of the starkest examples of how the different groups experienced significantly different ecologies within the wiki.

Design participants also expanded their network map significantly from interview to experience; however, this growth differed in key ways from the growth of new participants. Design participants routinely excelled at tasks, except for Task 1.

They essentially mirrored experienced participants in overall performance, though they did perform slightly better (not significantly better, however). The difference between interview and experience was one of the social versus the local. The interviews focused upon organizational issues, but the tasks forced design participants to engage with the basic technical elements of the wiki (editing interface, links, search, comments, etc.) that constituted much of what both new and experienced participants defined as the wiki within their interviews. This shift in focus for design participants highlights a genuine concern about the way in which the designers focused on mission and social experience versus the closer, localized experience of those who use the system. One issue here that we see broadly in social platforms is that participants often differ in their goals from the designers of a platform, whether activists, governments, or malicious actors. By focusing on the social level, designers might miss the mechanisms other participants use to achieve a variety of purposes.

7.3 In what ways do expectations and alignment vary by the classes of participants?

The most important takeaways about how expectations and alignment varied (and were stable) between classes were:

• The stability of experienced participants’ expectations and performance

• The stability of performance between experienced and design participants in tasks

• The commitment new participants showed in believing in moderators

• The increase of technical agents in design participants’ experience

• The struggles of experienced and design participants with incomplete information

The stability in performance and expectation for experienced participants certainly suggests that sufficient conventions exist, for those familiar with those conventions, to see the LocalWiki as a stable genre. Experienced participants could articulate expectations and meet those expectations in a way that demonstrated that navigation, editing, and consumption of information each offered an opportunity for a communicative act with methods to anticipate and fulfill a reaction. Certainly, the robustness of types of communicative acts within the LocalWiki speaks more to an ecology of genres (search, links, editing interface, article content) than a single genre (wiki), but each of those genres was sufficiently stable for experienced participants to anticipate and perform. The issue, of course, is that the genre ecology was only stable for experienced participants.

That said, stability of performance existed between experienced and design participants. Not only did experienced and design participants excel in the same performative aspects, they struggled similarly as well, failing to seek content outside the wiki in Task 1 and frequently limiting navigation to search over following links or using the map to navigate. Thus, while design participants anticipated the wiki at a social level, this variance in view still resulted in design participants responding to the same LocalWiki as experienced participants. It also highlights the limitations of Usability Studies in effectively verifying the social level in task analysis. While social actors like reviewers, editors, moderators, and malicious actors can be built into the tasks, outside organization requires a more ethnographic approach.

In regard to social actors, the persistence of new participants in enacting moderators via action and inaction offered interesting insights. One way in which experienced participants varied from design participants was in the use of comments, which no experienced participant used but two design participants did. Design participants used the comments to justify changes made in the wiki. New participants also used the comment functionality, but they did not use it to enact justifications or explanations; rather, they used the comment to signal the need for moderator action.

This offered the clearest example of a single functionality operating as two different genres based upon the class of participant. This distinction also existed within the editing interface functionality. Experienced and design participants saw the editing interface as a medium for composing a correction to malicious content. New participants, however, felt that the comment field and the moderator created a contingency for the comment as a request to the moderator. It is worth noting that the fact that moderators do not exist as a reality within the system does not negate the rise of a stable genre for new participants, based upon the ecology they assembled for themselves.


The shift in system views for design participants is worth revisiting briefly.

Again, the value Latour assigns to ANT is the ability to shift between the local and the social, which design participants performed between their interviews and performative results. The explosion of technical features in the maps of design participants reflects this shift in clear terms. In a similar manner, it also highlights how localized the views of new and experienced participants tend to be when focused on the LocalWiki. They acknowledge community and types of community content, but they do not acknowledge the social making process outside the LocalWiki in the way that design participants did in interviews; governance and organization were not a focus for classes beyond design participants. One question that this issue raises is how to validate the social concerns of the design participants. Additionally, it raises questions about the extent to which design participants can anticipate the ways in which new and experienced participants embed the social within functionality: consider, for example, how new participants created a request for moderation via the contingency of faux moderators and comment fields.

Another example of unexpected contingency is the manner in which both experienced and design participants failed to seek missing information outside the LocalWiki, thus not seeing external web pages as a possible response to a sequence of utterances within the wiki. In this admittedly limited test case, the wiki was treated as a closed system when seeking information. When creating a page, perhaps the greater web would open, but the strong tendency to shut out the rest of the web even when links to outside websites are available (such as in Task 1) raises significant questions about how the boundary of ecologies informs the behavior of participants.

7.4 Shifting Genre in the LocalWiki

As revisited in section 7.1, Technical Communication theory of genre recognizes that genre is responsive to or enacted by social context, recognizable in some stable fashion, and exists in connection to other related genres, whether as a direct network or an ecology of contingencies. Yet, as discussed in Chapter 2, much has been said about the wiki as a genre that is also worth revisiting. Wikis, as a genre, have been defined by who has access to contribute to them (Poole & Grudin, 2010; Mader, 2009; Cummings, 2009), the discrete service of reading and editing they offer (Ferro & Zachry, 2014), and the civic discourses they support (Barton & Heiman, 2012). What unites these definitions is the tendency to highlight the collaboration that occurs within wikis as something sufficiently discrete to generate a set of anticipable conventions that suggest a genre based upon how it is read and edited.

Indeed, the interviews demonstrated that all groups broadly anticipated the concepts of reading and review of content. Yet, the study also highlights that significant divides exist around how reading and review manifest within wikis. The differences in how the wiki ecology functions and how participants experience the activity of the wiki suggest that the genre definitions as they exist are too broad (too global) at times. We know that some participants only experience wikis as readers and lurkers, others may edit but not create, and others still may create but not participate in governance. The acts of reading, editing, content creation, and wiki organization all exist both contingent upon and independent of one another, depending upon how one both anticipates the wiki and performs within the wiki. In particular, the social, technical, and content actors a participant anticipates and engages with significantly shape these genre-defining conditions of purpose, content, and exigency.

The LocalWiki in this study is reasonably stable as it relates to experienced and design participants at the performative level, though questions remain about its stability at the organizational level. Given design participants’ focus on the role of organization, it is worth noting again that this LocalWiki was orphaned after its two founders left, leaving it without a visible, directed organizational layer. The design participants imagined an organizational layer within the community, somewhat similar to the moderators of the new participants, by leaving comments that justify particular changes within the wiki. If anything, the imagined organizational layer of design participants helps justify the “realness” of the faux moderators created by new participants for the same purpose. Each envisions an audience for a necessary genre within the wiki, a response to organization—whether resolving conflict or verifying appropriate changes.

The LocalWiki ecology then offers a number of genre responses across all participant types in reading, editing, link navigation, search navigation, conflict resolution, and content creation. At a high level, the exigence for each of these discrete and contingent acts generates a stable wiki genre ecology. However, each participant group sees the ways and means to manifest these activities differently, and the exact ecology each class operates within varies in important ways, creating a discrete genre ecology specific to each class.

7.5 Governance and the LocalWiki

When Zittrain (2008) discussed netizenship as a phenomenon on Wikipedia, it is worth noting that he was discussing a substantially different phenomenon depending upon who was engaged in the process. The remarkable governance he described on Wikipedia existed, but how a reader, new editor, experienced editor, or board member experienced that governance was anything but universal. This matters due to Coleman and Blumler’s (2009) explanation that democratic communication requires accountability and transparency to all stakeholders. The governance of digital platforms must acknowledge the systems of agents that different classes use to manifest the wiki genre. This acknowledgement might mean attempts to remove agents and alternate contingencies, but it must do so in a way that understands that the purposes and contexts that drive these networks are real to the participants who engage with them. That moderators do not exist in the LocalWiki is no excuse for the platform and its designers merely to “correct” new participant behavior; they must also be accountable to the reasons new participants include moderators in their perceived and performative networks within the wiki.


7.6 Usability Studies and ANT as a Way to Investigate Platform Governance

The utility of combining task-based Usability Studies and ANT as I have done in this dissertation is that it allows for a specific comparison of perspectives across participants while also decentralizing the authority of designers and expertise. It is vital to state that flattening the authority of expertise by no means resulted in ignoring expertise. As the results and analysis indicated, expert participants performed exceptionally well in describing the system as it existed and performing within the confines of that description. However, the study also allowed for a review of new and design participants in a manner less commonly explored. By allowing new participants to define the system in advance of their performance, the study generated insights into why new participants acted in particular fashions and how their interaction with the LocalWiki manifested the genres they perceived and enacted. Additionally, by asking design participants to perform within the system, the study exposed some disconnects between designer intent and practical performance.

The limitations of the study also warrant discussion. By looking at a single platform, the specific perceptions are not generalizable, but the manner in which perceptions and performance manifested, and how they differed across classes, is informative to other spaces, especially those spaces that Ferro and Zachry refer to as service genres. Participants regularly utilize digital services for purposes well beyond the designed intent. This study offers a means to explore how the perceptions and performance of different classes of participants can illuminate the extent of divergence, even to the point of participants existing within different genre ecologies than those perceived by the designers. That said, much more study needs to be done about how this divergence manifests in other wikis and other digital service genres. Again, some of the limitations of this study will influence that expansion. Interviewing is resource intensive, and many of the most compelling divergent classes online (trolls, activists, political operatives, and criminals) are difficult to interview. Even more difficult is gaining access in order to request that a sufficient number of these groups agree to perform even remote tasks for study. Still, the possibility of continuing this method with more accessible groups offers potential. It is also worth recognizing that the genre and workplace theory of Technical Communication offers vast potential in understanding non-traditional knowledge work and how it manifests in divergence from the intention of system designers. Tracing and acknowledging this divergence could potentially help system designers build libraries of unanticipated use cases and edge cases that acknowledge the contingent genres that creative participants might generate.

The better system designers are at anticipating and addressing these use cases, the fewer unintended side effects might manifest from digital platforms.


REFERENCES

Agboka, G. Y. (2014). Decolonial methodologies: Social justice perspectives in intercultural Technical Communication research. Journal of Technical Writing and Communication, 44(3), 297–327.

Andersen, R. (2014). Rhetorical work in the age of content management: Implications for the field of technical communication. Journal of Business and Technical Communication, 28(2), 115-157.

Andreasen, M. S., Nielsen, H. V., Schrøder, S. O., & Stage, J. (2007, April). What happened to remote usability testing?: An empirical study of three methods. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 1405–1414). ACM.

Bahadur, N. (2015, March 13). This is how trolls treat women on the Internet. The Huffington Post. Retrieved from https://www.huffpost.com/entry/being-a-woman-online-really-sucks_n_7265418?guccounter=1

Bakhtin, M. M. (1986). Speech genres and other late essays, trans. Caryl Emerson and Michael Holquist. Austin, TX: University of Texas Press.

Balderas, N. (2011, November 1). LocalWiki project takes off in Denton. North Texas Daily. Retrieved from https://www.ntdaily.com/local-wiki-project-takes-off-in-denton

Ball, C. E. (2012). Assessing scholarly multimedia: A rhetorical genre studies approach. Technical Communication Quarterly, 21(1), 61–77.

Balzhiser, D., Polk, J. D., Grover, M., Lauer, E., McNeely, S., & Zmikly, J. (2011). The Facebook papers. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 16(1). Retrieved December 18, 2011 from http://www.technorhetoric.net/16.1/praxis/balzhiser-et-al

Bangor, A., Kortum, P., & Miller, J. (2009). Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4(3), 114–123.

Barnum, C. (2002). The ‘magic number 5’: Is it enough for web-testing? Information Design Journal, 11(2), 160–170.

Barnum, C. M., & Palmer, L. A. (2011). Tapping into Desirability in User Experience. In M. Albers & B. Still (Eds.), Usability of complex information systems: Evaluation of user interaction (pp. 253-280). Boca Raton, FL: CRC Press.


Barton, M., & Cummings, R. E. (2008). Wiki writing: Collaborative learning in the college classroom. Ann Arbor, MI: University of Michigan Press.

Barton, M. D., & Heiman, J. R. (2012). Process, product, and potential: The archaeological assessment of collaborative, wiki-based student projects in the technical communication classroom. Technical Communication Quarterly, 21(1), 46–60.

Bazerman, C. (1994). Systems of genres and the enactment of social intentions. In A Freedman & P. Medway (eds), Genre and the new rhetoric (pp. 79–101). Bristol, PA: Taylor & Francis.

Berkman, M. I., & Karahoca, D. (2016). Re-assessing the usability metric for user experience (UMUX) scale. Journal of Usability Studies, 11(3), 89–109.

Blok, A. (2010). Topologies of climate change: actor-network theory, relational-scalar analytics, and carbon-market overflows. Environment and Planning D: Society and Space, 28(5), pp. 896–912.

Brooke, J. (1996). SUS–A quick and dirty usability scale. Usability evaluation in industry, 189(194), 4–7.

Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101–108.

Castells, M. (2010). Communication power. Oxford, UK: Oxford University Press.

Chaurasia, A. (2017, January 17). Leonardo DiCaprio’s Wikipedia page defaced by a fan. Times of India. Retrieved from https://timesofindia.indiatimes.com/entertainment/english/hollywood/news/Leonardo-DiCaprios-Wikipedia-page-defaced-by-a-fan/articleshow/51189482.cms

Chess, S., & Shaw, A. (2015). A conspiracy of fishes, or, how we learned to stop worrying about #GamerGate and embrace hegemonic masculinity. Journal of Broadcasting & Electronic Media, 59(1), 208–220.

Chess, S., & Shaw, A. (2016). We are all fishes now: DiGRA, feminism, and GamerGate. Transactions of the Digital Games Research Association, 2(2).

Clark, T., & Stewart, J. (2010). Using document design to create and maintain wikis. Business Communication Quarterly, 73(4), 453–456.

Coleman, S., & Blumler, J. G. (2009). The Internet and democratic citizenship: Theory, practice and policy. Cambridge, UK: Cambridge University Press.


Coles, B. A., & West, M. (2016). Trolling the trolls: Online forum users’ constructions of the nature and properties of trolling. Computers in Human Behavior, 60, 233–244.

Collins, L., & Nerlich, B. (2015). Examining user comments for deliberative democracy: A corpus–driven analysis of the climate change debate online. Environmental Communication, 9(2), 189–207.

Cooke, L. (2010). Assessing concurrent think–aloud protocol as a usability test method: A Technical Communication approach. IEEE Transactions on Professional Communication, 53(3), 202–215.

Couts, A. (2015, July 3). The great Reddit meltdown has begun. The Daily Dot. Retrieved from https://www.dailydot.com/news/reddit-revolt-blackout-2015-ama-victoria

Decarie, C. (2012). Dead or alive: Information literacy and dead (?) celebrities. Business Communication Quarterly, 75(2), 166-172.

Dumas, J. S., & Redish, J. (1999). A practical guide to usability testing. Portland, OR: Intellect Books.

Edwards, D. W., & Gelms, B. (2018). Special issue on the rhetoric of platforms. Present Tense, 6(3). Retrieved from http://www.presenttensejournal.org/editorial/vol-6-3-special-issue-on-the-rhetoric-of-platforms/

Eraslan, S., Yesilada, Y., & Harper, S. (2016, March). Eye tracking scanpath analysis on web pages: how many users? In Proceedings of the ninth biennial ACM symposium on eye tracking research & applications (pp. 103–110). ACM.

Faris, R., Roberts, H., Etling, B., Bourassa, N., Zuckerman, E., & Benkler, Y. (2017). Partisanship, propaganda, and disinformation: Online media and the 2016 US presidential election. Retrieved from https://dash.harvard.edu/bitstream/handle/1/33759251/2017-08_electionReport_0.pdf

Faulkner, L. (2003). Beyond the five–user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379–383.

Ferro, T., & Zachry, M. (2014). Technical Communication unbound: Knowledge work, social media, and emergent communicative practices. Technical Communication Quarterly, 23(1), 6–21.


Friesen, E. L. (2017, September). Measuring AT usability with the modified system usability scale (SUS). In AAATE Conference (pp. 137–143).

Frith, J. (2014). Social network analysis and professional practice: Exploring new methods for researching Technical Communication. Technical Communication Quarterly, 23(4), 288–302.

Foddy, W. (1993). Constructing questions for interviews. Cambridge, UK: Cambridge University Press.

Gillespie, T. (2010). The politics of ‘platforms.’ New Media & Society, 12(3), 347– 364.

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.

Haak, M. J., & Jong, M. D. T. (2003). Exploring two methods of usability testing: Concurrent versus retrospective think-aloud protocols. IPCC 2003 The Shape of Knowledge Proceedings: 2003 IEEE International Professional Communication Conference, 285–287.

Haas, A. M., & Eble, M. F. (Eds.). (2018). Key theoretical frameworks: Teaching Technical Communication in the twenty–first century. Logan, UT: Utah State University Press.

Hackos, J. T., & Redish, J. (1998). User and task analysis for interface design. New York, NY: Wiley.

Hart-Davidson, W., Bernhardt, G., McLeod, M., Rife, M., & Grabill, J. T. (2007). Coming to content management: Inventing infrastructure for organizational knowledge work. Technical Communication Quarterly, 17(1), 10–34.

Jadin, T., Gnambs, T., & Batinic, B. (2013). Personality traits and knowledge sharing in online communities. Computers in Human Behavior, 29(1), 210-216.

Jenkins, H., & Thorburn, D. (2004). Democracy and new media. Cambridge, MA: MIT Press.

Jhaver, S., Chan, L., & Bruckman, A. (2017). The view from the other side: The border between controversial speech and harassment on Kotaku in Action. arXiv preprint arXiv:1712.05851.

Jones, J. (2009). Patterns of revision in online writing: A study of Wikipedia’s featured articles. Written Communication, 25(2), 262–289.


Jones, J. (2014). Switching in Twitter’s hashtagged exchanges. Journal of Business and Technical Communication, 28(1), 83–108.

Jones, N. N., & Walton, R. (2017). Using narratives to foster critical thinking about diversity and social justice. Key Theoretical Frameworks: Teaching Technical Communication in the Twenty–First Century, pp. 241–267. Louisville, CO: University of Colorado Press.

Kankanhalli, A., Tan, B. C., & Wei, K. K. (2005). Contributing knowledge to electronic knowledge repositories: An empirical investigation. MIS quarterly, 29(1).

Kaya, A., Ozturk, R., & Gumussoy, C. A. (2019). Usability measurement of mobile applications with system usability scale (SUS). In Industrial Engineering in the Big Data Era (pp. 389–400). Springer International Publishing: Switzerland.

King, N., & Horrocks, C. (2010). Interviews in qualitative research. London, UK: Sage.

Kortum, P. T., & Bangor, A. (2013). Usability ratings for everyday products measured with the System Usability Scale. International Journal of Human–Computer Interaction, 29(2), 67–76.

Kushner, D. (2015, March 13). 4chan’s overlord Christopher Poole reveals why he walked away. Rolling Stone. Retrieved from https://www.rollingstone.com/culture/culture-features/4chans-overlord-christopher-poole-reveals-why-he-walked-away-93894

Lam, S. T. K., Uduwage, A., Dong, Z., Sen, S., Musicant, D. R., Terveen, L., & Riedl, J. (2011, October). WP: clubhouse? An exploration of Wikipedia’s gender imbalance. In Proceedings of the 7th international symposium on Wikis and open collaboration (pp. 1–10). ACM: New York, NY.

Lanham, R. (2006). The economics of attention. Chicago, IL: The University of Chicago Press.

Latour, B. (2005). Reassembling the social: An introduction to Actor-Network-Theory. Oxford, UK: Oxford University Press.

Larusson, J. A., & Alterman, R. (2009). Wikis to support the “collaborative” part of collaborative learning. International Journal of Computer–Supported Collaborative Learning, 4(4), 371–402.


Law, J., & Hassard, J. (1999). Actor network theory and after. Oxford, UK: Blackwell Publishers.

LocalWiki. (n.d.). Dashboard. Retrieved from https://localwiki.org/_tools/dashboard/davis

Mason, J. (2013). Video games as technical communication ecology. Technical Communication Quarterly, 22(3), 219-236.

McDaniel, R., & Daer, A. (2016). Developer discourse: Exploring technical communication practices within video game development. Technical Communication Quarterly, 25(3), 155-166.

Mader, S. (2009, January). Your wiki isn’t Wikipedia: How to use it for Technical Communication. Intercom: The Magazine of the Society for Technical Communication, 56, 14–15.

Majchrzak, A., Faraj, S., Kane, G. C., & Azad, B. (2013). The contradictory influence of social media affordances on online communal knowledge sharing. Journal of Computer-Mediated Communication, 19(1), 38-55.

Manion, C. E., & Selfe, R. D. (2012). Sharing an assessment ecology: Digital media, wikis, and the social work of knowledge. Technical Communication Quarterly, 21(1), 25-45.

Massanari, A. (2017). #GamerGate and the fappening: How reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.

McGann, L. (2010, June 18). Knight news challenge: Is a wiki site coming to your city? Local Wiki will build software to make it simple. NiemanLab. Retrieved from https://www.niemanlab.org/2010/06/knight-news-challenge-is-a-wiki-site-coming-to-your-city-local-wiki-will-build-software-to-make-it-simple

Miller, C. R. (1984). Genre as social action. Quarterly Journal of Speech, 70(2), 151–167.

Mirel, B. (2008). New frontiers in usability for users’ complex knowledge work. Journal of Usability Studies, 3(4), 149–151.

Mortensen, T. E. (2016). Anger, fear, and games: The long event of #GamerGate. Games and Culture, 13(8), 787–806.


Muller, M. (2012, February). Lurking as personal trait or situational disposition: lurking and contributing in enterprise social media. In Proceedings of the ACM 2012 conference on computer supported cooperative work (pp. 253–256). ACM.

Nielsen, J. (2000). Designing web usability. Indianapolis, IN: New Riders.

Phillips, W. (2015). This is why we can’t have nice things: Mapping the relationship between online trolling and mainstream culture. Cambridge, MA: MIT Press.

Poole, E. S., & Grudin, J. (2010, July). A taxonomy of Wiki genres in enterprise settings. In Proceedings of the 6th international symposium on wikis and open collaboration (p. 14). ACM.

Potts, L. (2009). Using actor network theory to trace and improve multimodal communication design. Technical Communication Quarterly, 18(3), 281–301.

Potts, L. (2013). Social media in disaster response: How experience architects can build for participation. New York, NY: Routledge.

Potts, L., & Jones, D. (2011). Contextualizing experiences: Tracing the relationships between people and technologies in the social web. Journal of Business and Technical Communication, 25(3), 338–358.

Quinn, Z. (2017). Crash override: How GamerGate (nearly) destroyed my life, and how we can win the fight against online hate. New York, NY: Hachette.

Read, S., & Swarts, J. (2015). Visualizing and tracing: Research methodologies for the study of networked, sociotechnical activity, otherwise known as knowledge work. Technical Communication Quarterly, 24(1), 14–44.

Reagle Jr, J. M. (2015). Reading the comments: Likers, haters, and manipulators at the bottom of the web. Cambridge, MA: MIT Press.

Redish, J. (2010). Technical Communication and usability: Intertwined strands and mutual influences commentary. IEEE Transactions on Professional Communication, 53(3), 191–201.

Rivers, N., & Söderlund, L. (2016). Speculative usability. Journal of Technical Writing and Communication, 46(1), 125–146.

Rose, E. J., & Walton, R. (2015, July). Factors to actors: Implications of posthumanism for social justice work. In Proceedings of the 33rd Annual International Conference on the Design of Communication (p. 33). ACM.


Rude, C. D. (2009). Mapping the research questions in Technical Communication. Journal of Business and Technical Communication, 23(2), 174–215.

Saldaña, J. (2009). The coding manual for qualitative researchers. London, UK: Sage.

Sano–Franchini, J. (2018). Designing outrage, programming discord: A critical interface analysis of Facebook as a campaign technology. Technical Communication, 65(4).

Sauro, J. (2011). A practical guide to the system usability scale: Background, benchmarks & best practices. Denver, CO: Measuring Usability.

Sauro, J., & Lewis, J. R. (2009, April). Correlations among prototypical usability metrics: evidence for the construct of usability. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1609–1618). ACM.

Schryer, C. F. (1993). Records as genre. Written Communication, 10(2), 200–234.

Sherlock, L. (2009). Genre, activity, and collaborative work and play in World of Warcraft: Places and problems of open systems in online gaming. Journal of Business and Technical Communication, 23(3), 263–293.

Six, J. M., & Macefield, R. (2016). How to determine the right number of participants for usability studies. Retrieved from https://www.uxmatters.com/mt/archives/2016/01/how-to-determine-the-right-number-of-participants-for-usability-studies.php

Soares, M. (2018, August). Evaluation of usability of two therapeutic ultrasound equipment. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). Cham, Switzerland: Springer International.

Spinuzzi, C. (2003). Tracing genres through organizations: A sociocultural approach to information design. Cambridge, MA: MIT Press.

Spinuzzi, C. (2004, October). Four ways to investigate assemblages of texts: Genre sets, systems, repertoires, and ecologies. In Proceedings of the 22nd annual international conference on Design of communication: The engineering of quality documentation (pp. 110–116). ACM.

Spinuzzi, C. (2005). The methodology of participatory design. Technical Communication, 52(2), 163–174.

Spinuzzi, C., Hart–Davidson, W., & Zachry, M. (2006, October). Chains and ecologies: Methodological notes toward a communicative–mediational model of technologically mediated writing. In Proceedings of the 24th annual ACM international conference on Design of communication (pp. 43–50). ACM.

Still, B. (2010). Mapping usability: An ecological framework for analyzing user experience. In Usability of complex information systems: Evaluation of user interaction. Boca Raton, FL: CRC Press.

Still, B., & Koerber, A. (2010). Listening to students: A usability evaluation of instructor commentary. Journal of Business and Technical Communication, 24(2), 206–233.

Sun, N., Rau, P. P. L., & Ma, L. (2014). Understanding lurkers in online communities: A literature review. Computers in Human Behavior, 38, 110-117.

Thom–Santelli, J., Cosley, D. R., & Gay, G. (2009, April). What’s mine is mine: territoriality in collaborative authoring. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1481–1484). ACM.

Trice, M. (2015, July). Putting GamerGate in context: how group documentation informs social media activity. In Proceedings of the 33rd Annual International Conference on the Design of Communication (p. 37). ACM.

Trice, M. (2016). Evaluating multilevel user skill expression in a public, unsupervised wiki: A case study. IEEE Transactions on Professional Communication, 59(3), 261–273.

Tufekci, Z. (2013). “Not this one”: Social movements, the attention economy, and microcelebrity networked activism. American Behavioral Scientist, 57(7), 848–870.

Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. New Haven, CT: Yale University Press.

Tullis, T., Fleischman, S., McNulty, M., Cianchette, C., & Bergel, M. (2002, July). An empirical comparison of lab and remote usability testing of web sites. In Usability Professionals Association Conference.

Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. New York, NY: Oxford University Press.

Van Den Haak, M., De Jong, M., & Jan Schellens, P. (2003). Retrospective vs. concurrent think–aloud protocols: Testing the usability of an online library catalogue. Behaviour & Information Technology, 22(5), 339–351.


Van Mierlo, T. (2014). The 1% rule in four digital health social networks: An observational study. Journal of Medical Internet Research, 16(2), e33.

Venturini, T. (2010). Diving in magma: How to explore controversies with actor–network theory. Public Understanding of Science, 19(3), 258–273.

Vie, S. (2008). Digital divide 2.0: “Generation M” and online social networking sites in the composition classroom. Computers and Composition, 25(1), 9–23.

Walsh, L. (2010). Constructive interference: Wikis and service learning in the technical communication classroom. Technical Communication Quarterly, 19(2), 184-211.

Wikipedia. (n.d.). Five pillars. Retrieved from https://en.wikipedia.org/wiki/Wikipedia:Five_pillars

Wikipedia. (n.d.). Government. Retrieved from https://en.wikipedia.org/wiki/Wikipedia:Government

Wikipedia. (n.d.). Notability. Retrieved from https://en.wikipedia.org/wiki/Wikipedia:Notability

Wikipedia. (n.d.). Verifiability. Retrieved from https://en.wikipedia.org/wiki/Wikipedia:Verifiability#What_counts_as_a_reliable_source

Williams, A. (2003, October). Examining the use case as genre in software development and documentation. In Proceedings of the 21st annual international conference on Documentation (pp. 12–19). ACM.

Zittrain, J. (2008). The future of the Internet and how to stop it. New Haven, CT: Yale University Press.


APPENDIX A

OUTREACH SURVEY

1. Have you used a LocalWiki site, such as LocalWiki Denton, before? (Yes / No)
2. If so, how often have you viewed it in the last month? (0 / 1-2 / 3-9 / 10 or more)
3. How often have you edited it in the last month? (0 / 1-2 / 3-9 / 10 or more)
4. How long have you been visiting the site? (Less than a month / 1-6 months / 6 months to a year / Over a year)
5. Have you used a wiki before? (Yes / No)
6. If so, how often have you viewed one in the last three months? (0 / 1-2 / 3-9 / 10 or more)
7. How often have you edited one in the last three months? (0 / 1-2 / 3-9 / 10 or more)
8. What wikis do you visit?
9. What wikis do you edit?
10. Have you managed a MySQL database before? (Yes / No)
11. Can you program in PHP? (Yes / No)
12. Have you ever created or administered your own wiki of any type? (Yes / No)
13. Would you be willing to participate in a study of 20-30 minutes? (Yes / No)
14. If so, please provide the best means to contact you (your phone number, email address, Skype ID, etc.).


APPENDIX B

LOCALWIKI DENTON TEST PLAN

Overview

The usability testing will evaluate LocalWiki Denton, an installation of the LocalWiki system, from the perspective of members of the Denton community. The testing will compare the experience across three user profiles to determine how experience levels affect performance within the LocalWiki system.

To do this, 15 users will be tested: 5 new users, 5 experienced users, and 5 design users. These profiles are explained in the User Profiles section. Tests will be conducted in environments native to the users: a public library and a home setting. The testing includes pre-test and post-task surveys, task analysis of the tests, and an open-question retrospective recall session.

The project uses tasks that represent the concerns of the LocalWiki Denton community to evaluate performance across experience levels. As such, tasks were designed from a series of interviews conducted with Denton community members as well as with the designers of the LocalWiki system.

Researcher Goals

The primary goal of the user testing is to examine how users at different levels of experience interact within a wiki system that lacks an organizational body. Additionally, the user testing will be combined with interviews from the community in an attempt to define the community wiki as a genre.

User Profiles

The study seeks users to match three experience profiles: new, experienced, and design user. Users selected will share characteristics that connect them to the Denton community:

Shared profile:
• Must live, work, or go to school in Denton, Texas.
• Must have previous wiki experience.

New User profile:
• Had never used LocalWiki Denton before the test
• Had wiki experience (reading or editing)
• Lived, worked, or went to school in Denton

Experienced User profile:
• Had used LocalWiki Denton before the test
• Lacked experience with wiki design elements (PHP, MySQL)
• Had wiki experience (reading or editing)
• Lived, worked, or went to school in Denton

Design User profile:
• Had used LocalWiki Denton before the test
• Possessed experience with wiki design elements (PHP, MySQL)
• Had wiki experience (reading or editing)
• Lived, worked, or went to school in Denton

In the outreach, preference will be given to women in an attempt to reach gender balance. No age or ethnicity preferences are sought within the study.

Methodology

The testing primarily involves observational data from the six tasks. Morae will be used to create, record, and mark each test. Testing looks to record workflow choices for users and to record errors related to interface design, user path choices, and actions leading to incomplete or failed tasks. Fastest-path choices will not be counted as errors because the study treats workflow as flexible. However, path choices that lead to dead ends will be counted.

Post-task surveys will be administered. These will help compare user expectation against performance for each task. The surveys are included with this plan.

Metrics will be recorded in Morae. The metrics will be used to inform analysis of the workflow choices for users. Metrics include:

• Time on Task
• Clicks
• SUS scores
• Mouse Movement
• Failure Rate
• Error Marking, as defined above
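Although the test plan itself does not specify the arithmetic, the SUS scores listed above are conventionally computed from the ten-item questionnaire as follows: odd items contribute (score − 1), even items contribute (5 − score), and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch (the example response set is invented for illustration):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert ratings.

    responses: list of ten ratings for SUS items 1-10, in order.
    Odd-numbered items add (rating - 1); even-numbered items add
    (5 - rating); the total is multiplied by 2.5 to reach 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for item, rating in enumerate(responses, start=1):
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical, moderately positive response set.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 3]))  # → 77.5
```

Scores above roughly 68 are commonly read as above-average usability, which is why the scale is useful for comparing the three user profiles in this plan.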

Retrospective recall will be used briefly to inquire about any oddities during the session. Recalls will be recorded in written notes due to IRB limits.

Testing Environment


The environmental setup includes two laptops: one used by the participant and one acting as the observer station for the researcher. One class of testing occurred in a library setting with the observer positioned behind the participant. The other class occurred with users in their homes, using Google Remote to record with Morae. In both cases, the researcher did not speak to the participant at any point during the usability test except to answer yes-or-no questions about testing protocol or to repeat task descriptions. Surveys and task descriptions were all provided via automated text boxes within Morae.

Testing Protocols

The researcher’s primary role during testing is to facilitate the start of the test and mark issues as they occur in Morae Observer. Prompts are automated through Morae so that the researcher can have minimal interaction during the testing. Yes-or-no questions about testing protocols will be answered, and prompts will be repeated upon request.

After the testing is completed, retrospective recall questions will be asked about any unusual activity identified within the testing process.

Scenarios

Usability Testing consists of six tasks.

Task                                                          Estimated     Estimated    Estimated
                                                              Survey Time   Task Time    Total Time

1. Find the Hours for Open Mic Night at The Garage                 2             2            4
2. Change the Date Found on “The Garage” Page                      2             2            4
3. Add a Link from “The Garage” Page to Any Other Page             2             2            4
4. Find the Businesses on The Square                               2             2            4
5. Add a New Page for a Location to the Wiki
   (Can Be a Fake Location)                                        2             2            4
6. Address Any Concerns You Have with the
   Cranky Goose Page                                               2             5            7

Evaluation Methods

The primary goal was to collect data for two purposes: first, to compare performance between groups based upon success rate, types of errors, frequency of errors, and time on task; second, to create a workflow for each task and each user group via functions chosen and pages viewed, adjusted for errors. This analysis was done after the testing process was complete.

Data Storage and Analysis

Data from the tests was stored in Morae and Excel. Analysis of observational data was done primarily in Morae, with metrics evaluated in Excel and SPSS.
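As an illustration of the kind of per-group metric summary the Excel/SPSS analysis produces (the actual analysis used those tools; the times below are invented placeholder values, not study data), the comparison of time on task across the three user profiles might be sketched as:

```python
import statistics

# Hypothetical time-on-task values (seconds) for one task, grouped by the
# three user profiles defined in the test plan. Illustrative only.
times = {
    "new":         [95, 120, 110, 140, 100],
    "experienced": [70, 85, 60, 90, 75],
    "design":      [55, 60, 50, 65, 58],
}

# Summarize each group with a mean and sample standard deviation,
# the same descriptive statistics a spreadsheet analysis would report.
for group, values in times.items():
    print(f"{group:>11}: mean={statistics.mean(values):.1f}s, "
          f"sd={statistics.stdev(values):.1f}s")
```

A follow-up inferential step (e.g., a one-way ANOVA in SPSS across the three groups) would then test whether such between-group differences are statistically meaningful.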


APPENDIX C

INTERVIEW QUESTIONS

Novice User Questions

1. What do you feel is the purpose of a local wiki?
2. What would/do you do on a local wiki site?
3. What problems would you expect on a local wiki site?
4. What features do you think are most important on a wiki?
   a. What features do you think are least important?
   b. What are your favorite features on any wiki you have used? Why?
   c. What are your least favorite features? Why?
   d. How do you collaborate on a wiki?
   e. How do you resolve conflicts?
   f. What makes a good wiki entry?
5. Do you have any experience with programming or maintaining a database?
   a. If so, what is your background?
   b. Have you programmed for a wiki or maintained a wiki database?
6. How would you define your level of experience with a wiki?
7. Do you have anything else to add?

Expert User Questions

1. What do you feel is the purpose of a local wiki?
2. What would/do you do on a local wiki site?
3. What problems would you expect on a local wiki site?
4. What features do you use most on LocalWiki Denton?
   a. What features do you use least?
   b. What are your favorite features? Why?
   c. What are your least favorite features? Why?
   d. How do you collaborate on LocalWiki Denton?
   e. How do you resolve conflicts?
   f. What makes a good LocalWiki Denton entry?
5. How did you find LocalWiki Denton?
6. Do you have any experience with programming or maintaining a database?
7. How would you define your level of experience with a wiki?
8. Do you have anything else to add?


Design User Questions

1. What do you feel is the purpose of a local wiki?
2. What do users do on a local wiki site?
3. What problems would you expect on a local wiki site?
4. What features do users use most on a LocalWiki?
   a. What features do they use least?
   b. What are your favorite features? Why?
   c. What are your least favorite features? Why?
   d. How do users collaborate on a LocalWiki?
   e. How do users resolve conflicts?
   f. What makes a good LocalWiki entry?
5. How do users find LocalWiki Denton?
6. Do you have any experience with programming or maintaining a database?
   a. If so, what is your background?
   b. Have you programmed for a wiki or maintained a wiki database?
7. How would you define your level of experience with wiki software?
8. What are the key features of the LocalWiki system?
9. How does LocalWiki help users create a community space?
10. Which features are critical for users to use LocalWiki?
11. Which features do you feel work best?
12. Which need the most improvement?
13. What is the most important aspect of LocalWiki for a user?
14. What is the key to successfully making a good edit on LocalWiki?
15. Do you have anything else to add?
