
Newspapers | Lexile Measure: 1200L

Wall Street Journal Mar 6, 2017, p. B.1

Copyright © 2017 Dow Jones & Company, Inc. Reproduced with permission of copyright owner. Further reproduction or distribution is prohibited without permission. All rights reserved.

What Tech Firms Can Do to Stop Internet Trolling

By Christopher Mims

Admit it: At one point or another, you have probably said something unpleasant online that you later regretted -- and that you wouldn't have said in person. It might have seemed justified, but to someone else, it probably felt inappropriate, egregious or like a personal attack.

In other words, you were a troll.

New research by computer scientists from Stanford and Cornell universities suggests this sort of thing -- a generally reasonable person writing a post or leaving a comment that includes an attack or even outright harassment -- happens all the time. The most likely time for people to turn into trolls? Sunday and Monday nights, from 10 p.m. to 3 a.m.

Trolling is so ingrained in the internet that, without even noticing, we've let it shape our most important communication systems. One reason Facebook provides elaborate privacy controls is so we don't have to wade through drive-by comments on our own lives.

Countless media sites have turned off comments rather than attempt to tame the unruly mob weighing in below articles. Others have invested heavily in moderation, and are now adopting tools like algorithmic filtering from Jigsaw, a division of Google parent Alphabet Inc., which uses artificial intelligence to determine how toxic comments are. YouTube and Instagram both have similar filtering. (Snapchat is, arguably, built around never letting trolls see or give feedback on your posts in the first place.)
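For context, Jigsaw exposes this toxicity scoring through its Perspective API. Below is a minimal sketch, in Python, of how a publisher might screen an incoming comment. The endpoint and request shape follow Perspective's public documentation, but the 0.8 hold-for-review threshold is an illustrative assumption, not anything the article specifies.

    # Minimal sketch: scoring a comment with Jigsaw's Perspective API.
    # The endpoint and request shape follow Perspective's public docs;
    # the 0.8 hold-for-review threshold is an illustrative assumption.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; issued via Google Cloud
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        """Return Perspective's summary TOXICITY score in [0, 1]."""
        body = {
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, json=body, timeout=10)
        resp.raise_for_status()
        return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    score = toxicity_score("You are an idiot and everyone knows it.")
    if score > 0.8:  # illustrative threshold
        print(f"Held for moderation (toxicity {score:.2f})")
    else:
        print(f"Published (toxicity {score:.2f})")

A site could route flagged comments to human moderators rather than rejecting them outright, which is roughly how the hybrid moderation the article describes tends to work.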

Then there's the one real public commons left on the internet -- Twitter, which is in a pitched battle with trolls that, on most days, the company appears to be losing.

But if the systems we use are encouraging us to be nasty, how far can developers go to reverse the trend? Can we ever achieve the giant, raucous but ultimately civil public square that was the promise of the early internet? "It's tempting to believe that all the problems online are due to someone else, some really sociopathic person," says Michael Bernstein, an expert in human-computer interaction at Stanford University, and one of four collaborators on the research. "Actually, we all have to own up to this."

Die-hard internet trolls do exist and may instigate trolling in others, these researchers say. Harassment, stalking, threats of violence, psychological terrorism and serially abusive behavior online are real and must be stopped. But by focusing on the most egregious repeat offenders, internet companies have missed the forest for the trees.

A significant proportion of trolling comes from people who haven't trolled before, the researchers say. To determine this, they analyzed 16 million comments from CNN's website. The researchers defined trolling as swearing, harassment and personal attacks, but emphasize that trolling is ultimately defined by each community and differs from one community to the next.

One thing that drives people to troll is, unsurprisingly, their mood. Studies have shown that people's moods, as revealed by the tone of their posts on Twitter, follow a remarkably predictable pattern: Relatively positive in the morning, and more negative as the day wears on. You can guess the weekly pattern: Mondays are the worst, and people seem to feel better on the weekend.

There's an almost identical circadian rhythm for trolling, according to the research team.

The researchers also uncovered a pile-on effect. Being trolled in another comment thread, or seeing trolling further up in a thread, makes people more likely to join in.

In a controlled experiment, the Stanford and Cornell researchers established that together, mood and exposure to trolling can make a person twice as likely to troll.

Internet companies don't have much power over our mood swings. But they do control the design of the systems they create. Far from operating neutral "platforms" for online discussion, they can shape the discussions they host.

Luckily, there are solutions that go beyond live human moderators. In February, Twitter rolled out a new feature that blocks the tweets of an abusive user, as determined by Twitter's algorithms, from being seen by anyone but the harasser's followers for about 12 hours.

Both Facebook and Google have their own systems for flagging abusive behavior and an escalating ladder of punishments for those who commit it. Jigsaw recently rolled out its AI, but a person familiar with the workings of the system said that it remains far from perfect, generating many false positives. Even a perfect "penalty box" approach to comment moderation doesn't go far enough. The internet needs ways to encourage users' empathy and capacity for self-reflection, which are more lasting antidotes to online hostility.
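The article doesn't describe how these escalation ladders are built internally. As a purely hypothetical sketch, a "penalty box" can be modeled as a per-user strike counter mapped onto increasingly severe, mostly timed restrictions; none of the tiers below come from Facebook, Google or Twitter, though the 12-hour mute echoes the Twitter timeout mentioned above.

    # Hypothetical escalation ladder for a "penalty box". The tiers are
    # invented for illustration; the 12-hour mute merely echoes the
    # Twitter timeout described above.
    from datetime import datetime, timedelta

    LADDER = [
        ("warning", timedelta(0)),
        ("12-hour mute", timedelta(hours=12)),
        ("7-day mute", timedelta(days=7)),
        ("permanent ban", None),  # None = no expiry
    ]

    class PenaltyBox:
        def __init__(self):
            self.strikes = {}      # user_id -> confirmed violations
            self.muted_until = {}  # user_id -> datetime, or None if permanent

        def record_violation(self, user_id):
            """Apply the next rung of the ladder; return its label."""
            step = min(self.strikes.get(user_id, 0), len(LADDER) - 1)
            self.strikes[user_id] = step + 1
            label, duration = LADDER[step]
            if duration is None:
                self.muted_until[user_id] = None
            elif duration > timedelta(0):
                self.muted_until[user_id] = datetime.utcnow() + duration
            return label

        def can_post(self, user_id):
            until = self.muted_until.get(user_id, datetime.min)
            return until is not None and datetime.utcnow() >= until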

Software from Civil, a Portland, Ore., startup, forces anyone who wants to comment to first evaluate three other comments for their level of civility. Initially, the third comment people are asked to review is their own, which they have the option to revise -- and they often do, according to Civil co-founder Christa Mrgan. In this way, the system accomplishes the neat trick of helping readers see their words as someone else would.
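Civil hasn't published its implementation, but the workflow the article describes is simple enough to sketch. The hypothetical Python rendering below assumes two peer reviews plus the author's own draft and a binary civil/uncivil rating; the function names and rating scale are invented for illustration.

    # Hypothetical sketch of Civil's rate-before-you-post flow, as the
    # article describes it. Function names and the binary rating scale
    # are assumptions, not Civil's actual product or API.
    import random

    def rate_civility(text):
        """Ask the would-be commenter to judge a comment's civility."""
        print(f"\nRate this comment: {text!r}")
        answer = input("Civil or uncivil? [c/u] ").strip().lower()
        return "uncivil" if answer == "u" else "civil"

    def submit_comment(draft, recent_comments):
        """Gate a new comment behind three civility reviews."""
        # Step 1: review two comments written by other people.
        for peer in random.sample(recent_comments, k=2):
            rate_civility(peer)
        # Step 2: the third comment is the author's own draft, shown
        # as if it were someone else's, with the option to revise.
        if rate_civility(draft) == "uncivil":
            draft = input("Revise your comment before posting: ") or draft
        return draft  # accepted for publication

The design choice worth noting is the ordering: by the time commenters see their own words, they have already adopted a reviewer's mindset, which is the self-reflection effect Mrgan describes.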

Social networks, publishers and other platforms have an obligation to think not merely about how to cope with online abuse, but about how to elevate the level of the discussions they host. While it's more of a possibility than ever -- especially with the rise of AI -- the question is, how motivated is the tech industry to accomplish it?

Summary

"Trolling is so ingrained in the internet that, without even noticing, we've let it shape our most important communication systems....But if the systems we use are encouraging us to be nasty, how far can developers go to reverse the trend? Can we ever achieve the giant, raucous but ultimately civil public square that was the promise of the early internet?" (Wall Street Journal) The author of this viewpoint article argues that technology companies have an obligation to stop trolling.

Citations

MLA 8

Mims, Christopher. "What Tech Firms Can Do to Stop Internet Trolling." Wall Street Journal, 6 Mar. 2017, p. B1. SIRS Issues Researcher, https://sks.sirs.com.

APA 6

Mims, C. (2017, March 6). What tech firms can do to stop internet trolling. Wall Street Journal, p. B1. Retrieved from https://sks.sirs.com

Related Subjects: Artificial intelligence; Online chat groups; Communication and technology; Computer algorithms; Harassment; Human behavior; Human-computer interaction; Internet, Hate speech; Internet, Psychological aspects; Internet, Social use; Mood (Psychology); Online etiquette; Internet users; Digital media; Internet companies; Internet filtering software; Cyberbullying; Internet message boards; Online social networks
