Beyond the Lulz: Black Hat Trolling, White Hat Trolling, and Hacking the Attention Landscape
Matt Goerzen & Jeanna Matthews
Media Manipulation Initiative | Data & Society
www.datasociety.net

INTRO

August 1, 1998 @ DEF CON
August 12, 2017 @ THE INTERNET
First: Mode of governing other users on Usenet
Later:
• Imageboard users raiding for “lulz”
• Hacktivists drawing media attention to activist causes
• Networked harassment and abuse (often w/ political intent)
• Using controversy to sow chaos and amplify disinformation
• Baiting journalists into reporting false information
• Manipulating popular opinion

To Discuss & Differentiate… With You
1. Media (news, social) as Security Systems
2. A Range of Techniques
3. Possible Goals and Outcomes
4. Various Targets
5. Potential Vulnerabilities
6. Black Hat, Gray Hat, and White Hat Dispositions
7. Ethics & Threat Models
8. Trolling: A Bug or a Feature?
9. The Role of the Troll?

40+ CASE STUDIES
• Trolling is to socio-technical systems what hacking is to technical systems
• “Attention hacking” to set the media agenda, directing journalists, social media users, and others to information that serves their interest
• New media = new vulnerabili es
• Advertisement & “If it bleeds, it leads” = unpatched vulnerabilities

• Source Hacking / Journobaiting
• Controlled Opposition
• Attentional Honeypots
• Driving a Wedge / Divide & Conquer
• Keyword Squatting
• Concern Trolling
• Capturing the Narrative
• Brigading / Dogpiling
• Cognitive Denial of Service
• Sockpuppetry
• Brute Force Harassment
• Jiu Jitsu / Self-Victimization
• Ironical Bait & Switch
• Doxing
• Gaming Databases / Algorithmic Control
• Context Distortion
• Memetic Trojan Horses
• Deep Fakes / Photoshops

• Troll
• Bait
• Target
• Third-party Interpreter
• Intended outcome

• Lulz
• Politics
• Political Maneuvering
• Incitement
• Chaos / Scorched Earth
• Response seeking
• Perception Management
• Community building
• Denial of Service
• Community governance
• Lulz / Drama
• Demoralization / Outrage
• Agenda Setting / Priming
• Profit???

• Draw attention to ignored issues (“agenda setting”)
• Waste attention and distract from issues
• Direct attention to pressure entities (“security economics”)
• Draw a telling response (transparency and accountability)
• Deny a response
• Diminish an opponent’s resources or morale – perhaps to exhaustion

• Peers
• Journalists (and their readers…)
• Social Media Users (potential allies)
• Social Media Users (perceived threats)
• Politicians & Regulators
• Companies and vendors

• Clickbait / “Bleeding Leads” / Breaking News
• Network curation
• Asymmetric cost to debunk
• Automated amplification (bots)
• Adtech
• Context collapse / Poe’s Law / Anonymity
• Push private agenda
• Counter to target’s interest
• Non-prescribed engagement

• Non-prescribed engagement
• Sense of public interest
• Demonstrate attack surface (proof of concept)
• Increase pressure on gatekeepers
• Troll to Reveal?

• Reveal vulnerabilities selectively to platforms
• Disable possibility of non-prescribed engagement through research and education
• Reduce attack surface
• Align system with intent
• Blue team platforms… for profit?

• Is “I did it for the lulz” sufficient?
• Is “anti-” enough?
• Who judges?
• Does it look different from different perspectives?

• Where do we locate the public good?
• What are the threat models?
• Right to free speech with your one mouth?
• Right to amplification of that speech?
• Monopolization of attention = DoS of others’ free speech?

• Finite lifetime problem
• Can’t verify/investigate every claim
• Inflammatory lies are stickier and more profitable
• Gatekeeping undercounts
• No gatekeeping overcounts
• Do-it-yourself gatekeeping tends to tribalism?
• Old gatekeeping wasn’t working.

Rules for sharing our attention landscape? Long-term goals? Short-term changes?

• Gatekeeping, but not too much? Cracks are where the light gets through!
• Platform re-design
• Media Literacy
• Trust and Verification
• Disrupting Economic Incentives
• Banning Accounts
• Regulatory Approaches
• Don’t Feed the Trolls / Strategic Silence

• Doxing “bad actor” trolls
• “Vulnerability” disclosure
• Shaming or trolling platforms and newsrooms to change
• P2P moderation
• White lists
• Call outs
• “Countermemes” or “discursive patches” like Godwin’s Law or “Don’t feed…”
• Redirection / board sliding

• Inform interventions by:
• “Security research” / “Penetration testing”
• Informing media companies / platforms of vulnerabilities?
• Identifying bad actors and their incentives?

• Intervening directly by:
• Putting pressure on media companies to change?
• Demonstrating attack vectors as “proofs of concept”?
• Trolling trolls?