Persuasion by Strategic Obfuscation

PDF, 16 pages, 1,020 KB

FUNDAÇÃO GETULIO VARGAS
ESCOLA DE ECONOMIA DE SÃO PAULO

LUIS HENRIQUE RIBEIRO LINHARES

PERSUASION BY STRATEGIC OBFUSCATION

São Paulo, 2021

Dissertation submitted to the Escola de Economia de São Paulo as a requirement for the degree of Master in Business Economics (Economia de Empresas). Advisor: Daniel Monte.

Cataloguing record (prepared by Raphael Figueiredo Xavier, CRB SP-009987/O, Biblioteca Karl A. Boedecker, Fundação Getulio Vargas - SP): Linhares, Luis Henrique Ribeiro. Persuasion by strategic obfuscation / Luis Henrique Ribeiro Linhares. - 2021. 50 pp. Advisor: Daniel Monte. Master's thesis (CMEE) - Fundação Getulio Vargas, Escola de Economia de São Paulo. Subjects: 1. Persuasion (rhetoric). 2. Game theory. 3. Bayesian statistical decision theory. CDU 330.115.

Date of approval: 02/06/2021. Examination committee: Prof. Dr. Daniel Monte, FGV-EESP (advisor); Prof. Dr. Henrique Roscoe de Oliveira, FGV-EESP; Prof. Dr. Lucas Jóver Maestri, FGV-EPGE.

Acknowledgements

I thank my family for the unconditional support they gave me when I decided to change career direction and throw myself into the life of a master's student. To my parents, for always believing in me; to my sister, for helping me not to give up; and to Lucky, for being there in the stressful hours. To my advisor, Prof. Daniel Monte, for making this work possible and for always being available to help with the doubts and challenges we ran into. To professors Henrique de Oliveira and Lucas Maestri, for agreeing to serve on my committee and for their valuable advice. To professors Braz Camargo and Victor Filipe, for helping me through the application process. To the people from the study room: we were together for less time than we should have been, but you showed me that doing the master's was one of the best decisions of my life. It is not every day that one finds a group that enjoys discussing economic models in front of a blackboard. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. Any errors and omissions are entirely my own responsibility.

Resumo

We study a persuasion problem in which a principal chooses how costly it will be for the agent to obtain and process new information. Obfuscation or transparency is thus endogenous to the problem. Specifically, we combine a rational-inattention framework with communication games and Bayesian persuasion problems and show that, under some conditions, the optimal induced cost of processing information is non-monotonic, that is, it is optimal to confuse "just a little". Our model therefore helps explain why in some cases firms write contracts with so many clauses, while in other cases contracts are simple and transparent. It also helps explain why some companies offer products with many features, or with features different from their competitors', so as to make choosing a product harder. Obfuscation has welfare implications that policy makers should take into account. Keywords: communication games, information design, Bayesian persuasion, rational inattention, strategic obfuscation.

Abstract

We study a persuasion problem in which the principal chooses how costly it will be for the agent to obtain and process new information. Obfuscation or transparency will be endogenous to the decision problem at hand. Specifically, we combine a rational-inattention framework with communication games and Bayesian persuasion problems, and we show that under some conditions the optimal induced cost of processing information is non-monotonic, that is, it is optimal to confuse "just a little". Thus, our model helps explain why in some environments firms write terms and conditions with so many clauses, whereas in other contexts contracts might be simple and transparent. It also helps explain why some companies offer varying features that make choosing a product more difficult than it could be. Obfuscation has clear welfare implications that policy makers should bear in mind. Keywords: communication games, information design, Bayesian persuasion, rational inattention.

List of Figures

Figure 1 – Example of Solution to the Receiver's Problem
Figure 2 – Expected Indirect Utility for the Informer
Figure 3 – Expected Indirect Utility for the Receiver
Figure 4 – Dynamics of an increase in k
Figure 5 – Expected payoffs for k > k̂
Figure 6 – What happens when µ0 = µ_indf
Figure 7 – Payoffs as a function of beliefs for the Customer
Figure 8 – Payoffs as a function of k for the Customer
Figure 9 – The function the Receiver maximizes
Figure 10 – The maximizing function and its concavification for the Customer for k = 1 and µ0 = 0.75
Figure 11 – Customer's maximizing function and solution. Left: case for k = 0.5. Right: case for k = 2
Figure 12 – Producer's indirect utility and the chosen split for k = 1
Figure 13 – Producer's indirect utility from k
Figure 14 – Producer's indirect utility from k for the new case in which c = 2
Figure 15 – Payoffs as a function of beliefs for COM
Figure 16 – Payoffs as a function of beliefs for MON
Figure 17 – COM maximization function for k = 0.2
Figure 18 – MON expected payoffs from the split of posteriors induced by k = 0.2
Figure 19 – Plot of Ṽ_0.7
Figure 20 – Indirect utility for the Informer with µ0 = 0.25
Figure 21 – Illustration of auxiliary variables
Figure 22 – Example of a concave function

List of Tables

Table 1 – Payoffs table for the Better or Worse Product game
Table 2 – Payoffs table with numerical values
Table 3 – Payoffs table with the new value of c
Table 4 – Payoffs table for the Monopolist game
Table 5 – Payoffs table for the Monopolist game with values for parameters

Contents

1 INTRODUCTION
2 RELATED LITERATURE
3 THE MODEL
3.1 The Receiver's Problem
3.2 The Informer's Problem
4 PRACTICAL EXAMPLES
4.1 Better or Worse Product
4.2 Monopoly
5 CLASSIFYING THE INFORMER'S PROBLEM
6 CONCLUSIONS
Bibliography
APPENDIX A – AUXILIARY PROOFS

1 Introduction

During World War II, the 23rd Headquarters Special Troops was an American military unit with a different purpose than most: deception. A well-kept secret until the 1990s, this "Ghost Unit" sowed confusion amongst the Axis troops through the use of inflatable tanks, fake radio messages and all sorts of clever tricks. Such diversionary tactics have long been a staple of warfare - even before the Greeks pretended to go home and left a horse as a present for the Trojans - but this sort of behaviour is also more pervasive in our daily lives than we might think: opposing lawyers, when required to disclose evidence, may send not only the pertinent documents but many more, to make it harder to discern what is relevant and what is not; printers can be sold at a loss to attract consumers, while disguising the ink sales that come with huge margins; and let him who has never blindly agreed to those long terms and conditions cast the first stone.

More recently, new research has shown that obfuscating ("making things less clear") behaviour arises even in more traditional economic contexts. Contrary to what would be expected in a competitive market, several papers document evidence of heterogeneous prices for similar products as firms engage in obfuscation to artificially create a difference in the eyes of the consumer. The mechanisms are numerous: offering different product sizes and flavors (Richards et al., 2019), making it more difficult to compare prices online (Ellison and Ellison, 2009), and hiding the price of add-ons (Ellison, 2005) are just a few examples. Other studies lay out how obfuscation may be sustained: Gabaix and Laibson (2006) illustrate cases in which firms have no incentive to educate consumers about their competitors, and Jin et al. (2018) comment on how even companies with good products and services may try to confuse consumers, as this induces systematic mistakes that they can exploit. No matter the method, agents seem to prey on customers' inattention as if they were on opposing sides of a battlefield.

On the other hand, there are situations in which agents are in agreement. A benign doctor wants their patient to choose the best possible
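The abstract and figure list above suggest the model's two-level structure: the Receiver optimally splits the prior belief into posteriors subject to an information-processing cost scaled by k, and the Informer chooses k itself. A rough schematic of that structure, purely for illustration, with an entropy-based (rational-inattention) cost as an assumed functional form (the notation is not taken from the thesis):

\[
V_R(k;\mu_0)\;=\;\max_{\tau:\ \mathbb{E}_{\mu\sim\tau}[\mu]=\mu_0}\ \mathbb{E}_{\mu\sim\tau}\big[\hat{u}(\mu)\big]\;-\;k\Big(H(\mu_0)-\mathbb{E}_{\mu\sim\tau}\big[H(\mu)\big]\Big),
\]
\[
k^{*}\;\in\;\arg\max_{k\ge 0}\ \mathbb{E}_{\mu\sim\tau^{*}(k)}\big[\hat{v}(\mu)\big],
\]

where τ is a distribution over posterior beliefs µ averaging back to the prior µ0, H is the entropy function, û and v̂ are the Receiver's and the Informer's indirect utilities over beliefs, and τ*(k) is the split of posteriors the Receiver chooses at cost level k. Read this way, the non-monotonicity claim in the abstract says the Informer's expected payoff need not be monotone in k, so an interior "confuse just a little" level of k can be optimal.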
Recommended publications
  • BPGC at SemEval-2020 Task 11: Propaganda Detection in News Articles with Multi-Granularity Knowledge Sharing and Linguistic Features Based Ensemble Learning
    BPGC at SemEval-2020 Task 11: Propaganda Detection in News Articles with Multi-Granularity Knowledge Sharing and Linguistic Features based Ensemble Learning
    Rajaswa Patil (Department of Electrical & Electronics Engineering), Somesh Singh and Swati Agarwal (Department of Computer Science & Information Systems), BITS Pilani K. K. Birla Goa Campus, India
    Abstract: Propaganda spreads the ideology and beliefs of like-minded people, brainwashing their audiences, and sometimes leading to violence. SemEval 2020 Task 11 aims to design automated systems for news propaganda detection. Task 11 consists of two sub-tasks: Span Identification, in which, given any news article, the system tags the specific fragments that contain at least one propaganda technique; and Technique Classification, in which a given propagandist statement is classified amongst 14 propaganda techniques. For sub-task 1, we use contextual embeddings extracted from pre-trained transformer models to represent the text data at various granularities and propose a multi-granularity knowledge sharing approach. For sub-task 2, we use an ensemble of BERT and logistic regression classifiers with linguistic features. Our results reveal that the linguistic features are reliable indicators for covering minority classes in a highly imbalanced dataset.
    1 Introduction: Propaganda is biased information that deliberately propagates a particular ideology or political orientation (Aggarwal and Sadana, 2019). Propaganda aims to influence the public's mentality and emotions, targeting their reciprocation due to their personal beliefs (Jowett and O'Donnell, 2018). News propaganda is a sub-type of propaganda that manipulates lies, semi-truths, and rumors in the disguise of credible news (Bakir et al., 2019).
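    A minimal sketch of the kind of pipeline the excerpt above describes for sub-task 2: mean-pooled contextual embeddings from a pre-trained transformer are concatenated with simple hand-crafted linguistic features and fed to a logistic-regression classifier. The model name, feature choices, and toy data are illustrative assumptions, not the authors' code.

        import numpy as np
        import torch
        from transformers import AutoModel, AutoTokenizer
        from sklearn.linear_model import LogisticRegression

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        encoder = AutoModel.from_pretrained("bert-base-uncased")

        def embed(sentence: str) -> np.ndarray:
            # Mean-pooled contextual embedding of the sentence (768 dims for BERT-base).
            inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
            with torch.no_grad():
                hidden = encoder(**inputs).last_hidden_state
            return hidden.mean(dim=1).squeeze(0).numpy()

        def linguistic_features(sentence: str) -> np.ndarray:
            # Toy linguistic features: token count, exclamation marks, share of all-caps tokens.
            tokens = sentence.split()
            caps_ratio = sum(t.isupper() for t in tokens) / max(len(tokens), 1)
            return np.array([len(tokens), sentence.count("!"), caps_ratio], dtype=float)

        texts = [
            "They will stop at NOTHING to destroy our way of life!",
            "The committee released its annual budget report on Tuesday.",
        ]
        labels = [1, 0]  # 1 = contains a propaganda technique, 0 = does not (toy labels)

        X = np.stack([np.concatenate([embed(t), linguistic_features(t)]) for t in texts])
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        print(clf.predict(X))

    Swapping in embeddings computed at several granularities (word, sentence, article) would mimic the multi-granularity idea at a high level.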
  • Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
    Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
    Rong-Ching Chang, Chun-Ming Lai, Kai-Lai Chang, and Chu-Hsing Lin, Tunghai University, Taiwan
    Abstract: Digital media, identified as computational propaganda, provides a pathway for propaganda to expand its reach without limit. State-backed propaganda aims to shape the audiences' cognition toward entities in favor of a certain political party or authority. Furthermore, it has become part of modern information warfare used in order to gain an advantage over opponents. Most of the current studies focus on using machine learning, quantitative, and qualitative methods to distinguish whether a certain piece of information on social media is propaganda. They are mainly conducted on English content, and very little research addresses Chinese Mandarin content. Going one step further than propaganda detection, we want to provide more fine-grained information on the propaganda techniques that are applied.
    ACM Reference Format: Rong-Ching Chang, Chun-Ming Lai, Kai-Lai Chang, and Chu-Hsing Lin. 2021. Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China. In KDD '21: The Second International MIS2 Workshop: Misinformation and Misbehavior Mining on the Web, Aug 15, 2021, Virtual. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
    1 Introduction: Propaganda has the purpose of framing and influencing opinions. With the rise of the internet and social media, propaganda has adopted a powerful tool for its unlimited reach, as well as multiple forms of content that can further drive engagement online and
  • SkypeMorph: Protocol Obfuscation for Tor Bridges
    SkypeMorph: Protocol Obfuscation for Tor Bridges
    Hooman Mohajeri Moghaddam, Baiyu Li, Mohammad Derakhshani, Ian Goldberg, Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada {hmohajer,b5li,mderakhs,iang}@cs.uwaterloo.ca
    Abstract: The Tor network is designed to provide users with low-latency anonymous communications. Tor clients build circuits with publicly listed relays to anonymously reach their destinations. However, since the relays are publicly listed, they can be easily blocked by censoring adversaries. Consequently, the Tor project envisioned the possibility of unlisted entry points to the Tor network, commonly known as bridges. We address the issue of preventing censors from detecting the bridges by observing the communications between them and nodes in their network. We propose a model in which the client obfuscates its messages to the bridge in a widely used protocol over the Internet. We investigate using Skype video calls as our target protocol, and our goal is to make it difficult for the censoring adversary to distinguish between the obfuscated bridge connections and actual Skype calls using statistical comparisons. We have implemented our model as a proof-of-concept pluggable transport for Tor, which is available under an open-source licence. Using this implementation we observed the obfuscated bridge communications, compared them with those of Skype calls, and presented the results.
    Figure 1: This graph from metrics.torproject.org shows the number of users directly connecting to the Tor network from China, from mid-2009 to the present. It shows that, after 2010, the Tor network has been almost completely blocked from clients in China who do not use bridges. [41]
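    A toy illustration of the kind of statistical comparison mentioned above, using a two-sample Kolmogorov-Smirnov test on packet-size samples. The synthetic numbers and the choice of test are assumptions for illustration (and assume SciPy is available), not the paper's actual methodology or measurements.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        # Synthetic packet sizes in bytes; placeholders, not measured traffic.
        skype_sizes = rng.normal(loc=150, scale=40, size=5000).clip(1, 1500)
        bridge_sizes = rng.normal(loc=152, scale=41, size=5000).clip(1, 1500)

        result = ks_2samp(skype_sizes, bridge_sizes)
        print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
        # A small KS statistic and a large p-value would indicate that the two
        # packet-size distributions are hard to tell apart from these samples alone.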
  • Global Manipulation by Local Obfuscation
    Global Manipulation by Local Obfuscation
    Fei Li (Department of Economics, University of North Carolina, Chapel Hill), Yangbo Song (School of Management and Economics, The Chinese University of Hong Kong, Shenzhen), Mofei Zhao (School of Economics and Management, Beihang University). August 26, 2020.
    Abstract: We study information design in a regime change context. A continuum of agents simultaneously choose whether to attack the current regime and will succeed if and only if the mass of attackers outweighs the regime's strength. A designer manipulates information about the regime's strength to maintain the status quo. The optimal information structure exhibits local obfuscation: some agents receive a signal matching the true strength of the status quo, and others receive an elevated signal professing slightly higher strength. Public signals are strictly suboptimal, and in some cases where public signals become futile, local obfuscation guarantees the collapse of agents' coordination.
    Keywords: Bayesian persuasion, coordination, information design, obfuscation, regime change. JEL Classification: C7, D7, D8.
    Acknowledgements: We thank Yu Awaya, Arjada Bardhi, Gary Biglaiser, Daniel Bernhardt, James Best, Jimmy Chan, Yi-Chun Chen, Liang Dai, Toomas Hinnosaar, Tetsuya Hoshino, Ju Hu, Yunzhi Hu, Chong Huang, Nicolas Inostroza, Kyungmin (Teddy) Kim, Qingmin Liu, George Mailath, Laurent Mathevet, Stephen Morris, Xiaosheng Mu, Peter Norman, Mallesh Pai, Alessandro Pavan, Jacopo Perego, Xianwen Shi, Joel Sobel, Satoru Takahashi, Ina Taneva, Can Tian, Kyle Woodward, Xi Weng, Ming Yang, Jidong Zhou, and Zhen Zhou for comments.
    1 Introduction: The revolution of information and communication technology raises growing concerns about digital authoritarianism. Despite their tremendous effort to establish information censorship and to spread disinformation, full manipulation of information remains outside autocrats' grasp in the modern age.
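    A schematic rendering of the setup this abstract describes, in notation of my own rather than the authors': a unit mass of agents chooses whether to attack, the regime falls when the mass of attackers reaches its strength, and a local-obfuscation signal reports either the true strength or a slightly elevated one.

        \[
        \text{regime change} \iff A \ge \theta, \qquad A=\int_0^1 a_i\,di, \quad a_i\in\{0,1\},
        \]
        \[
        s_i(\theta)=\begin{cases}\theta & \text{with probability } 1-q(\theta),\\ \theta+\varepsilon & \text{with probability } q(\theta),\end{cases}
        \]

    where ε > 0 is small and the designer chooses q(·) to minimize the probability that the regime falls.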
  • Improving Meek with Adversarial Techniques
    Improving Meek With Adversarial Techniques
    Steven R. Sheffey, Ferrol Aderholdt, Middle Tennessee State University
    Abstract: As the internet becomes increasingly crucial to distributing information, internet censorship has become more pervasive and advanced. Tor aims to circumvent censorship, but adversaries are capable of identifying and blocking access to Tor. Meek, a traffic obfuscation method, protects Tor users from censorship by hiding traffic to the Tor network inside an HTTPS connection to a permitted host. However, machine learning attacks using side-channel information against Meek pose a significant threat to its ability to obfuscate traffic. In this work, we develop a method to efficiently gather reproducible packet captures from both normal HTTPS and Meek traffic. We then aggregate statistical signatures from these packet captures. Finally, we train a generative adversarial network (GAN) to minimally modify statistical signatures in a way that hinders classification.
    ... a permitted host [10]. This is achieved by manipulating the Host header of the underlying encrypted HTTP payload in order to take advantage of cloud hosting services that forward HTTP traffic based on this header. Domain fronting exploits censors' unwillingness to cause collateral damage, as blocking domain fronting would require also blocking the typically more reputable, permitted host. From the point of view of an adversary using DPI and metadata-based filtering, there is no difference between Meek and normal HTTPS traffic. All unencrypted fields in Meek traffic that could indicate its true destination, such as DNS requests, IP addresses, and the Server Name Indication (SNI) inside HTTPS connection headers, do not reveal its true destination. However, recent work has shown that Meek is vulnerable to machine learning attacks that use side-channel information such as packet size and timing distributions to differentiate
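    A small sketch of what "aggregating statistical signatures" from a packet capture could look like: summary statistics over packet sizes and inter-arrival times. The feature choices and the toy trace are illustrative assumptions, not the paper's actual feature set.

        import numpy as np

        def signature(timestamps, sizes):
            # Aggregate a packet trace into a fixed-length statistical signature:
            # size mean/std, inter-arrival mean/std, and total packet count.
            sizes = np.asarray(sizes, dtype=float)
            gaps = np.diff(np.asarray(timestamps, dtype=float))
            return np.array([
                sizes.mean(), sizes.std(),
                gaps.mean() if gaps.size else 0.0,
                gaps.std() if gaps.size else 0.0,
                float(sizes.size),
            ])

        # Toy trace of (seconds, bytes) pairs standing in for a real packet capture.
        trace = [(0.00, 120), (0.03, 1500), (0.05, 80), (0.11, 1500), (0.12, 90)]
        ts, sz = zip(*trace)
        print(signature(ts, sz))

    A GAN-based approach, as described above, would then perturb vectors like this one as little as possible while degrading a classifier's ability to separate the two traffic types.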
  • The Production and Circulation of Propaganda
    The Only Language They Understand: The Production and Circulation of Propaganda
    A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy. Christian Tatman, BA (Journalism) Hons, RMIT University, School of Media and Communication, College of Design and Social Context, RMIT University, February 2013.
    Declaration: I, Christian Tatman, certify that except where due acknowledgement has been made, the work is that of the author alone; the work has not been submitted previously, in whole or in part, to qualify for any other academic award; the content of the thesis is the result of work which has been carried out since the official commencement date of the approved research program; any editorial work, paid or unpaid, carried out by a third party is acknowledged; and ethics procedures and guidelines have been followed. Christian Tatman, February 2013.
    Acknowledgements: I would particularly like to thank my supervisors, Dr Peter Williams and Associate Professor Cathy Greenfield, who along with Dr Linda Daley have provided invaluable feedback, support and advice during this research project. Dr Judy Maxwell and members of RMIT's Research Writing Group helped sharpen my writing skills enormously. Dr Maxwell's advice and the supportive nature of the group gave me the confidence to push on with the project. Professor Matthew Ricketson (University of Canberra), Dr Michael Kennedy (Mornington Peninsula Shire) and Dr Harriet Speed (Victoria University) deserve thanks for their encouragement. My wife, Karen, and children, Bethany-Kate and Hugh, have been remarkably patient, understanding and supportive during the time it has taken me to complete the project and deserve my heartfelt thanks.
  • Machine Learning Model That Successfully Detects Russian Trolls
    Human-machine detection of online-based malign information
    William Marcellino, Kate Cox, Katerina Galai, Linda Slapakova, Amber Jaycocks, Ruth Harris. For more information on this publication, visit www.rand.org/t/RRA519-1. Published by the RAND Corporation, Santa Monica, Calif., and Cambridge, UK. © Copyright 2020 RAND Corporation. RAND Europe is a not-for-profit research organisation that helps to improve policy and decision making through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.
    Preface: This report is the final output of a study commissioned by the UK Ministry of Defence's (MOD) Defence Science and Technology Laboratory (Dstl) via its Defence and Security Accelerator (DASA). ... proof-of-concept machine detection in a known troll database (Task B) and tradecraft analysis of Russian malign information operations against left- and right-wing publics (Task C).
  • Propaganda Explained
    Propaganda Explained
    Definitions of Propaganda (and one example in a court case), revised 10/21/14, created by Dale Boozer (reprinted here by permission).
    The following terms are defined to help a student of the information age understand the various elements of an overall topic that is loosely called "propaganda". While some think "propaganda" is just a government thing, it is also widely used by many other entities, such as corporations, and particularly by attorneys arguing cases before judges and specifically before juries. Any time someone is trying to persuade one person, or a group of people, to think something or do something, it could be called propaganda if certain elements are present. Of course, the term "spin" was recently created to describe the act of taking a set of facts and distorting them (or rearranging them) to cover mistakes or shortfalls of people in public view (or even in private situations, such as being late for work). But "spin" and "disinformation" are new words referring to an old, foundational concept known as propaganda. In the definitions below, borrowed from many sources, I have tried to craft the explanations to fit more than just a government trying to sell its people on something, or a political party trying to recruit contributors or voters. When you read these you will see the techniques are more universal. (Source: Wikipedia and other sources, with modification.)
    Ad hominem: a Latin phrase that has come to mean attacking one's opponent, as opposed to attacking their arguments; i.e. a personal attack to defuse an argument for which you have no appropriate answer.
  • WEAPONS OF MASS DISTRACTION: Foreign State-Sponsored Disinformation in the Digital Age
    WEAPONS OF MASS DISTRACTION: Foreign State-Sponsored Disinformation in the Digital Age
    March 2019, Park Advisors. Authored by Christina Nemr and William Gangware.
    Acknowledgements: The authors are grateful to the following subject matter experts who provided input on early drafts of select excerpts: Dr. Drew Conway, Dr. Arie Kruglanski, Sean Murphy, Dr. Alina Polyakova, and Katerina Sedova. The authors also appreciate the contributions to this paper by Andrew Rothgaber and Brendan O'Donoghue of Park Advisors, as well as the editorial assistance provided by Rhonda Shore and Ryan Jacobs. This report was produced with support from the US Department of State's Global Engagement Center. Any views expressed in this report are those of the authors and do not necessarily reflect the views of the US State Department, Park Advisors, or its subject matter expert consultants. Any errors contained in this report are the authors' alone.
    Table of Contents: Introduction and contextual analysis; How do we define disinformation?; What psychological factors drive vulnerabilities to disinformation and propaganda?; A look at foreign state-sponsored disinformation and propaganda; Platform-specific challenges and efforts to counter disinformation; Knowledge gaps and future technology challenges.
    1. Introduction and contextual analysis: On July 12, 2014, viewers of Russia's main state-run television station, Channel One, were shown a horrific story.
  • It's Only Wrong If It's Transactional: Moral Perceptions of Obfuscated Exchange
    It's Only Wrong If It's Transactional: Moral Perceptions of Obfuscated Exchange
    Oliver Schilke and Gabriel Rossman. American Sociological Review, 2018, 1-29. © American Sociological Association 2018. DOI: 10.1177/0003122418806284. journals.sagepub.com/home/asr
    Abstract: A wide class of economic exchanges, such as bribery and compensated adoption, are considered morally disreputable precisely because they are seen as economic exchanges. However, parties to these exchanges can structurally obfuscate them by arranging the transfers so as to obscure that a disreputable exchange is occurring at all. In this article, we propose that four obfuscation structures (bundling, brokerage, gift exchange, and pawning) will decrease the moral opprobrium of external audiences by (1) masking intentionality, (2) reducing the explicitness of the reciprocity, and (3) making the exchange appear to be a type of common practice. We report the results from four experiments assessing participants' moral reactions to scenarios that describe either an appropriate exchange, a quid pro quo disreputable exchange, or various forms of obfuscated exchange. In support of our hypotheses, results show that structural obfuscation effectively mitigates audiences' moral offense at disreputable exchanges and that the effects are substantially mediated by perceived attributional opacity, transactionalism, and collective validity.
    Keywords: obfuscation, economic
  • HOP: Hardware Makes Obfuscation Practical
    HOP: Hardware makes Obfuscation Practical
    Kartik Nayak (UMD), Christopher W. Fletcher (UIUC), Ling Ren (MIT), Nishanth Chandran (Microsoft Research, India), Satya Lokam (Microsoft Research, India), Elaine Shi (Cornell University), and Vipul Goyal (Microsoft Research, India)
    Abstract: Program obfuscation is a central primitive in cryptography, and has important real-world applications in protecting software from IP theft. However, well-known results from the cryptographic literature have shown that software-only virtual black box (VBB) obfuscation of general programs is impossible. In this paper we propose HOP, a system (with matching theoretic analysis) that achieves simulation-secure obfuscation for RAM programs, using secure hardware to circumvent previous impossibility results. To the best of our knowledge, HOP is the first implementation of a provably secure VBB obfuscation scheme in any model under any assumptions. HOP trusts only a hardware single-chip processor. We present a theoretical model for our complete hardware design and prove its security in the UC framework. Our goal is both provable security and practicality. To this end, our theoretic analysis accounts for all optimizations used in our practical design, including the use of a hardware Oblivious RAM (ORAM), hardware scratchpad memories, instruction scheduling techniques and context switching. We then detail a prototype hardware implementation of HOP. The complete design requires 72% of the area of a V7485t Field Programmable Gate Array (FPGA) chip. Evaluated on a variety of benchmarks, HOP achieves an overhead of 8× to 76× relative to an insecure system. Compared to all prior (not implemented) work that strives to achieve obfuscation, HOP improves performance by more than three orders of magnitude.
  • Dynamic Deception
    Dynamic Deception
    Axel Anderson (Georgetown), Lones Smith (Michigan). September 15, 2009.
    Abstract: We model dynamic deception exercises in Ponzi schemes, war, terrorism, industrial/political espionage, and insider trading. We introduce a new class of competitive continuous-time games with an informed player having an unbounded action intensity, whose actions are partially observed by a rival, or sequence of rivals. We find extremely strong square-root logarithm decreasing returns to deception. We provide economic formulations and analyses of espionage, disinformation, and obfuscation. Notably, we establish a globally concave value of information (espionage). Disinformation is always substantial, and pushes confidence in the wrong state above 86%. Finally, obfuscation rises as the public grows more uncertain.
    Acknowledgements: This paper was begun when Axel was visiting the University of Michigan. We thank seminar participants at Texas, and especially Max Stinchcombe. Earlier, less focused versions of this paper were presented at GAMES 2008, Decentralization (2008), Stoneybrook (2008), George Washington, Georgetown, Michigan, Toronto, and Western Ontario. Lones thanks the NSF for research funding.
    1 Introduction: "The most valuable commodity I know of is 'information'." - Gordon Gekko ("Wall Street", 1987). Deception is in the air: our inboxes are swamped with spam, Nigerian email schemes, or other phishing emails that generate $20 billion a year. Political subterfuge and cover-ups persist around the world, like Iran's falsified election outcome. On Wall Street, multi-billion