RIGHT TO TYPE

How the “Duty of Care” model lacks evidence and will damage free speech


Introduction: The Duty of Care puts a fundamental freedom at risk

Since the abolition of press licensing in 1695, newspapers in England have been free to print without direct government interference. This principle, that the written and spoken word should not be subject to specific censorship by government but rather that a series of laws (from defamation to public order) should apply equally to all, has underpinned our right to free speech.

This freedom is in peril. The academic concept that lies behind the government’s draft Online Safety Bill will erode it. With little debate, an abstract concept, the “Duty of Care”, has become central to civil service thinking about freedom of expression online. The “Duty of Care” applies notions best suited to health and safety law in the workplace to freedom of speech online. It will reverse the famous maxim, “publish and be damned”, into “consider the consequences of all speech, or be damned”, overturning a burden of proof for free speech that has been embedded in the common law of our country for centuries.

Worst of all, this academic concept is about to be applied to the new frontier of free speech, the internet, with little debate or scrutiny, through the draft Online Safety Bill. The Bill will fundamentally shift control over free speech in the UK from Parliament to Silicon Valley companies, who will be responsible for moderating and censoring content which is, in law, perfectly legal but deemed ‘harmful’. ‘Legal but harmful’ has been defined in the draft Bill as causing “physical or psychological harm”, but how can this be proved? The definition opens up significant problems of subjectivity.
The reason, in law, we do not use this definition for public order offences is that it is hard for citizens to understand in advance how their words (written or spoken) could cause psychological harm, especially on the internet, where we do not know our audience. It will be up to Silicon Valley companies to decide which types of free speech are ‘harmful’ and to censor them accordingly. Yet the notion of ‘legal but harmful’ opens up more questions than it answers. What if my definition of harmful differs from yours? How can Silicon Valley executives adjudicate on issues of taste, or offence, without an understanding of the context of British culture, where comedy and the arts often intend to provoke?

Censoring content that is ‘legal but harmful’ is not lawful. It breaches fundamental principles of the common law and our human rights framework. The intention seems to be to sidestep Parliament.

Even more problematic is the role of Ofcom as the final adjudicator. For centuries, we have enjoyed a judicial, not technical, process for adjudication over matters of free speech: if you write something that is illegal, you can be held accountable for it. Instead, the draft Online Safety Bill will give Ofcom the power to fine technology companies for allowing content to be posted that is perfectly legal but deemed harmful by the subjective standards of the regulator. Ofcom is not entirely shielded from political interference; it is not a flight of fancy to imagine a scenario in which Ofcom could be pushed to clamp down on free speech that offends the government of the day.

The fines that Ofcom can levy are eye-watering, as high as 10% of turnover. For Facebook or Google that would be $8.5 billion[1] and $18.2 billion[2] respectively. There will be a commercial incentive to over-censor, removing content once deemed perfectly acceptable, because defending free speech online could carry significant financial risk.
Ultimately, there is a simple solution that fixes the draft Online Safety Bill: focusing the Bill on content which is illegal under laws passed by Parliament. By adopting the policy solutions proposed in the Centre for Policy Studies ‘Safety without Censorship’ report, the government can deliver on its intention to ensure that the most harmful content is removed in a timely manner.[3] There is an established legal framework to deal with the majority of the harms considered by the major advocates of the draft Online Safety Bill: from grossly offensive content, to content that promotes hate crimes and terrorism, to online sexual abuse. These laws can be applied through amendments that limit the scope of the legislation to content that is illegal. By limiting the scope, we can protect free speech while delivering on the good intentions of the legislation. But, as it stands, the draft Online Safety Bill is too broad, too sweeping, and will harm the freedom of speech that we have fought so hard as a society to protect.

[1] Facebook, ‘Fourth Quarter and Full Year 2020 Results’ (January 2021), https://www.prnewswire.com/news-releases/facebook-reports-fourth-quarter-and-full-year-2020-results-301216628.html accessed 17 June 2021.
[2] Alphabet, ‘Alphabet Announces Fourth Quarter and Fiscal Year 2020 Results’ (February 2021), https://abc.xyz/investor/static/pdf/2020Q4_alphabet_earnings_release.pdf?cache=9e991fd accessed 17 June 2021.
[3] Centre for Policy Studies, ‘Safety Without Censorship: A Better Way To Tackle Online Harms’ (September 2020), https://www.cps.org.uk/files/reports/original/200926202055-SafetywithoutCensorshipCPSReport.pdf accessed 17 June 2021.

Executive Summary

• The Duty of Care is a flawed concept based on limited evidence, which would create a radically new framework for the proactive censorship of entirely legal content.
• The Duty of Care model’s ‘precautionary principle’ shifts the balance from punishing individuals for breaking the law to state-sanctioned private censorship of content for which there has been no judgement as to whether it is legal or illegal. This shift, as enacted through the Online Safety Bill, would mark the most significant change in the role of the state over free speech since 1695.

• The Duty of Care model embedded in the Online Safety Bill will affect not just the right to publish, but also the right of UK citizens to receive ideas and information. Previously, the state has had a limited role in determining the content that citizens can receive, with prosecution for content deemed illegal happening after the content has been seen. The Bill will give the state the power to enforce immediate censorship of content that is deemed harmful.

• Ultimately, there is no strong evidence base to justify the algorithmic censorship of content which, under the common law, is perfectly legal. We believe that if it is legal to say or print, you should be free to type it.

• There is a significant evidence base showing that algorithmic censorship will disproportionately impact marginalised groups, often the very people for whom the ability to publish online is most critical because they have few other platforms.

• There is a wide range of existing laws to deal with harmful content; the issue is that there is currently too little police resourcing for the prosecution of illegal content. Where an evidence base shows gaps in the existing legal framework, Parliament should pass or amend laws to deal with those gaps.

• The Online Safety Bill will shift decision-making over freedom of expression online from Parliament to private Silicon Valley companies, who will not risk fines as high as 10% of their turnover to defend free speech for ordinary British citizens. The risk of over-censorship is significant.
• The Online Safety Bill’s carve-outs for journalism and politicians will create two tiers of free speech in the UK. The carve-out for journalism is limited in scope and will not prevent the blocking of opinion or investigative journalism by algorithms that deem the content potentially in breach of the Ofcom guidelines. In practice, UK newspapers would need to take heed of Ofcom guidance, or see their content removed from social media platforms.

Why will the draft Online Safety Bill and the ‘duty of care’ model erode freedom of speech?

1. The Bill creates a two-tier system for freedom of speech that gives greater freedom to politicians and journalists.

The two-tier system for content moderation censors the masses and privileges the elites. The Bill contains specific exemptions for journalistic content and content of ‘democratic importance’, essentially privileging the speech of a select few over that of the British public. However, even on these terms, the Bill fails to protect online newspapers adequately, and the notion that free speech should be treated differently depending on who is exercising it is problematic.

The Bill attempts to give politicians and journalists additional protections for their free speech. Yet the ability of ordinary people to speak their minds online has brought about many positive changes, such as the #MeToo movement. The Bill could lead to companies removing accounts of people’s own experiences because they could be considered ‘harmful’. The voices of ordinary British people are no less important than those of a select few.

Not only are the carve-outs unfair, they open up a whole host of other problems. The relevant Minister will have the ability to change the parameters of the carve-outs at will, without consulting Parliament. Decisions taken during the Covid-19 pandemic clearly demonstrate the pitfalls of allowing Ministers to take decisions unilaterally.
The exemptions for journalism fail to protect the dissemination of content by trusted online news sources through social media networks.