A Survey on Computational Propaganda Detection

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), Survey Track

Giovanni Da San Martino1∗, Stefano Cresci2, Alberto Barrón-Cedeño3, Seunghak Yu4, Roberto Di Pietro5 and Preslav Nakov1
1Qatar Computing Research Institute, HBKU, Doha, Qatar
2Institute of Informatics and Telematics, IIT-CNR, Pisa, Italy
3DIT, Alma Mater Studiorum–Università di Bologna, Forlì, Italy
4MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA
5College of Science and Engineering, HBKU, Doha, Qatar

Abstract

Propaganda campaigns aim at influencing people's mindset with the purpose of advancing a specific agenda. They exploit the anonymity of the Internet, the micro-profiling ability of social networks, and the ease of automatically creating and managing coordinated networks of accounts, to reach millions of social network users with persuasive messages, specifically targeted to topics each individual user is sensitive to, and ultimately influencing the outcome on a targeted issue. In this survey, we review the state of the art on computational propaganda detection from the perspective of Natural Language Processing and Network Analysis, arguing about the need for combined efforts between these communities. We further discuss current challenges and future research directions.

1 Introduction

The Web makes it possible for anybody to create a website or a blog and to become a news medium. Undoubtedly, this is a hugely positive development as it elevates freedom of expression to a whole new level, giving anybody the opportunity to make their voice heard. With the rise of social media, everyone can reach out to a very large audience, something that until recently was only possible for major news outlets.

However, this new avenue for self-expression has also brought unintended consequences, the most evident one being that society has been left unprotected against potential manipulation from a multitude of sources. The issue became of general concern in 2016, a year marked by micro-targeted online disinformation and misinformation at an unprecedented scale, primarily in connection to Brexit and the US Presidential campaign; then, in 2020, the COVID-19 pandemic also gave rise to the first global infodemic. Spreading disinformation disguised as news created the illusion that the information was reliable, and thus people tended to lower their natural barrier of critical thinking compared to when information came from other types of sources.

Whereas false statements are not really a new phenomenon —e.g., the yellow press has been around for decades— this time things were notably different in terms of scale and effectiveness thanks to social media, which provided both a medium to reach millions of users and an easy way to micro-target specific narrow groups of voters based on precise geographic, demographic, psychological, and/or political profiling.

An important aspect of the problem that is often largely ignored is the mechanism through which disinformation is being conveyed: the use of propaganda techniques. These include specific rhetorical and psychological techniques, ranging from leveraging emotions —such as using loaded language, flag waving, appeal to authority, slogans, and clichés— to using logical fallacies —such as straw men (misrepresenting someone's opinion), red herring (presenting irrelevant data), black-and-white fallacy (presenting two alternatives as the only possibilities), and whataboutism. Moreover, the problem is exacerbated by the fact that propaganda does not necessarily have to lie; it could appeal to emotions or cherry-pick the facts. Thus, we believe that specific research on propaganda detection is a relevant contribution in the fight against online disinformation.

Here, we focus on computational propaganda, which is defined as "propaganda created or disseminated using computational (technical) means" [Bolsover and Howard, 2017]. Traditionally, propaganda campaigns had been a monopoly of state actors, but nowadays they are within reach of various groups and even of individuals. One key element of such campaigns is that they often rely on coordinated efforts to spread messages at scale. Such coordination is achieved by leveraging botnets (groups of fully automated accounts) [Zhang et al., 2016], cyborgs (partially automated accounts) [Chu et al., 2012], and troll armies (human-driven accounts) [Linvill and Warren, 2018], also known as sockpuppets [Kumar et al., 2017], Internet water army [Chen et al., 2013], astroturfers [Ratkiewicz et al., 2011], and seminar users [Darwish et al., 2017]. Thus, a promising direction to thwart propaganda campaigns is to discover such coordination, as demonstrated by recent interest from Facebook1 and Twitter2.

∗ Contact Author
1 newsroom.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/
2 https://help.twitter.com/en/rules-and-policies/platform-manipulation
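To give a concrete sense of what "discovering coordination" can mean in practice, the sketch below flags pairs of accounts that repeatedly post identical messages within a short time window. This is a deliberately simplified heuristic for exposition only, not the approach of any of the works cited above; the account names, fields, and thresholds are hypothetical.

```python
# Toy heuristic for spotting coordinated accounts: count how often two accounts
# post the same text within a short time window, and flag pairs that do so
# repeatedly. Account names, fields, and thresholds are hypothetical examples.
from collections import defaultdict
from itertools import combinations

posts = [
    {"account": "user_a", "text": "Vote now, before THEY take over!", "ts": 100},
    {"account": "user_b", "text": "Vote now, before THEY take over!", "ts": 130},
    {"account": "user_c", "text": "Lovely weather today.",            "ts": 140},
    {"account": "user_a", "text": "Share if you love your country!",  "ts": 500},
    {"account": "user_b", "text": "Share if you love your country!",  "ts": 520},
]

def coordinated_pairs(posts, window=60, min_matches=2):
    """Return account pairs that posted identical text within `window` seconds
    at least `min_matches` times."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"]].append(post)
    matches = defaultdict(int)
    for same_text in by_text.values():
        for p, q in combinations(same_text, 2):
            if p["account"] != q["account"] and abs(p["ts"] - q["ts"]) <= window:
                pair = tuple(sorted((p["account"], q["account"])))
                matches[pair] += 1
    return [pair for pair, count in matches.items() if count >= min_matches]

if __name__ == "__main__":
    print(coordinated_pairs(posts))  # [('user_a', 'user_b')]
```

Deployed systems rely on much richer co-action and network signals than identical text, but the underlying intuition is the same: coordination leaves statistical traces that independent accounts do not.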
In order for propaganda campaigns to work, it is critical that they go unnoticed. This further motivates work on detecting and exposing propaganda campaigns, which should make them increasingly inefficient. Given the above, in the present survey we focus on computational propaganda from two perspectives: (i) the content of the propaganda messages and (ii) their propagation in social networks.

Finally, it is worth noting that, even though there have been several recent surveys on fake news detection [Shu et al., 2017; Zhou et al., 2019], fact-checking [Thorne and Vlachos, 2018], and truth discovery [Li et al., 2016], none of them focuses on computational propaganda. There has also been a special issue of the Big Data journal on Computational Propaganda and Political Big Data [Bolsover and Howard, 2017], but it did not include a survey. Here we aim to bridge this gap.

2 Propaganda

The term propaganda was coined in the 17th century, and initially referred to the propagation of the Catholic faith in the New World [Jowett and O'Donnell, 2012, p. 2]. It soon took on a pejorative connotation, as its meaning was extended to also mean opposition to Protestantism. In more recent times, back in 1938, the Institute for Propaganda Analysis [Ins, 1938] defined propaganda as "expression of opinion or action by individuals or groups deliberately designed to influence opinions or actions of other individuals or groups with reference to predetermined ends".

Recently, Bolsover and Howard [2017] dug deeper into this definition, identifying its two key elements: (i) trying to influence opinion, and (ii) doing so on purpose. Influencing opinions is achieved through a series of rhetorical and psychological techniques. Clyde R. Miller in 1937 proposed one of the seminal categorizations of propaganda, consisting of seven devices [Ins, 1938], which remain well accepted today [Jowett and O'Donnell, 2012, p. 237]: name calling, glittering generalities, transfer, testimonial, plain folks, card stacking, and bandwagon. Other scholars consider categorizations with as many as eighty-nine techniques [Conserva, 2003], and Wikipedia lists about seventy techniques. However, these larger sets of techniques are essentially subtypes

Although lying and creating fake stories is considered one of the propaganda techniques (some authors refer to it as "black propaganda" [Jowett and O'Donnell, 2012]), there are contexts where this course of action is followed without pursuing the objective of influencing the audience, as in satire and clickbaiting. These special cases are of less interest when it comes to fighting the weaponization of social media, and are therefore considered out of the scope of this survey.

3 Text Analysis Perspective

Research on propaganda detection based on text analysis has a short history, mainly due to the lack of suitable annotated datasets for training supervised models. There have been some relevant initiatives, where expert journalists or volunteers analyzed entire news outlets, which could be used for training. For example, Media Bias/Fact Check (MBFC) is an independent organization analyzing media in terms of their factual reporting, bias, and propagandistic content, among other aspects. Similar initiatives are run by US News & World Report and the European Union. Such data has been used in distant supervision approaches [Mintz et al., 2009], i.e., by assigning each article from a given news outlet the label propagandistic/non-propagandistic using the label for that news outlet. Unfortunately, such a coarse approximation inevitably introduces noise into the learning process, as we discuss in Section 5.
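To make the distant supervision setup concrete, the sketch below shows one way outlet-level labels can be projected onto individual articles to build a (noisy) document-level training set. It is a minimal illustration of the general idea rather than the procedure of any specific paper; the outlet names, labels, and field names are hypothetical.

```python
# Minimal sketch of distant supervision for document-level propaganda labels:
# every article inherits the (noisy) label of the outlet that published it.
# Outlet names, labels, and article fields are hypothetical examples.

OUTLET_LABELS = {
    "example-propaganda-outlet.com": "propagandistic",
    "example-mainstream-outlet.com": "non-propagandistic",
}

articles = [
    {"source": "example-propaganda-outlet.com",
     "text": "They will destroy everything we hold dear!"},
    {"source": "example-mainstream-outlet.com",
     "text": "The committee published its annual report today."},
]

def distant_supervision_labels(articles, outlet_labels):
    """Project outlet-level labels onto articles, skipping unknown outlets."""
    labeled = []
    for article in articles:
        label = outlet_labels.get(article["source"])
        if label is not None:
            labeled.append({"text": article["text"], "label": label})
    return labeled

if __name__ == "__main__":
    for example in distant_supervision_labels(articles, OUTLET_LABELS):
        print(example["label"], "->", example["text"][:50])
```

Because each article inherits its label from the outlet rather than being checked individually, non-propagandistic articles published by a propagandistic outlet (and vice versa) are mislabeled, which is the source of the label noise mentioned above.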
In the remainder of this section, we review current work on propaganda detection from a text analysis perspective. This includes the production of annotated datasets, characterizing entire documents, and detecting the use of propaganda techniques at the span level.

3.1 Available Datasets

Given that existing models to detect propaganda in text are supervised, annotated corpora are necessary. Table 1 shows an overview of the available corpora (to the best of our knowledge), with annotation both at the document and at the fragment level.

Rashkin et al. [2017] released TSHP-17, a balanced corpus with document-level annotation including