
Shedding (a Thousand Points of) Light on Biased Language

Tae Yano
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]

Philip Resnik
Department of Linguistics and UMIACS, University of Maryland, College Park, MD 20742, USA
[email protected]

Noah A. Smith
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]

Abstract

This paper considers the linguistic indicators of bias in political text. We used Amazon Mechanical Turk judgments about sentences from American political blogs, asking annotators to indicate whether a sentence showed bias, and if so, in which political direction and through which word tokens. We also asked annotators questions about their own political views. We conducted a preliminary analysis of the data, exploring how different groups perceive bias in different blogs, and showing some lexical indicators strongly associated with perceived bias.

1 Introduction

Bias and framing are central topics in the study of communications, media, and political discourse (Scheufele, 1999; Entman, 2007), but they have received relatively little attention in computational linguistics. What are the linguistic indicators of bias? Are there lexical, syntactic, topical, or other clues that can be computationally modeled and automatically detected?

Here we use Amazon Mechanical Turk (MTurk) to engage in a systematic, empirical study of linguistic indicators of bias in the political domain, using text drawn from political blogs. Using the MTurk framework, we collected judgments connected with the two dominant schools of thought in American politics, as exhibited in single sentences. Since no one person can claim to be an unbiased judge of political bias in language, MTurk is an attractive framework that lets us measure perception of bias across a population.

2 Annotation Task

We drew sentences from a corpus of American political blog posts from 2008. (Details in Section 2.1.) Sentences were presented to participants one at a time, without context. Participants were asked to judge the following (see Figure 1 for interface design):

  • To what extent a sentence or clause is biased (none, somewhat, very);
  • The nature of the bias (very liberal, moderately liberal, moderately conservative, very conservative, biased but not sure which direction); and
  • Which words in the sentence give away the author's bias, similar to "rationale" annotations in Zaidan et al. (2007).

For example, a participant might identify a moderate liberal bias in this sentence,

    Without Sestak's challenge, we would have Specter, comfortably ensconced as a Democrat in name only.

adding checkmarks on the underlined words. A more neutral paraphrase is:

    Without Sestak's challenge, Specter would have no incentive to side more frequently with Democrats.

It is worth noting that "bias," in the sense we are using it here, is distinct from "subjectivity" as that topic has been studied in computational linguistics. Wiebe et al. (1999) characterize subjective sentences as those that "are used to communicate the speaker's evaluations, opinions, and speculations," as distinguished from sentences whose primary intention is "to objectively communicate material that is factual to the reporter." In contrast, a biased sentence reflects a "tendency or preference towards a particular perspective, ideology or result."[1] A subjective sentence can be unbiased (I think that movie was terrible), and a biased sentence can purport to communicate factually (Nationalizing our health care system is a point of no return for government interference in the lives of its citizens[2]).

[1] http://en.wikipedia.org/wiki/Bias as of 13 April, 2010.
[2] Sarah Palin, http://www.facebook.com/note.php?note_id=113851103434, August 7, 2009.
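To make the annotation scheme above concrete, the sketch below shows one way a single worker's judgment of a single sentence could be represented. This is purely illustrative: the class, field, and label names are ours, not the authors' interface or data format; the label sets simply transcribe the answer options listed above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Answer options transcribed from the three judgment questions described above.
BIAS_AMOUNT = {"none", "somewhat", "very"}
BIAS_DIRECTION = {"very liberal", "moderately liberal", "moderately conservative",
                  "very conservative", "biased but not sure which direction"}

@dataclass
class BiasJudgment:
    """Hypothetical record of one annotator's judgment of one sentence."""
    sentence: str
    amount: str                                   # one of BIAS_AMOUNT
    direction: Optional[str] = None               # one of BIAS_DIRECTION, or None if unbiased
    rationale_tokens: List[int] = field(default_factory=list)  # indices of checked words

    def __post_init__(self):
        assert self.amount in BIAS_AMOUNT
        assert self.direction is None or self.direction in BIAS_DIRECTION
```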
In addition to annotating sentences, each participant was asked to complete a brief questionnaire about his or her own political views. The survey asked:

  1. Whether the participant is a resident of the United States;
  2. Who the participant voted for in the 2008 U.S. presidential election (Barack Obama, John McCain, other, decline to answer);
  3. Which side of the political spectrum he/she identified with for social issues (liberal, conservative, decline to answer); and
  4. Which side of the political spectrum he/she identified with for fiscal/economic issues (liberal, conservative, decline to answer).

This information was gathered to allow us to measure variation in bias perception as it relates to the stance of the annotator, e.g., whether people who view themselves as liberal perceive more bias in conservative sources, and vice versa.

2.1 Dataset

We extracted our sentences from the collection of blog posts in Eisenstein and Xing (2010). The corpus consists of 2008 blog posts gathered from six sites focused on American politics:

  • American Thinker (conservative),[3]
  • Digby (liberal),[4]
  • Hot Air (conservative),[5]
  • Michelle Malkin (conservative),[6]
  • Think Progress (liberal),[7] and
  • Talking Points Memo (liberal).[8]

13,246 posts were gathered in total, and 261,073 sentences were extracted using WebHarvest[9] and OpenNLP 1.3.0.[10] Conservative and liberal sites are evenly represented (130,980 sentences from conservative sites, 130,093 from liberal sites). OpenNLP was also used for tokenization.

[3] http://www.americanthinker.com
[4] http://digbysblog.blogspot.com
[5] http://hotair.com
[6] http://michellemalkin.com
[7] http://thinkprogress.org
[8] http://www.talkingpointsmemo.com
[9] http://web-harvest.sourceforge.net
[10] http://opennlp.sourceforge.net

2.2 Sentence Selection

To support exploratory data analysis, we sought a diverse sample of sentences for annotation, but we were also guided by some factors known or likely to correlate with bias. We extracted sentences from our corpus that matched at least one of the categories below, filtering to keep those of length between 8 and 40 tokens. Then, for each category, we first sampled 100 sentences without replacement. We then randomly extracted sentences up to 1,100 from the remaining pool. We selected the sentences this way so that the collection has variety, while including enough examples for individual categories. Our goal was to gather at least 1,000 annotated sentences; ultimately we collected 1,041. The categories are as follows.

"Sticky" partisan bigrams. One likely indicator of bias is the use of terms that are particular to one side or the other in a debate (Monroe et al., 2008). In order to identify such terms, we independently created two lists of "sticky" (i.e., strongly associated) bigrams in liberal and conservative subcorpora, measuring association using the log-likelihood ratio (Dunning, 1993) and omitting bigrams containing stopwords.[11] We identified a bigram as "liberal" if it was among the top 1,000 bigrams from the liberal blogs, as measured by strength of association, and was also not among the top 1,000 bigrams on the conservative side. The reverse definition yielded the "conservative" bigrams. The resulting liberal list contained 495 bigrams, and the conservative list contained 539. We then manually filtered cases that were clearly remnant HTML tags and other markup, arriving at lists of 433 and 535, respectively. Table 1 shows the strongest weighted bigrams.

    Liberal               Conservative
    thinkprogress org     exit question
    video thinkprogress   hat tip
    et rally              ed lasky
    org 2008              hot air
    gi bill               tony rezko
    wonk room             ed morrissey
    dana perino           track record
    phil gramm            confirmed dead
    senator mccain        american thinker
    abu ghraib            illegal alien

    Table 1: Top ten "sticky" partisan bigrams for each side.

[11] We made use of Pedersen's N-gram Statistics Package (Banerjee and Pedersen, 2003).
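The following is a minimal sketch of how such partisan bigram lists could be derived. It uses NLTK's collocation tools rather than Pedersen's N-gram Statistics Package that the authors actually used, and liberal_tokens / conservative_tokens are assumed to be pre-tokenized word lists from the two subcorpora; it illustrates the selection criterion described above, not the authors' code.

```python
# Illustrative sketch of the "sticky" bigram selection criterion described above.
# NOTE: the paper used Pedersen's N-gram Statistics Package; NLTK is substituted
# here only for convenience. Requires: nltk.download('stopwords').
from nltk.corpus import stopwords
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

STOPWORDS = set(stopwords.words("english"))
MEASURES = BigramAssocMeasures()

def top_bigrams(tokens, k=1000):
    """Top-k bigrams by log-likelihood ratio (Dunning, 1993),
    skipping any bigram that contains a stopword."""
    finder = BigramCollocationFinder.from_words(w.lower() for w in tokens)
    finder.apply_word_filter(lambda w: w in STOPWORDS)
    return set(finder.nbest(MEASURES.likelihood_ratio, k))

def sticky_bigrams(liberal_tokens, conservative_tokens, k=1000):
    """A bigram is 'liberal' if it is in the liberal top-k but not the
    conservative top-k, and vice versa. Manual filtering of leftover
    HTML/markup (as described above) would follow this step."""
    lib = top_bigrams(liberal_tokens, k)
    con = top_bigrams(conservative_tokens, k)
    return lib - con, con - lib
```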
As an example, consider this sentence (with a preceding sentence of context), which contains gi bill. There is no reason to think the bigram itself is inherently biased (in contrast to, for example, death tax, which we would perceive as biased in virtually any unquoted context), but we do perceive bias in the full sentence.

    Their hard fiscal line softens in the face of American imperialist adventures. According to CongressDaily the Bush dogs are also whining because one of their members, Stephanie Herseth Sandlin, didn't get HER GI Bill to the floor in favor of Jim Webb's.

Emotional lexical categories. Emotional words might be another indicator of bias. We extracted four categories of words from Pennebaker's LIWC dictionary: Negative Emotion, Positive Emotion, Causation, and Anger.[12] The following is one example of a biased sentence in our dataset that matched these lexicons, in this case the Anger category; the match is in bold.

    A bunch of ugly facts are nailing the biggest scare story in history.

The five most frequent matches in the corpus for each category are as follows.[13]

    Negative Emotion: war attack* problem* numb* argu*
    Positive Emotion: like well good party* secur*
    [...]

"Kill" verbs. [...] to appear in sentences containing bias (they overlap significantly with Pennebaker's Negative Emotion list), and because annotation of bias will provide further data relevant to Greene and Resnik's hypothesis about the connections among semantic properties, syntactic structures, and positive or negative perceptions (which are strongly connected with bias).

In our final 1,041-sentence sample, "sticky bigrams" occur 235 times (liberal 113, conservative 122), the lexical category features occur 1,619 times (Positive Emotion 577, Negative Emotion 466, Causation 332, and Anger 244), and "kill" verbs appear as a feature in 94 sentences. Note that one sentence often matches multiple selection criteria. Of the 1,041-sentence sample, 232 (22.3%) are from American Thinker, 169 (16.2%) from Digby, 246 (23.6%) from Hot Air, 73 (7.0%) from Michelle Malkin, 166 (15.9%) from Think Progress, and 155 (14.9%) from Talking Points Memo.

3 Mechanical Turk Experiment

We prepared 1,100 Human Intelligence Tasks (HITs), each containing one sentence annotation task. 1,041 sentences were annotated five times each (5,205 judgments total). One annotation task consists of three bias judgment questions plus four survey questions.
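A note on the lexical category features used in Section 2.2: LIWC-style entries such as attack* and secur* are prefix wildcards. The sketch below illustrates that matching convention using only the handful of example entries quoted above; the real LIWC dictionary is proprietary and far larger, so this is an illustration of the mechanics, not a substitute for it.

```python
import re

# Only the example entries quoted in Section 2.2; the actual LIWC categories are much larger.
CATEGORIES = {
    "Negative Emotion": ["war", "attack*", "problem*", "numb*", "argu*"],
    "Positive Emotion": ["like", "well", "good", "party*", "secur*"],
}

def compile_category(entries):
    """Build one regex for a LIWC-style word list; a trailing * matches any suffix."""
    alternatives = [re.escape(e[:-1]) + r"\w*" if e.endswith("*") else re.escape(e)
                    for e in entries]
    return re.compile(r"\b(?:" + "|".join(alternatives) + r")\b", re.IGNORECASE)

MATCHERS = {name: compile_category(entries) for name, entries in CATEGORIES.items()}

def category_matches(sentence):
    """Return, for each category that fires on the sentence, the tokens it matched."""
    hits = {name: rx.findall(sentence) for name, rx in MATCHERS.items()}
    return {name: tokens for name, tokens in hits.items() if tokens}

# Example (an invented sentence, purely to show the mechanics):
# >>> category_matches("Another attack will only argue for a good security bill.")
# {'Negative Emotion': ['attack', 'argue'], 'Positive Emotion': ['good', 'security']}
```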
