Validation of a Novel Climate Change Denial Measure Using Item Response Theory

Mr. George Loram a, *, Dr. Mathew Ling a, Mr. Andrew Head a, Dr. Edward J. R. Clarke b

a Deakin University, Geelong, Australia, Misinformation Lab, School of Psychology
b Federation University, Mount Helen, Australia, School of Health and Life Sciences

* Corresponding author, Misinformation Lab, Deakin University, Locked Bag 20001, Geelong VIC 3220, Australia. Email address: [email protected]

Declarations of Interest: All authors declare that no conflict of interest exists.

Acknowledgements: This work was supported by Deakin University (4th Year Student Research Funding; 4th Year Publication Award).

Abstract

Climate change denial persists despite overwhelming scientific consensus on the issue. However, the rates of denial reported in the literature are inconsistent, potentially as a function of ad hoc measurement of denial, which in turn impairs the interpretability and integration of research. This study aims to create a standardised measure of climate change denial using Item Response Theory (IRT). The measure was created by pooling items from existing denial measures and was administered to a U.S. sample recruited using Amazon MTurk (N = 206). Participants responded to the prototype measure and were also assessed on a number of constructs that have been shown to correlate with climate change denial (authoritarianism, social dominance orientation, mistrust in scientists, and conspiracist beliefs). Item characteristics were calculated using a two-parameter IRT model. After screening out poorly discriminating and redundant items, the scale contained eight items. Discrimination indices were high, ranging from 2.254 to 30.839, but item difficulties ranged from 0.437 to 1.167, capturing a relatively narrow band of climate change denial. Internal consistency was high, ω = .94. Moderate to strong correlations were found between the denial measure and the convergent measures.
This measure is a novel and efficient approach to the measurement of climate change denial and includes highly discriminating items that could be used as screening tools. The limited range of item difficulties suggests that different forms of climate change denial may be closer together than previously thought. Future research directions include validating the measure in larger samples and examining its predictive utility.

Keywords: Climate Change Denial, Measurement, Item Response Theory

1. Introduction

1.1. Background

Despite overwhelming scientific consensus that climate change is occurring, and is due to human activity (Doran & Zimmerman, 2009; IPCC, 2018), a proportion of the population still denies some or all of the elements of climate change (Capstick & Pidgeon, 2014; Leviston & Walker, 2012; Reser, Bradley, Glendon, Ellul, & Callaghan, 2012). This phenomenon is called ‘climate change denial’ or ‘climate scepticism’ and has been observed in many different contexts (e.g., Capstick & Pidgeon, 2014; Leviston & Walker, 2012; Reser et al., 2012). Public acceptance of the scientific evidence of climate change is important in the transition to a low-carbon economy (Poortinga, Spence, Whitmarsh, Capstick, & Pidgeon, 2011), and climate change denial may therefore have deleterious effects on climate change mitigation efforts.

1.2. Heterogeneous rates of climate change denial

Varying rates of climate change denial have been reported in the literature. Leviston and Walker (2012) found that 17.2% of participants did not believe that climate change was occurring, with similar rates found by Capstick and Pidgeon (2014). This contrasts with Hornsey, Fielding, McStay, Reser, and Bradley (2016), who found that 4.2% of participants denied climate change, and Reser et al. (2012), who found denial rates of 6.5%.
Other studies have found rates somewhere in between, such as Whitmarsh (2011), who found that 12% of participants agreed with the statement “Climate change is not a real problem”. These studies were undertaken in comparable populations (i.e., Australia and the U.K.), making the discrepancies in denial rates quite surprising. The discrepancies may reflect true population differences, measurement differences, or a combination of both.

It is important to note that these studies used different scales to measure climate change denial. Scales differed in terms of length, question framing, and response options. For example, Leviston and Walker (2012) simply asked participants whether they believed climate change was happening, with a dichotomous yes/no response option, then asked what they thought was causing climate change, with four response options (e.g., “I think that climate change is happening, but it’s just a natural fluctuation in Earth’s temperatures”). This contrasts with Reser et al. (2012), who used four questions and a combination of different response options, such as yes/no/don’t know and Likert scales, and with Whitmarsh (2011), who asked participants to rate 12 statements on a 5-point Likert scale. Other measures include questions that appear to be double-barrelled, for example, “Even if we do experience some consequences from climate change, we will be able to cope with them” (Capstick & Pidgeon, 2014). The framing of this particular question asks the reader to imagine that climate change is real, which may cause inconsistent responding among those who deny climate change. Questions such as these may not be measuring the construct of climate change denial reliably.

1.3. Impact of question framing on responses

The measures mentioned in section 1.2 are a small sample of all existing climate change denial measures, with most studies either creating their own measure or amending an existing one.
This has led to a proliferation of climate change denial measures, and the absence of a standard measure may partly explain the differences in observed denial rates between studies. Greenhill, Leviston, Leonard, and Walker (2014) found that differences in question framing and response options affected belief responses. When asked about the causes of climate change and given options including “both natural and anthropogenic”, the majority of people chose this option. However, when not given this option, participants were split down the middle, choosing either natural or anthropogenic. Differences such as this make it difficult to compare denial rates between studies effectively. Additionally, Leviston, Leitch, Greenhill, Leonard, and Walker (2011) found that when given more nuanced options for causation beliefs (e.g., “partly human and partly natural”, “mainly natural”, etc.), participants were less likely to endorse ‘natural fluctuation’ than ‘human induced’ answers, compared with when given dichotomous response options (natural or human induced). These findings indicate that when measuring climate change denial, subtle variations in items and response options can have significant impacts on observed denial rates.

1.4. Importance of reliable measurement

It is important to have accurate and consistent measurement of public beliefs. Governments, industries, and organisations observe public sentiment to assist decision-making on important issues such as climate change (Reser et al., 2012). Another important reason for consistent measurement is in gauging the efficacy of interventions that aim to reduce climate change denial. Research has shown mixed evidence for intervention effectiveness (e.g., Benegal & Scruggs, 2018; Hart & Nisbet, 2011; McCright, Charters, Dentzman, & Dietz, 2016). However, as all of these studies measured denial differently, the variation in reported effects may be partly due to disparities in sensitivity between measures.
Related to this, the absence of a standard scale or measure makes it difficult to integrate research findings and compare interventions. It is therefore important to have a consistent, reliable measure of climate change denial in order to improve the ability to integrate and compare research findings. However, the test theory that a measure is built on can affect the adaptability, reliability, and efficiency of the measure.

1.5. Differences in test frameworks

Most psychological measures are founded on one of two dominant test theories: Classical Test Theory (CTT) or Item Response Theory (IRT). CTT is the underpinning of existing denial measures and is based on the assumption that an individual’s observed score is a combination of their true score plus some amount of error (de Champlain, 2010). The advantages of CTT are that it has relatively weak assumptions that are easily met, and that it is designed in such a way as to enable simple calculation of summary scores (de Champlain, 2010). However, CTT is a ‘test-dependent’ theory: only the total score of the test can be interpreted, rather than responses to individual items (Bortolotti, Tezza, de Andrade, Bornia, & de Sousa Júnior, 2013). Generally, responses to a number of items are summed, and the total score is compared to a pre-determined cut-off. The reliance on the total score means that long tests are often required to attain an acceptable level of reliability, and missing response data can be difficult to process. Furthermore, when calculating summary scores using CTT, each individual item is given equal weighting. For example, an item in a climate change denial measure might only be answered affirmatively by individuals who score in the top 5% of climate change
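As context for the contrast drawn here between CTT sum scores and IRT item parameters, the two-parameter logistic (2PL) model referred to in the abstract can be sketched briefly. Under the 2PL, the probability of endorsing an item depends on the respondent’s latent trait level (theta), the item’s discrimination (a), and its difficulty (b). The minimal sketch below uses hypothetical parameter values for illustration only, not the estimates reported in this study.

```python
import numpy as np

def p_endorse(theta, a, b):
    """2PL item response function: probability of endorsing an item,
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item: highly discriminating (a = 5) with difficulty b = 1.0.
# Endorsement probability rises steeply around theta = 1, so the item sharply
# separates respondents just below from those just above that trait level.
for theta in (-1.0, 0.5, 1.0, 1.5):
    print(f"theta = {theta:+.1f}  P(endorse) = {p_endorse(theta, a=5.0, b=1.0):.3f}")
```

Note that at theta = b the endorsement probability is exactly .5, and larger a values make the curve steeper around that point; this is how IRT, unlike CTT, weights items by their individual characteristics rather than equally.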