
https://doi.org/10.24839/2325-7342.JN23.2.98

Self-Esteem, Self-Disclosure, Self-Expression, and Connection on Facebook: A Collaborative Replication Meta-Analysis

Dana C. Leighton*, Southern Arkansas University; Nicole Legate*, Illinois Institute of Technology; Sara LePine, Gordon College; Samantha F. Anderson, University of Notre Dame; Jon Grahe*, Pacific Lutheran University

ABSTRACT. This replication meta-analysis explored the robustness of a highly cited study showing that those with low self-esteem perceived benefits for self-disclosure through Facebook compared to face-to-face interactions (i.e., Forest & Wood, 2012, Study 1). Seven preregistered direct replication attempts of this study were conducted by research teams as part of the Collaborative Replication and Education Project (CREP), and results were meta-analyzed to better understand the strength and consistency of the effects reported in the original study. Half of the original results were clearly supported: Self-esteem negatively predicted perceived safety of self-disclosure on Facebook as compared to face-to-face interactions (meta-analytic effect size = -.28, original effect size = -.31), and self-esteem did not relate to perceived opportunities for self-expression; across the 7 replications, all 95% confidence intervals (CIs) for effect sizes included 0. However, 2 other findings received less support: Self-esteem only weakly and inconsistently predicted perceived advantages of self-disclosure on Facebook (meta-analytic effect size = -.16, original effect size = -.30), and contrary to the original study, there was no evidence for self-esteem predicting perceived opportunities for connection with others on Facebook (6 of the 7 replication effect size CIs contained 0). The results provided further evidence regarding the original study's generalizability and robustness. The implications of the research and its relevance to social compensation theory are presented, and considerations for future multisite replications are proposed.

Open Data, Open Materials, Preregistration, and Replications badges earned for transparent research practices. Data and materials are available at https://osf.io/yb9cv/. Links to preregistrations for each project are available at https://osf.io/yb9cv/.

Among social networking sites, Facebook is a dominant platform that affects the thinking, emotions, behavior, and interactions of its active users, some two billion people worldwide, including 70% of the U.S. population (Facebook, 2017; Fiegerman, 2017; Kemp, 2017). It is therefore important that psychologists better understand how Facebook use is related to psychosocial factors. Indeed, since its advent in 2004, scholars have published over a thousand articles on psychological issues related to Facebook.

Forest and Wood (2012) provided one of the first and most highly cited psychological examinations of Facebook use and psychosocial factors. As of December 2017, Scopus citation metrics show the article has been cited 145 times (12.17 times the average for similar articles), putting it in the 99th percentile of citations for psychology articles in the previous 18 months. Forest and Wood speculated that Facebook had benefits for people with low self-esteem who might otherwise have difficulty with face-to-face (FTF) interactions, and hypothesized that people with low self-esteem would perceive Facebook as a safer place to self-disclose, and a better place to express their emotions with others, as compared to FTF interactions. Study 1 provided mixed support for their hypothesis: Compared with higher self-esteem individuals, those lower in self-esteem saw Facebook as a safer and more advantageous place for self-disclosure as compared to FTF interactions (Forest & Wood, 2012, Study 1), but self-esteem did not predict participants' perceptions of Facebook as offering greater opportunities for self-expression. Thus, the authors concluded that people with low self-esteem might prefer Facebook over FTF interactions as a means to improve their social relations.

Given the potential implications of these findings and their high impact, several research teams at U.S. colleges and universities replicated the study as part of the Collaborative Replication and Education Project (CREP; Grahe et al., 2016; https://osf.io/wfc6u/). CREP was designed to verify research findings that are highly cited in top journals by conducting large-scale replications, and to give valuable research experience to undergraduate psychology students. Thus, as they complete their courses, students have become producers of science rather than merely consumers. The present research presented a meta-analysis of these studies and, to our knowledge, was the first published meta-analysis of a CREP replication and the first published replication of Forest and Wood (2012, Study 1). Moreover, the novelty of the statistical methods used to assess the replicability of Forest and Wood (2012) made the present research an instrumental contribution to replication science.

Replications as a Tool for Generalizability and Meta-Science
Social psychology is currently recovering from a crisis of confidence in published findings (Earp & Trafimow, 2015; Edlund, 2016; Pashler & Wagenmakers, 2012; Stroebe, 2016) that arose from concerns about rampant publication bias (Ioannidis, 2005), under-reporting of researcher degrees of freedom (John, Lowenstein, & Prelec, 2012), and revelations of scientific misconduct (Earp & Trafimow, 2015), among other factors. Increasingly, replications are seen as a mechanism to improve our scientific enterprise (Edlund, 2016). The Internet and technology have increasingly been used to conduct large-scale crowd-sourced replications, including the Reproducibility Project for Psychology (Open Science Collaboration, 2015), whereby 200 researchers replicated 100 studies published in three top journals in 2008, as well as the various Many Labs projects (Many Labs 1–5; Ebersole et al., 2016; Ebersole et al., 2017; Klein et al., 2014; Klein, Ebersole, et al., 2017; Klein, Vianello, et al., 2017). The CREP developed to address the replication crisis by capitalizing on research projects completed by undergraduate psychology majors in methods and capstone courses¹ (Grahe et al., 2012; Grahe & Hauhart, 2013; Hauhart & Grahe, 2010). These projects reflect major advances in replication methodology and meta-science by going beyond single "episodes" of replications, taking better account of the "historical track record" of scientific theories (Faust & Meehl, 2002, p. 2). The CREP compiles replication efforts from independent teams of undergraduate researchers across the United States. Projects are peer-reviewed before the data collection begins to ensure fidelity to the original procedures. Data and results are made publicly available on the Open Science Framework (OSF) repository, and a researcher might include the results of individual replications in a meta-analysis, as Calin-Jageman, Lehmann, and Elliott (2017) are doing with both published and CREP samples.

There have been debates about the "proper" way to conduct replications. For example, one debate distinguished between direct (also called "close") and conceptual replications (Brandt et al., 2014; Simons, 2014; Stroebe & Strack, 2014). Direct replications approximate the exact conditions of the original study to report the degree of similarity between the original research findings and the replication(s). Conceptual replications alter one or more aspects of the study to extend the original conclusions. All studies reported in this research were direct replications of the original.

Beyond the degree of match between the original and replication, the goals of replication also vary across studies. Anderson and Maxwell (2016) identified six goals of replications: (a) to infer the existence of an effect, (b) to infer the null effect, (c) to accurately estimate the effect size, (d) to combine the results between replication and original, (e) to determine if the replication and original provide inconsistent results (disparity of effect sizes), and (f) to determine if the replication and original provide consistent results (equivalence of effect sizes). In their content analyses of 50 studies, there was a clear preference for Goal 1, to infer the existence of an effect, but they recommended that researchers broaden their definition of replication by recognizing these various goals in planning and reporting replication studies.

¹It is important to note that the CREP is not the only meta-science project for undergraduate research (see e.g., Registered Reports and Pipeline projects; Grahe et al., 2016).


Because of the nature of our replication studies, we were most interested in addressing Anderson and Maxwell's (2016) Goals 1 and 5, and a modified version of Goals 3 and 4. We now describe these goals as applied to the current study in more detail. In addition to traditional significance tests on the focal effects in the replication studies (Goal 1), we tested whether the effect size from each replication study was statistically different from the original (Goal 5). We did not develop these studies expecting a null finding, so Goals 2 and 6 are not presently applicable. Consistent with Goal 3, we reported confidence intervals (CIs) around our effect sizes to address accuracy, but fully committing to Goal 3 would require a different form of sample size planning (accuracy in parameter estimation; see Maxwell, Kelley, & Rausch, 2008). Further, because we had the advantage of presenting findings from several replication studies, rather than a single replication, we conducted a modified version of Goal 4, wherein we meta-analytically combined the seven replication study results rather than combining a single replication with the original. By using these cutting-edge methods to evaluate the CREP replications, our results provided a more complete picture of replication than the majority of previously published replication studies, many of which used only a simple significance test on a single replication.

The Present Replication Meta-Analysis
In the present investigation, we meta-analyzed replications of Forest and Wood's (2012) Study 1 that were retrieved from CREP contributions with published data and/or results on the CREP project pages. These CREP research teams represented five diverse institutions, both public and private, in urban, suburban, and rural settings in the eastern, midwestern, and western United States, both teaching-focused and research-intensive, and ranging from fewer than 2,000 to over 50,000 students. The teams of 2–9 undergraduate students and faculty mentors followed the same methodology, with slight deviations.

Method
The General CREP Procedure
The CREP Advisory Board (see https://osf.io/zt4k5/) selected high-impact studies for replication. Specifically, the board selected the top three or four most-cited articles from the top journal in each of nine subdisciplines from the prior three years. Studies were selected based on high feasibility for undergraduate researchers. Forest and Wood's (2012) Study 1 was selected for this reason by the CREP team in 2015. Authors of selected articles received a standard letter informing them of the purpose of the CREP project and how their study was selected. Authors were able to provide further suggestions to the project leaders before studies were published for replication. In the present research, the authors provided no further suggestions for the replication.

A list of selected studies was published on the CREP project page (see https://osf.io/flaue/). Potential contributors (most often undergraduates, but also their faculty mentors) reviewed the list and informed project leaders of their intention to replicate as either Direct replications (matching the procedures of the original study as closely as possible) or Direct-Plus replications (where additional dependent variables can be measured after the original procedure is complete).

Contributors and their faculty mentors (whose function was to guide contributors and ensure high quality procedures for data collection and analyses) compiled evidence of their ability to conduct the study (completed study materials, a video of the experimental procedure, and evidence of institutional review board approval). Contributors' project materials were reviewed by a CREP team consisting of three faculty reviewers (an Executive Reviewer plus two reviewers) and a student administrative advisor. After receiving approval of the replication materials and procedure, contributors preregistered their studies before data collection. The CREP team specified the minimum sample size required for the replication; this size was generally set at the original study's sample size, which was small enough to be feasible for undergraduate student researchers, but set at 100 for larger studies. All studies in this meta-analysis received this approval by the CREP team, collected the minimum number of participants specified by the CREP (80 for the present study because this was the sample size in the original study), posted their raw dataset, and reported a summary of results.

Participants in and Procedures of the Forest and Wood (2012) CREP Replications
Participants in all studies were undergraduate students at the five universities and colleges in the United States where the replications were conducted from February 2016 to June 2017 (see Table 1 for a list of the participating institutions). Sample sizes ranged from 80 to 374. Five participants who were not Facebook users were excluded from analyses. The average age in the seven datasets ranged from 19.3 to 26.2 (SDs ranging from 1.4 to 9.7), compared with 21.35 years for the original study. Across datasets, 67% of all participants were women and 33% were men, compared with 73% and 21% in the original study (6% were "undisclosed").² In three replications (BYU-Wi17, IIT-Sp16, and IIT-Sp17), race data were collected, but in all other studies no other demographic information was collected because the original Forest and Wood (2012) research did not indicate that any other demographic information was collected (see Table 2 for demographic statistics).

After informed consent, participants completed the 10-item Rosenberg Self-Esteem Scale (Rosenberg, 1965), which assesses global self-esteem on a scale of 1 to 4 (1 to 5 in two of the studies). Research teams combined all items such that higher scores indicated higher self-esteem. The score showed good reliability across datasets (α coefficients ranged from .88 to .91).³ The composite self-esteem score was mean centered in accord with the Forest and Wood (2012) analysis.

Next, participants completed a measure of Facebook use perceptions created by the original researchers (Forest & Wood, 2012). Three items each assessed the degree to which participants believed Facebook enabled them to express themselves and to connect with others. Perceived safety of self-disclosure on Facebook compared to in-person interactions was measured with nine items. Ten more items assessed perceived advantages of disclosing on Facebook versus FTF. All items were rated on a scale ranging from 1 to 7, with higher scores representing greater perceived expression, connection, safety, and advantages. Subscale scores showed adequate to good internal consistency (expression: α ranged from .70 to .80; connection: .68 to .84; safety: .84 to .91; advantages: .89 to .94; these compare to .88, .72, .87, and .93, respectively, in the original study).

Some research teams chose a Direct-Plus replication and included additional personality-type measures following the primary measures. Because they followed the primary measures, and the results are not relevant to the present research, they are not presented here. As the original research did not specify whether participants completed questionnaires in a lab or online, researchers employed both methods (three replications had participants complete questionnaires in the lab only, two used online only, and two used both methods). Participants completing questionnaires in the lab did so alone, except for the Southern Arkansas University replication, which ran participants in groups of two to six. Two other known differences across replications involved minor discrepancies in presentation of the Rosenberg Self-Esteem Scale: (a) one study (IIT-Sp16) had slight differences in item order and in wording on 4 out of 10 items (e.g., "At times I think I am no good at all" vs. "I think I am no good at all"), and (b) most replications used the original four-point scale, although two replications (IIT-Sp16 and IIT-Sp17) used a five-point scale. Three studies were Direct-Plus replications that included measures of personality, presented after participants completed the self-esteem and Facebook perception measures for the replication, in order to test novel hypotheses (see https://osf.io/726nx/ for a spreadsheet listing various study characteristics). These Direct-Plus measures were not included in the present analyses, although interested researchers can find the original data at the CREP OSF project (see https://osf.io/pcmq5/forks/ for links).

After completion of data collection, teams independently analyzed the results and posted their materials, data, and results on their project's OSF pages. Contributors did not always post analysis scripts, so procedures to account for missing data are not known for all studies. Two studies included analysis scripts that showed scores for all scales were mean averaged. The means for scores in most studies implied that the items were mean averaged, but the Gordon study's means implied items were summed.

²A test of equality of independent proportions revealed no significant difference in female proportions between the original and replication studies. The male difference barely reached significance, and fell short of significance when two of the five undisclosed in the original study were added to the male proportion.
³Alpha ranges are based on five datasets because one dataset did not contain scores on the individual items making up composites, and one raw dataset was not available. Also, Forest and Wood (2012) did not report alpha coefficients for the Rosenberg Self-Esteem Scale.
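To make the per-study scoring and analysis pipeline concrete, the following R sketch illustrates the kind of analysis described above: averaging the Rosenberg items into a composite, checking its internal consistency, mean-centering it, and regressing a Facebook-perception outcome on centered self-esteem to obtain a standardized coefficient. All column and file names here (rse_1 through rse_10, safety, replication_data.csv) are hypothetical placeholders, not the variable names used in the contributors' posted datasets, and this sketch is not the teams' actual OSF scripts.

# Hypothetical scoring sketch for one replication dataset (placeholder names).
library(psych)   # for alpha(); assumes reverse-scored Rosenberg items

dat <- read.csv("replication_data.csv")              # hypothetical file name

rse_items <- paste0("rse_", 1:10)                    # 10 Rosenberg items
dat$self_esteem <- rowMeans(dat[, rse_items], na.rm = TRUE)
alpha(dat[, rse_items])                              # internal consistency of the composite

# Mean-center the composite, as in the Forest and Wood (2012) analysis
dat$self_esteem_c <- scale(dat$self_esteem, center = TRUE, scale = FALSE)

# Simple regression for one outcome; with both variables standardized,
# the slope equals the standardized beta (and effect size r)
fit <- lm(scale(safety) ~ scale(self_esteem_c), data = dat)
summary(fit)$coefficients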


The Present Meta-Analysis
This meta-analysis synthesized seven direct preregistered replications of Forest and Wood (2012) Study 1 that were part of the CREP project (see https://osf.io/pcmq5). The Forest and Wood (2012) CREP replication project page on the OSF has 17 "forks" representing research teams or institutions that have started the replication process (see https://osf.io/pcmq5/forks/). These teams generated seven studies that met the minimum sample size and posted data and results for the study in the OSF repository. All seven studies were included in this meta-analysis.

Effect size extraction. The original Forest and Wood (2012) regression results included t statistics and degrees of freedom for all variables,⁴ so we chose to calculate effect size r for all four outcome variables in the study.⁵ The replication effect size r statistics were calculated in the same way for consistency. One replication study failed to report regression results, so we calculated the regression statistics from the study's dataset, which was made openly available on OSF. Effect size r represented the size and direction of the relationship of self-esteem scores to the four outcome variables (opportunity to express self, opportunity to connect with others, safety of self-disclosure, and advantageous self-disclosure). The effect sizes had a positive sign if self-esteem was positively associated with the outcome variable and a negative sign if the relationship was negative.

⁴For one variable (opportunity to express self), only a marginal t statistic (< 1.36) and nonsignificance were reported. Effect size r for this variable was inferred by using the mean degrees of freedom from the three other outcome variables, and calculating effect size r using a t statistic of 1.35.
⁵For the simple linear regressions considered in this meta-analysis, the standardized beta coefficient is equivalent to effect size r, but we used the calculated effect sizes because one result in Forest and Wood (2012) did not include the standardized beta coefficient.
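The conversion from a reported t statistic and its degrees of freedom to effect size r is a standard computation; a minimal R sketch is shown below as an illustration of the extraction step described above, not as the authors' actual OSF script. The example values are made up for illustration.

# Convert a regression t statistic and its degrees of freedom to effect size r:
# r = t / sqrt(t^2 + df). The sign of r follows the sign of t.
r_from_t <- function(t, df) t / sqrt(t^2 + df)

# Example with made-up values: t = -2.90 on 78 degrees of freedom
r_from_t(-2.90, 78)   # approximately -.31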

TABLE 1
Regression Results for Original and Replication Studies

Study     | Authors          | N   | Express β | p    | Connect β | p    | Safety β | p      | Advantageous β | p
Original  | Forest & Wood    | 80  | n/a*      | .181 | -.220     | .050 | -.310    | .005   | -.300          | .011
BYU-Sp16  | Wiggins et al.   | 90  | .183      | .085 | .119      | .264 | .083     | .437   | -.021          | .845
BYU-Wi17  | Foster et al.    | 90  | .124      | .245 | .048      | .651 | -.205    | .052   | -.193          | .069
Gordon    | Nicholson et al. | 80  | -.098     | .388 | -.209     | .062 | -.335    | .002   | -.300          | .007
IIT-Sp16  | Legate et al.    | 116 | .170      | .082 | .160      | .099 | -.445    | < .001 | -.241          | .008
IIT-Sp17  | Legate et al.    | 94  | .048      | .647 | .178      | .087 | -.396    | < .001 | -.173          | .096
MSU       | Tweten et al.    | 374 | .060      | .240 | .060      | .269 | -.280    | < .001 | -.110          | .032
SAU       | Everett et al.   | 81  | .060      | .594 | .237      | .033 | -.303    | .006   | -.235          | .034

Note. Beta coefficients are standardized (consistent with the original research). Effect size r is equivalent to the beta coefficient. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.
* The original study did not report a beta coefficient for this variable.

TABLE 2
Descriptive Statistics for All Replication Studies

          |     | Gender                 | Race/Ethnicity                            | Predictor & Outcome Mean (SD)
Study     | N   | Male | Female | Other  | White | Hispanic | Asian | Black | Other  | Self-esteem  | Express      | Connect      | Safety       | Advantageous
BYU-Sp16  | 90  | 23   | 67     |        | n/c   | n/c      | n/c   | n/c   | n/c    | 2.29 (0.24)  | 3.59 (1.30)  | 3.60 (1.41)  | 5.60 (1.13)  | 5.50 (1.13)
BYU-Wi17  | 90  | 19   | 71     |        | 70    | 9        | 3     | 1     | 7      | 2.93 (0.51)  | 4.26 (1.27)  | 4.40 (1.26)  | 2.30 (1.11)  | 3.22 (1.38)
Gordon    | 80  | 14   | 66     |        | n/c   | n/c      | n/c   | n/c   | n/c    | 50.48 (9.88) | 12.15 (4.08) | 11.76 (3.96) | 16.26 (8.39) | 22.18 (11.72)
IIT-Sp16  | 116 | 78   | 38     |        | 42    | n/c      | 39    | 13    | 21     | 3.78 (0.75)  | 3.94 (1.48)  | 3.90 (1.56)  | 2.53 (1.17)  | 3.52 (1.38)
IIT-Sp17  | 94  | 47   | 47     |        | 22    | n/c      | 53    | 7     | 12     | 3.81 (0.87)  | 3.96 (1.54)  | 3.81 (1.52)  | 2.47 (1.28)  | 3.08 (1.28)
MSU       | 374 | 112  | 260    | 2      | n/a   | n/a      | n/a   | n/a   | n/a    | 3.88 (0.68)  | 4.31 (1.42)  | 4.42 (1.46)  | 2.15 (1.06)  | 2.87 (1.40)
SAU       | 81  | 14   | 67     |        | n/c   | n/c      | n/c   | n/c   | n/c    | 3.06 (0.54)  | 4.97 (1.25)  | 4.67 (1.36)  | 2.39 (0.98)  | 3.21 (1.16)

Note. n/a = Data not available. n/c = Data not collected. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.

Meta-analytic method. Each study was a direct replication of the original, so the materials and methods were very close to the original. However, methodological variations among the studies (e.g., collecting data on computers vs. paper-and-pencil) and heterogeneity of the host institution populations led us to use a random- rather than fixed-effects model (Borenstein, Hedges, Higgins, & Rothstein, 2009). Analyses were conducted using the R package metafor (Viechtbauer, 2010). The rma function was used, which accepts the effect size r and N parameters (syntax is available on the OSF page https://osf.io/r659c/). We chose to use the restricted maximum likelihood (REML) model because it has been shown to exhibit less bias than the more commonly used DerSimonian and Laird (DL) model with small numbers of studies (Veroniki et al., 2016). The rma function converted effect size r to Fisher's z for the calculations, and we converted z back to effect size r for reporting of results (Borenstein et al., 2009). The function also reports the heterogeneity statistics Q, T², and I². Finally, we conducted a fail-safe number (FSN) analysis (Rosenberg, 2005) using the fsn function. The FSN analysis was intended to identify the number of nonsignificant, unreported studies that would need to be added to the meta-analysis to reduce a significant meta-analytic result to nonsignificance.

We also chose to analyze the effect size differences between the original study and each of the replications. Anderson and Maxwell (2016) proposed this as one of the goals of replication analysis (Goal 5, to assess whether the replication is clearly inconsistent with the original) and provided a statistical method for this goal. We adapted the method to use effect size r to estimate 95% CIs for the difference in effect sizes for each of the replications compared with the original study (Anderson & Maxwell, 2016; Bonett, 2008; syntax is available on the OSF, https://osf.io/r659c/). These CIs informed whether the replication effect sizes, r, were significantly larger or smaller than the original study's results.

Results
The original and replication study regression results are presented in Table 1. For each of the four outcome variables, we present the standardized beta and p value for the regressions. Table 2 shows each study's descriptive statistics for gender, race, and the mean and standard deviation for self-esteem and each of the four outcome variables. Table 3 shows the effect size differences between the original and replication studies and their 95% CIs.

TABLE 3
Replication vs. Original Effect Size r Comparisons

Advantageous
Study     | Replication r | Original r | Difference | 95% CI
BYU-Sp16  | -0.02         | -0.33      | -0.31      | [-0.59, -0.01]
BYU-Wi17  | -0.19         | -0.33      | -0.14      | [-0.42, 0.15]
Gordon    | -0.30         | -0.33      | -0.03      | [-0.31, 0.25]
IIT-Sp16  | -0.24         | -0.33      | -0.09      | [-0.35, 0.18]
IIT-Sp17  | -0.17         | -0.33      | -0.16      | [-0.43, 0.12]
MSU       | -0.11         | -0.33      | -0.22      | [-0.43, 0.01]
SAU       | -0.24         | -0.33      | -0.09      | [-0.37, 0.20]

Connect
Study     | Replication r | Original r | Difference | 95% CI
BYU-Sp16  | 0.12          | -0.22      | -0.34      | [-0.62, -0.04]
BYU-Wi17  | 0.05          | -0.22      | -0.27      | [-0.56, 0.03]
Gordon    | -0.21         | -0.22      | -0.01      | [-0.31, 0.29]
IIT-Sp16  | 0.16          | -0.22      | -0.38      | [-0.64, -0.09]
IIT-Sp17  | 0.18          | -0.22      | -0.40      | [-0.67, -0.10]
MSU       | 0.06          | -0.22      | -0.28      | [-0.50, -0.04]
SAU       | 0.24          | -0.22      | -0.46      | [-0.74, -0.15]

Safety
Study     | Replication r | Original r | Difference | 95% CI
BYU-Sp16  | 0.08          | -0.31      | -0.39      | [-0.66, -0.09]
BYU-Wi17  | -0.21         | -0.31      | -0.10      | [-0.38, 0.19]
Gordon    | -0.34         | -0.31      | 0.03       | [-0.25, 0.31]
IIT-Sp16  | -0.45         | -0.31      | 0.14       | [-0.10, 0.39]
IIT-Sp17  | -0.40         | -0.31      | 0.09       | [-0.17, 0.35]
MSU       | -0.28         | -0.31      | -0.03      | [-0.24, 0.20]
SAU       | -0.30         | -0.31      | -0.01      | [-0.29, 0.27]

Express
Study     | Replication r | Original r | Difference | 95% CI
BYU-Sp16  | 0.18          | 0.15       | -0.03      | [-0.32, 0.26]
BYU-Wi17  | 0.12          | 0.15       | 0.03       | [-0.27, 0.33]
Gordon    | -0.10         | 0.15       | 0.25       | [-0.06, 0.55]
IIT-Sp16  | 0.17          | 0.15       | -0.02      | [-0.30, 0.26]
IIT-Sp17  | 0.05          | 0.15       | 0.10       | [-0.20, 0.39]
MSU       | 0.06          | 0.15       | 0.09       | [-0.15, 0.32]
SAU       | 0.06          | 0.15       | 0.09       | [-0.22, 0.39]

Note. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.
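As an illustration of the random-effects approach described in the Meta-analytic method section, the sketch below fits a REML model in metafor to correlation effect sizes via Fisher's z, back-transforms the summary estimate, and runs a Rosenberg fail-safe number analysis. The numbers are the self-disclosure safety effect sizes and sample sizes reported in Figure 3 and Table 1; this is a minimal reconstruction for illustration, not the authors' OSF syntax (which is available at https://osf.io/r659c/).

# Minimal random-effects meta-analysis sketch (R, metafor); values taken from
# the safety-of-self-disclosure outcome reported in the article.
library(metafor)

dat <- data.frame(
  study = c("BYU-Sp16", "BYU-Wi17", "Gordon", "IIT-Sp16", "IIT-Sp17", "MSU", "SAU"),
  ri    = c(0.08, -0.21, -0.34, -0.45, -0.40, -0.28, -0.30),   # effect size r
  ni    = c(90, 90, 80, 116, 94, 374, 81)                      # sample size
)

# Convert r to Fisher's z (yi) with sampling variance (vi), then fit the REML model
dat_z <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)
res   <- rma(yi, vi, data = dat_z, method = "REML")

summary(res)                          # Q, tau^2 (T^2), I^2, summary effect in z units
predict(res, transf = transf.ztor)    # summary r with 95% CI and prediction interval

# Rosenberg (2005) fail-safe number: how many unreported null studies would be
# needed to bring the weighted summary effect to nonsignificance
fsn(yi, vi, data = dat_z, type = "Rosenberg")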


Opportunity to Express Self
Consistent with the original study, all seven replications showed that self-esteem was not a significant predictor of participants seeing Facebook as enabling them to express themselves. Five of the seven studies showed an effect size descriptively smaller than the original study (.06 to .12 vs. .15), but all 95% CIs for the effect size difference included zero, indicating that the differences in magnitude between the original and replication studies were not significantly different from zero.⁶ Even though the effect size meta-analysis revealed that the point-estimate meta-analytic effect size of the seven replication studies, r = .08, p = .021, was statistically different from zero, and the 95% CI and prediction interval (PI) indicated that future studies would show an effect size between .01 and .14 (smaller than the original), all individual replications found regression coefficients not significantly different from zero. A forest plot of the effect size estimates is presented in Figure 1. The between-study variance in the replication studies did not appear to be heterogeneous,⁷ Q(df = 6) = 4.75, p = .576; T² = 0, 95% CI [0, .03]; I² = 0%, 95% CI [0%, 79%]. The FSN was three studies. Effect size differences between the replications and the original study were small.

Opportunity to Connect With Others
Contrary to the results of the original study, six of the seven replication studies did not find self-esteem to be a significant predictor of opportunity to connect. It is worth noting that the original study obtained a p value of .05, and p values between .04 and .05 have been demonstrated to be relatively more likely in the case of a true null hypothesis than a false one (Lakens & Evers, 2014; Masicampo & Lalande, 2012). In all but one of the replications, the direction of the effect was opposite of the original study. Two of the replications found an effect size descriptively similar to the original study, with one of these in the opposite direction (-.21 to .24 vs. -.22); the remainder found effect sizes descriptively smaller than the original. Further, these descriptive differences were generally statistically significant: Five of the seven replications had effect sizes significantly smaller than the original study, as evidenced by 95% CIs for the difference in effect sizes that did not include zero. The point-estimate of the meta-analytic effect size was .09, 95% CI [-.01, .18], 95% PI [-.09, .26], p = .065. A forest plot is presented in Figure 2. The replication between-study variance showed limited evidence of heterogeneity, Q(df = 6) = 10.79, p = .095; T² = .006, 95% CI [0, .09]; I² = 44.39%, 95% CI [0%, 91.69%]. The FSN was 5 studies.

⁶Because of the relatively small sample sizes, the effect size difference CIs are quite wide, and so these results are interpreted with caution. We further note that these nonsignificant effect size differences are not sufficient evidence for effect size equivalence between the original and replication studies (Anderson & Maxwell, 2016; Goal 6), which would require a stronger burden of proof (an equivalence test) than nonsignificance implies.
⁷Heterogeneity statistics show considerable bias with small numbers of studies, so interpretation should be done with caution (Borenstein et al., 2009; von Hippel, 2015). We present heterogeneity statistics, but refrain from making conclusive interpretations.

FIGURE 1
Study     | Weight | Effect Size r [95% CI]
BYU-Sp16  | 9.62%  | 0.18 [-0.03, 0.37]
BYU-Wi17  | 9.62%  | 0.12 [-0.09, 0.32]
Gordon    | 8.52%  | -0.10 [-0.31, 0.12]
IIT-Sp16  | 12.50% | 0.17 [-0.01, 0.34]
IIT-Sp17  | 10.07% | 0.05 [-0.15, 0.25]
MSU       | 41.04% | 0.06 [-0.04, 0.16]
SAU       | 8.63%  | 0.06 [-0.16, 0.27]
Summary Effect Size | 0.08 [-0.01, 0.14]
Forest plot for meta-analysis of CREP replication studies showing effect sizes, weights, and 95% confidence intervals. The effect sizes represent the Opportunity to Express Self as predicted by self-esteem. Signs represent the direction of the relationship. Summary effect size includes prediction interval represented by a dashed line. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.

FIGURE 2
Study     | Weight | Effect Size r [95% CI]
BYU-Sp16  | 12.45% | 0.18 [-0.09, 0.32]
BYU-Wi17  | 12.45% | 0.05 [-0.16, 0.25]
Gordon    | 11.49% | -0.21 [-0.41, 0.01]
IIT-Sp16  | 14.62% | 0.16 [-0.02, 0.33]
IIT-Sp17  | 12.81% | 0.18 [-0.02, 0.37]
MSU       | 24.60% | 0.06 [-0.04, 0.16]
SAU       | 11.59% | 0.24 [0.02, 0.44]
Summary Effect Size | 0.09 [-0.01, 0.18]
Forest plot for meta-analysis of CREP replication studies showing effect sizes, weights, and 95% confidence intervals. The effect sizes represent the Opportunity to Connect with Others as predicted by self-esteem. Signs represent the direction of the relationship. Summary effect size includes prediction interval represented by a dashed line. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.
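For readers who want to see how an interval for the difference between a replication correlation and the original correlation (Table 3) can be formed, the sketch below uses a Fisher-z, back-transform ("MOVER"-style) construction in the spirit of Bonett (2008). It is a plausible reconstruction under stated assumptions, not the authors' actual OSF syntax, and it may differ from their computation in detail.

# Sketch: 95% CI for the difference between two independent correlations
# (replication r1 with n1 vs. original r0 with n0), built from back-transformed
# Fisher-z limits for each correlation. Illustrative only.
diff_ci <- function(r1, n1, r0, n0, conf = 0.95) {
  zcrit <- qnorm(1 - (1 - conf) / 2)
  lims  <- function(r, n) {            # back-transformed CI for a single r
    z  <- atanh(r)
    se <- 1 / sqrt(n - 3)
    tanh(c(z - zcrit * se, z + zcrit * se))
  }
  l1 <- lims(r1, n1); l0 <- lims(r0, n0)
  d  <- r1 - r0
  lower <- d - sqrt((r1 - l1[1])^2 + (l0[2] - r0)^2)
  upper <- d + sqrt((l1[2] - r1)^2 + (r0 - l0[1])^2)
  c(difference = d, lower = lower, upper = upper)
}

# Example: BYU-Sp16 advantageous outcome (r = -.02, n = 90) vs. original (r = -.33, n = 80)
diff_ci(-0.02, 90, -0.33, 80)   # roughly 0.31 [0.01, 0.59]; sign depends on direction of subtraction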


Self-Disclosure Safety
Consistent with the original study's findings, five of the seven replications found self-esteem to be a significant predictor of perceptions that self-disclosure on Facebook was safer than in FTF interactions. Three of these five had effect sizes descriptively similar in size and direction to the original (-.28 to -.34 vs. -.31), and two had descriptively larger effect sizes (-.40 to -.45). However, these descriptive differences in magnitude were generally not statistically significant: Six of the seven replications had effect size differences from the original that were not significantly different from zero. The point-estimate of the meta-analytic effect size was -.28, 95% CI [-.40, -.15], 95% PI [-.54, -.03], p < .001. The results are presented as a forest plot in Figure 3. The between-study variance showed some evidence of heterogeneity, Q(df = 6) = 18.69, p = .005; T² = .02, 95% CI [0, .15]; I² = 71.76%, 95% CI [27.06%, 94.69%]. The FSN was 133 studies.

Self-Disclosure Advantages
Four of the seven replication studies were consistent with the original study and showed self-esteem to be a significant predictor of perceptions that Facebook was advantageous for self-disclosure compared to FTF. The effect sizes of all studies were descriptively less than or equal to the original (-.02 to -.30 vs. -.30). Similarly to self-disclosure safety, though, the effect size differences comparing six of the seven replications to the original were not significantly different from zero. The point estimate of the meta-analytic effect size was -.16, 95% CI [-.23, -.10], 95% PI [-.24, -.08], p < .001. A forest plot is presented in Figure 4. The variance measures showed little evidence of heterogeneity, Q(df = 6) = 5.81, p = .445; T² = 0, 95% CI [0, .03]; I² = 6.08%, 95% CI [0%, 79.94%]. The FSN was 36 studies.

FIGURE 3
Study     | Weight | Effect Size r [95% CI]
BYU-Sp16  | 13.55% | 0.08 [-0.13, 0.28]
BYU-Wi17  | 13.55% | -0.21 [-0.40, -0.00]
Gordon    | 12.97% | -0.34 [-0.52, -0.13]
IIT-Sp16  | 14.72% | -0.45 [-0.58, -0.29]
IIT-Sp17  | 13.76% | -0.40 [-0.56, -0.21]
MSU       | 18.41% | -0.28 [-0.37, -0.18]
SAU       | 13.03% | -0.30 [-0.49, -0.09]
Summary Effect Size | -0.28 [-0.40, -0.15]
Forest plot for meta-analysis of CREP replication studies showing effect sizes, weights, and 95% confidence intervals. The effect sizes represent Self-Disclosure Safety as predicted by self-esteem. Signs represent the direction of the relationship. Summary effect size includes prediction interval represented by a dashed line. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.

FIGURE 4
Study     | Weight | Effect Size r [95% CI]
BYU-Sp16  | 10.18% | -0.02 [-0.23, 0.19]
BYU-Wi17  | 10.18% | -0.19 [-0.38, 0.02]
Gordon    | 9.06%  | -0.30 [-0.49, -0.09]
IIT-Sp16  | 13.04% | -0.24 [-0.40, -0.06]
IIT-Sp17  | 10.63% | -0.17 [-0.36, 0.03]
MSU       | 37.73% | -0.11 [-0.21, -0.01]
SAU       | 9.17%  | -0.24 [-0.44, -0.02]
Summary Effect Size | -0.16 [-0.23, -0.10]
Forest plot for meta-analysis of CREP replication studies showing effect sizes, weights, and 95% confidence intervals. The effect sizes represent Self-Disclosure Advantages as predicted by self-esteem. Signs represent the direction of the relationship. Summary effect size includes prediction interval represented by a dashed line. BYU = Brigham Young University–Idaho, Rexburg, ID; Gordon = Gordon College, Wenham, MA; IIT = Illinois Institute of Technology, Chicago, IL; MSU = Michigan State University, East Lansing, MI; SAU = Southern Arkansas University, Magnolia, AR.

Discussion
The current study explored the results of a recent effort to replicate findings from Forest and Wood's (2012, Study 1) study of self-esteem as a predictor of Facebook users' perceptions of the site's usefulness for self-expression, self-disclosure, and connection with others. The goal of the current research was to help better estimate the robustness of the original study's effects, and we provide mixed evidence in favor of the original study's findings and suggest the need for further testing.

The replications supported the original study's finding that self-esteem was not a significant predictor of perceptions that Facebook provided opportunities to express the self, with all studies having 95% CIs that include zero. The effect sizes were mostly smaller than the original, the summary effect size was about half of the original's effect size, and the PI [.01, .14] showed that 95% of future studies would find an effect size smaller than the original study's, all factors indicating the presence of publication bias in the original study. The PI came close to, but did not include, zero, so it may be that there is an effect, but one small enough to be difficult to detect. The FSN analysis, which indicated that only five unreported studies could reduce the meta-analytic effect size p value to nonsignificance, led us to be confident that future replications would find the same lack of an effect as our replications.


Another finding consistent with the original was that lower self-esteem was a significant predictor of perceiving Facebook as a safer place to self-disclose than FTF. Most of the replications found this result, although two of the seven studies had effects whose CIs included zero. The point estimate of the effect size in the replications (-.28) was close to the original (-.31). The 95% PI [-.40, -.15] indicated that future studies would be likely to find such an effect as well. These results also seemed to be somewhat robust, as indicated by a high FSN (133). We can conclude with some confidence that individuals with low self-esteem feel safer self-disclosing on Facebook than in FTF interactions, and that the size of this effect is considerable.

The replication studies were more mixed in their support of the original study's finding that self-esteem was a predictor of the perception that Facebook is advantageous for self-disclosure over FTF. Only four of the seven replication studies showed such an effect, with the other three showing nonsignificant effects. All of the replications showed effects in the same direction as the original, but with widely varying and smaller effect sizes than the original (-.02 to -.30 vs. -.33), the latter finding consistent with the original study's larger susceptibility to publication bias than replication studies. Nevertheless, the point-estimate of the effect size in the replications of -.16 was significantly different from zero, and the PI [-.24, -.08] indicated that future studies are likely to find such an effect. In addition, the moderately large FSN (36) indicated that the findings were relatively stable. Future studies seem likely to find that self-esteem would predict users' perception that Facebook was more advantageous for self-disclosure than FTF, but the effect should be smaller in size than the original study's finding.

The replication studies and their meta-analysis seem to contradict the original study's finding that low self-esteem was a significant predictor of perceptions that Facebook has opportunities to connect with others. The summary estimate of the effect size was opposite in direction to the original in six of the seven replications and nonsignificant in all but one. However, the meta-analytic effect size was .09 and significantly different from zero. We believe that it will be unlikely that future studies will find an effect similar to the original, supported by the 95% PI of [-.07, .27], which includes zero. These findings also seemed somewhat unstable, as shown by the low FSN (11 studies).

In summary, this meta-analysis of the CREP replications of Forest and Wood's (2012) Study 1 found that future researchers can be relatively confident of finding that low self-esteem predicts perceptions that Facebook is a safer and more advantageous place to self-disclose than FTF interactions. Our results also align with the original study's finding that self-esteem does not relate to perceiving opportunities to express oneself on Facebook. However, we are less confident that self-esteem will be shown to predict perceptions of Facebook as having opportunities to connect with others, as the original study found.

Implications of the Present Research
When choosing what information to self-disclose, people engage in a benefits-risk analysis to maximize the benefits of self-disclosure (e.g., social support) and minimize the risks (e.g., vulnerability; Bazarova & Choi, 2014). Self-disclosures on Facebook can be made without some of the immediate indicators of disapproval present in FTF interactions, and thus increase the tendency to self-disclose (Hollenbaugh & Ferris, 2014). This perspective is supported by the present research. Our meta-analysis confirmed the original study's findings that, when compared with individuals high in self-esteem, those with low self-esteem saw Facebook as a safer and more advantageous way of self-disclosing relative to FTF interactions.

The present research also provides some support for social compensation theory (McKenna & Bargh, 2000; Wilson, Gosling, & Graham, 2012). This theory suggests that, for those with lower self-esteem, the anxiety provoked by FTF interactions can be somewhat mitigated in online interactions. However, the CREP studies did not include a measure of FTF vs. online anxiety, making this an interesting direction for future research.

Eşkisu, Hoşoğlu, and Rasmussen (2017) recently suggested that social compensation theory might predict that those with lower self-esteem may use Facebook to express themselves and connect with others rather than the more anxiety-provoking FTF interactions.


Their study found a significant negative relationship (r = -.11) between self-esteem and using Facebook for acquaintance-related functions (e.g., being known by others and meeting new people), suggesting that those lower in self-esteem used Facebook for connection and expression. This finding runs contrary to the results of our meta-analysis, as our effect size estimates for connection and expression are positive rather than negative, and are weak and inconsistent. This suggests a number of possibilities, one of which is that their findings may only generalize to Turkish university students, possibly due to a move in the United States and United Kingdom away from Facebook and toward Instagram and Snapchat among those 13–24 years old (eMarketer, 2016, 2017). It is also possible that more collectivist cultures, like Turkey (Oyserman, Coon, & Kemmelmeier, 2002), with their emphasis on group-based self-perception, may prompt people to use Facebook for different reasons than those in individualist cultures. In more collectivist cultures, people with lower self-esteem may use Facebook to connect and express themselves to enhance their group membership. In contrast, in an individualist culture, with its emphasis on individual self-concept, people with greater self-esteem may see Facebook as a place to express their individual qualities and connect with other individuals. These hypotheses remain to be tested across cultures by future researchers.

Another possible explanation for our findings is that differences between the original and replication results might be related to changes in social media usage patterns from 2012 to 2016. By the time the replications occurred in 2016–17, both Snapchat and Instagram were among a new crop of social media outlets for US and UK participants that challenged Facebook as the preferred social networking site for the age cohort of the participants in this research (eMarketer, 2016, 2017). More specifically, among Facebook, Twitter, Instagram, and Snapchat, self-expression was recently found to be a significant predictor of usage only on Instagram and Snapchat among college-aged participants, and social interaction (connection in the terminology of Forest and Wood, 2012) was a greater motivation for use of Instagram and Snapchat than Facebook and Twitter (Alhabash & Ma, 2017). Participants in our replication studies might have been implicitly comparing Facebook usage to Snapchat and Instagram when rating its usefulness for self-expression and connection with others, whereas these platforms would not have been a consideration for participants in Forest and Wood's (2012) study. Thus, future researchers should take into account changes in usage patterns among the platforms in designing research on social media usage, motivations, and intrapersonal benefits.

Limitations and Caveats
A major limitation in the present research is the relatively small sample of studies in the CREP project. Seven studies provide important data about the replicability of a finding, but severely limit the inferences we feel confident to draw from the data. For example, heterogeneity measures are known to be biased for meta-analyses involving small numbers of studies (Borenstein et al., 2009; von Hippel, 2015). Consequently, we are reluctant to draw conclusive inferences from the heterogeneity statistics.

Another impediment to drawing better conclusions relates to some of the variations in the studies. For example, not all studies posted analysis scripts or narratives to help us determine such things as missing data handling, composite score calculations, and data exclusions. Data were not always collected or stored consistently (e.g., race was collected in some but not all studies; individual item-level data were not always stored). These issues indicate the need for more specific instructions for contributors, tailored to the study being replicated, to ensure consistent data handling and documentation.

We are also hesitant to propose moderators of the effects observed in the replication studies even though we know there are methodological and procedural differences between the studies (i.e., in-person vs. online administration; individual participants vs. groups; wording on questionnaires; additional measures). Because each of these potential moderating factors occurred in only one or two of the seven replications, we chose not to explore these issues in more depth. Future researchers designing coordinated, large-scale replication studies may want to design controlled tests of some of these factors.

When comparing the effect sizes of the four outcome variables between the original study and the replication studies, we are constrained in our inferences and conclusions by relatively wide CIs. This is primarily caused by the small sample size of the original study, as well as the rather small sample sizes of the replication studies. Generally, the sample sizes required for precision in comparing the difference in effect sizes will be larger than for judging simple significance. Future large-scale replication attempts should consider increasing sample size to attain adequate power and precision for inferences to be made more confidently (see Anderson & Maxwell, 2017, for a comparison of sample size planning for replication studies).


Conclusion
The awareness that some, if not many, single-study results in psychology may not be replicable has spurred a new interest in replication studies. Although single-study replications can help lend confidence to our conclusions about the veracity of the replicated study, the use of larger-scale, multiple, independent replication attempts can provide more and higher quality data. These data can be aggregated and used to more accurately estimate the effect size likely to be found in future studies.

We are encouraged that the emerging meta-science methods being used by large-scale replication efforts such as the Reproducibility Project for Psychology, Many Labs, and CREP projects will begin to contribute to greater confidence in our conclusions from findings that are highly cited in our top journals. Replication is neither a silver bullet, nor is it insignificant. It has been called "the demarcation criterion between science and nonscience" (Braude, 1979, p. 2) and "the coin of the scientific realm" (Loscalzo, 2012, p. 1211). It may not advance science by leaps and bounds, but it helps us move incrementally in the direction of truth.

References
Alhabash, S., & Ma, M. (2017). A tale of four platforms: Motivations and uses of Facebook, Twitter, Instagram, and Snapchat among college students. Social Media + Society, 3(1), 1–13. https://doi.org/10.1177/2056305117691544
Anderson, S. F., & Maxwell, S. E. (2016). There's more than one way to conduct a replication study: Beyond statistical significance. Psychological Methods, 21, 1–12. https://doi.org/10.1037/met0000051
Anderson, S. F., & Maxwell, S. E. (2017). Addressing the "replication crisis": Using original studies to design replication studies with appropriate statistical power. Multivariate Behavioral Research, 52, 305–324. https://doi.org/10.1080/00273171.2017.1289361
Bazarova, N. N., & Choi, Y. H. (2014). Self-disclosure in social media: Extending the functional approach to disclosure motivations and characteristics on social network sites. Journal of Communication, 64, 635–657. https://doi.org/10.1111/jcom.12106
Bonett, D. G. (2008). Meta-analytic interval estimation for bivariate correlations. Psychological Methods, 13, 173–181. https://doi.org/10.1037/a0012868
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. (2009). Introduction to meta-analysis. Chichester, UK: Wiley.
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., . . . Van't Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. https://doi.org/10.1016/j.jesp.2013.10.005
Braude, S. E. (1979). ESP and psychokinesis: A philosophical examination. Philadelphia, PA: Temple University Press.
Calin-Jageman, R., Lehmann, G., & Elliott, A. (2017). Romantic red–Meta-analysis on the effect of red on perceived attractiveness. Retrieved from https://osf.io/xy47p/
Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6, 1–11. https://doi.org/10.3389/fpsyg.2015.00621
Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., . . . Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012
Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J., Banks, J. B., . . . Nosek, B. A. (2017, July 17). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Retrieved from https://psyarxiv.com/q4emc
Edlund, J. E. (2016). Let's do it again: A call for replications in Psi Chi Journal of Psychological Research. Psi Chi Journal of Psychological Research, 21, 59–61. https://doi.org/10.24839/2164-8204.jn21.1.59
eMarketer. (2016). US social network usage StatPack. Retrieved from https://iab.si/files/default/eMarketer_Social_Network_Usage_StatPack_2016.pdf
eMarketer. (2017). Instagram, Snapchat adoption still surging in US and UK. Retrieved from https://emarketer.com/Articles/Print.aspx?R=1016369
Eşkisu, M., Hoşoğlu, R., & Rasmussen, K. (2017). An investigation of the relationship between Facebook usage, big five, self-esteem and narcissism. Computers in Human Behavior, 69, 294–301. https://doi.org/10.1016/j.chb.2016.12.036
Facebook. (2017). Company info–Facebook newsroom. Retrieved from newsroom.fb.com/company-info/
Faust, D., & Meehl, P. (2002). Using meta-scientific studies to clarify or resolve questions in the philosophy and history of science. Philosophy of Science, 69(S3), S185–S196. https://doi.org/10.1086/341845
Fiegerman, S. (2017, May 3). Facebook tops 1.9 billion monthly users. Retrieved from http://money.cnn.com/2017/05/03/technology/Facebook-earnings/index.html
Forest, A. L., & Wood, J. V. (2012). When social networking is not working: Individuals with low self-esteem recognize but do not reap the benefits of self-disclosure on Facebook. Psychological Science, 23, 295–302. https://doi.org/10.1177/0956797611429709
Grahe, J., Brandt, M. J., IJzerman, H., Legate, N., Wagge, J., Weisberg, Y. J., & Wiggins, B. J. (2016, October 8). Collaborative Replications and Education Project (CREP). Retrieved from http://www.osf.io/wfc6u
Grahe, J., & Hauhart, R. (2013). Describing typical capstone course experiences from a national random sample. Teaching of Psychology, 40, 281–287. https://doi.org/10.1177/0098628313501040
Grahe, J., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7, 605–607. https://doi.org/10.1177/1745691612459057
Hauhart, R. C., & Grahe, J. (2010). The undergraduate capstone course in the social sciences: Results from a regional survey. Teaching Sociology, 38, 4–17. https://doi.org/10.1177/0092055X09353884
Hollenbaugh, E. E., & Ferris, A. L. (2014). Facebook self-disclosure: Examining the role of traits, social cohesion, and motives. Computers in Human Behavior, 30, 50–58. https://doi.org/10.1016/j.chb.2013.07.055
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, e124. https://doi.org/10.1371/journal.pmed.0020124
John, L. K., Lowenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532. https://doi.org/10.1177/0956797611430953
Kemp, S. (2017). Facebook active users decline, mobile usage hits 5 billion and more. Retrieved from https://tnw.to/2rhyz7t
Klein, R. A., Ebersole, C. R., Cook, C. L., Nosek, B. A., Redford, L., Levitan, C., . . . Harton, H. C. (2017). Many Labs 4: Investigating effects of researcher expertise on replication outcomes. Retrieved from https://osf.io/8ccnw
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R., Jr., Bahník, S., Bernstein, M. J., . . . Cemalcilar, Z. (2014). Data from investigating variation in replicability: A "Many Labs" replication project. Journal of Open Psychology Data, 2, e4. https://doi.org/10.1027/1864-9335/a000178
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Jr., Alper, S., . . . Friedman, M. (2017). Many Labs 2: Investigating variation in replicability across sample and setting. https://doi.org/10.17605/OSF.IO/8CD4R
Lakens, D., & Evers, E. R. K. (2014). Sailing from the seas of chaos into the corridor of stability. Perspectives on Psychological Science, 9, 278–292. https://doi.org/10.1177/1745691614528520


Loscalzo, J. (2012). Irreproducible experimental results: Causes, (mis)interpretations, and consequences. Circulation, 125, 1211–1214. https://doi.org/10.1161/CIRCULATIONAHA.112.098244
Masicampo, E. J., & Lalande, D. R. (2012). A peculiar prevalence of p values just below .05. Quarterly Journal of Experimental Psychology, 65, 2271–2279. https://doi.org/10.1080/17470218.2012.711335
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. https://doi.org/10.1146/annurev.psych.59.103006.093735
McKenna, K. Y. A., & Bargh, J. A. (2000). Plan 9 from cyberspace: The implications of the Internet for personality and social psychology. Personality and Social Psychology Review, 4, 57–75. https://doi.org/10.1207/S15327957PSPR0401_6
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. https://doi.org/10.1126/science.aac4716
Oyserman, D., Coon, H. M., & Kemmelmeier, M. (2002). Rethinking individualism and collectivism: Evaluation of theoretical assumptions and meta-analyses. Psychological Bulletin, 128, 3–72. https://doi.org/10.1037/0033-2909.128.1.3
Pashler, H., & Wagenmakers, E. J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7, 528–530. https://doi.org/10.1177/1745691612465253
Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press.
Rosenberg, M. S. (2005). The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59, 464–468. https://doi.org/10.1111/j.0014-3820.2005.tb01004.x
Simons, D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9, 76–80. https://doi.org/10.1177/1745691613514755
Stroebe, W. (2016). Are most published social psychological findings false? Journal of Experimental Social Psychology, 66, 134–144. https://doi.org/10.1016/j.jesp.2015.09.017
Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9, 59–71. https://doi.org/10.1177/1745691613514450
Veroniki, A. A., Jackson, D., Viechtbauer, W., Bender, R., Bowden, J., Knapp, G., . . . Salanti, G. (2016). Methods to estimate the between-study variance and its uncertainty in meta-analysis. Research Synthesis Methods, 7, 55–79. https://doi.org/10.1002/jrsm.1164
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03
von Hippel, P. T. (2015). The heterogeneity statistic I² can be biased in small meta-analyses. BMC Medical Research Methodology, 15, 35. https://doi.org/10.1186/s12874-015-0024-z
Wilson, R. E., Gosling, S. D., & Graham, L. T. (2012). A review of Facebook research in the social sciences. Perspectives on Psychological Science, 7, 203–220. https://doi.org/10.1177/1745691612442904

Author Note. Dana C. Leighton, Behavioral and Social Sciences Department, Southern Arkansas University; Nicole Legate, Department of Psychology, Illinois Institute of Technology; Sara LePine, Department of Psychology, Gordon College; Samantha F. Anderson, Department of Psychology, University of Notre Dame; Jon Grahe, Department of Psychology, Pacific Lutheran University.
Dana C. Leighton is now at Psychology Department, Texas A&M University–Texarkana.
The authors acknowledge the students and faculty mentors of the research groups that conducted the replication studies: Bradford J. Wiggins, Isaac Pfleger, Abigail Olsen, Kendra Tilley, Katelyn Haar, Victoria Boomhower, Kylie Harris, Jared Roy Foster, Christopher Hooker, Brayden Decker, Michael Alexander Retallick, and Stephanie Molisi from Brigham Young University, Idaho; Stephanie Nicholson, Kaylee Seward, J. P. Gerber, and Sara LePine from Gordon College; Rafal Wojtowicz, Nicole Legate, Zofie Mandelski, Alexis Renk, Cynthia Jocelyn Chavez, Jacqueline Suriano, Sonia Kamdar, Jonathan Tacuri, Tasheica Lindsay, Sean Isaac Rafajko, Sunny Shah, Adeena Ahmed, Nadyah Mohiuddin, Aume Waheed, and Elizabeth Howard from Illinois Institute of Technology; Carol Tweten, Richard E. Lucas, Katie Solomon, and Jaazaniah Catterall from Michigan State University; and Delora Everett, Dana C. Leighton, and Megan Nicole Glass from Southern Arkansas University. The authors also acknowledge Psi Chi and the Center for Open Science, who provided funding for the CREP Research Awards. Special thanks to Psi Chi Journal reviewers for their support.
This manuscript qualifies for an Open Materials badge and an Open Data badge; the materials and data are available at https://osf.io/yb9cv/. This manuscript qualifies for a Preregistration badge; the data collection and data analysis methods were preregistered at each individual study's OSF page (for links to each project, see https://osf.io/yb9cv/). This manuscript qualifies for a Replication badge; it replicates a study reported by Forest and Wood (2012).
Correspondence regarding this article should be addressed to Dana C. Leighton, Psychology Department, Texas A&M University–Texarkana, TX 75503. E-mail: [email protected]
