Press Packet Table of Contents



Center for Open Science
Founder Biographies
What We Do
Infrastructure Projects
Open Science Framework
Metascience Projects
Reproducibility Project: Psychology
Reproducibility Project: Cancer Biology
Reproducibility Project: Many Labs I
Many Labs
CREP
Crowdsourcing a Dataset
Community Projects
Badges
Registered Reports
Statistical Consulting
Press Correspondents
Our Sponsors
Our Logos

What is the Center for Open Science?

In science, no individual is the arbiter of truth. Scientific knowledge accumulates by sharing information and reproducing results. Ironically, the reward structure for scientists does not always provide incentives for individuals to pursue openness and reproducibility. The Center for Open Science aims to change that.

Founded in March 2013 by Brian Nosek and Jeffrey Spies, the Center for Open Science (COS) is a non-profit science and technology startup dedicated to increasing the openness, integrity, and reproducibility of scientific research. That mission is a daily goal that drives all of the work we do.

COS has three primary activities: building infrastructure to support the research workflow and enable open practices, growing a community of scientists and stakeholders around open practices, and conducting metascience research to better understand the state of science while evaluating interventions to improve it.

Openness and reproducibility are fundamental to the advancement of science. However, in the current system, scientists are often asked to choose between their livelihood and their scientific values. This dilemma often results in choosing publication over reproducibility, or in forcing interesting findings out of insignificant results. The Center for Open Science seeks to enable open practices by providing resources and creating incentives for researchers to adopt them.

COS operates in three primary realms. First and foremost is infrastructure: one of our main projects is building research management software, the Open Science Framework (OSF), which connects all stages of the research lifecycle. The second area of focus is metascience; we study the science of science to gain an understanding of what leads to greater reproducibility. Our third branch is community building; we foster communities of researchers, journal editors, and other stakeholders around open science practices. All of our efforts support our mission of increasing the openness, integrity, and reproducibility of scientific research.

http://cos.io

The Founders

Brian Nosek
Co-founder and Director at the Center for Open Science. Nosek received a Ph.D. in Psychology from Yale University in 2002 and is a professor in the Department of Psychology at the University of Virginia. He received early career awards from the International Social Cognition Network (ISCON) and the Society for the Psychological Study of Social Issues (SPSSI). He also co-founded Project Implicit (http://projectimplicit.net/), an Internet-based multi-university collaboration of research and education about thoughts and feelings that exist outside of awareness or control.

Jeff Spies
Co-founder and CTO of the Center for Open Science. Spies received his PhD in Psychology from the University of Virginia.
Working with his doctoral advisor, Brian Nosek, he developed the Open Science Framework, which became the first project for the Center for Open Science. Through his management of the Center, he continues to pursue his ambition of improving science by making it more open and accessible.

The Three Domains of the Center for Open Science: Infrastructure, Metascience, Community

What We Do

Infrastructure: Organizational tools for scientists and researchers

The Open Science Framework (OSF)

The COS's flagship product is the Open Science Framework. The OSF is a free, open-source web application that helps scientists manage their entire research workflow and facilitates collaboration. Customizable privacy settings allow for easy sharing of a project between collaborators; sharing with the public is accomplished with a simple click of a button, if and when the researchers are ready to do so. Any project, or component of a project, can be made public at the researcher's discretion, making it easier for scientists to bring open practices into their work. Integrated version control and logging facilitate good scientific practice without any added burden on the user. Seamless integration with tools like Dropbox, Figshare, and Dataverse allows researchers to continue to use their favorite products while gaining added benefit from features of the OSF. Future integration with additional tools will help OSF users streamline workflows and increase efficiency.

Every user, file, project, and component on the OSF has its own persistent, unique identifier, allowing work to be cited. Every public resource has a statistics page displaying information about the number of others viewing and downloading your work. One-time sharing of private resources is possible through the creation of view-only links, most helpful for peer review. These links can be configured to anonymize the project contributor list in order to maintain blinded peer review.

The OSF and its features are under continual development. The OSF is open source, so the community is welcome to contribute to the effort.

http://osf.io

Metascience: Scientific inquiry into the state of science itself, focusing on reproducibility

Reproducibility Project: Psychology

The Reproducibility Project: Psychology is a large-scale, collaborative effort to understand the reproducibility of a sample of studies from the scientific literature. Involving more than 200 scientists from around the world, individuals or teams of researchers follow a structured protocol for designing and conducting a close, high-powered replication of key effects from the selected articles. The investigation samples articles from the 2008 issues of three prominent psychology journals: Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition.

The expectation of this effort is that we will learn about the overall rate of reproducibility in a sample of the published psychology literature, obstacles that arise in conducting effective replications of original study procedures, predictors of replication success, and aspects of a procedure that are or are not critical to a successful direct replication.
https://osf.io/ezcuj/wiki/home/

Reproducibility Project: Cancer Biology

The Reproducibility Project: Cancer Biology, a collaboration between Science Exchange and the Center for Open Science, is independently replicating 50 high-impact cancer biology studies that were published between 2010 and 2012. Through independent direct replications, the project aims to identify best practices, determine how to maximize reproducibility in the field, and facilitate an accurate accumulation of knowledge. This will enable impactful novel findings to be trusted and built upon by the scientific community.

The experimental protocol, materials, data, and results for each replication are openly available on the Open Science Framework. The studies will follow the Registered Reports format, in which peer review of proposed experimental designs and protocols is conducted prior to data collection to maintain quality assurance of these replications. Eventually, the results of these Registered Reports will be collected and published as a full replication study.

https://osf.io/e81xl/wiki/home/

Many Labs 1, 2, and 3

The Many Labs projects are crowdsourced metascience studies that take a packet of individual studies and test them in "many labs". This allows conditions, like participant compensation, to be compared, giving insight into which conditions might lead to a trend in replication success. The first project, Many Labs I, tested a packet of 12 experiments and was completed in 2014. It suggested that variations in samples and settings have little impact on the magnitude of the effects, though there were instances where effects were seen. These results led to two other Many Labs initiatives that expanded on the ideas of the first.

Many Labs 1: https://osf.io/abesq/
Many Labs 2: https://osf.io/8cd4r/wiki/home/
Many Labs 3: https://osf.io/ct89g/wiki/home/

CREP

The goal of the Collaborative Replications and Education Project (CREP) is to facilitate student research training and solidify research findings in psychological science through student participation in large-scale replication efforts.

https://osf.io/wfc6u/wiki/home/

Crowdsourcing a Dataset

Crowdsourcing Data Analysis is a method of data analysis in which multiple independent analysts investigate the same research question on the same data set in whatever manner they consider to be best.

https://osf.io/gvm2z/wiki/home/

Community: Building and supporting the open science community

Badges for Journals

Despite the importance of open communication for scientific progress, present norms do not provide strong incentives for individual researchers … Currently, Psychological Science, Journal of Social Psychology, European Journal of Personality, and So…
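The OSF identifiers that appear throughout this packet are what make these projects citable and machine-readable. As a minimal illustration (not part of the packet itself), the sketch below reads a public project's metadata through the OSF's public v2 REST API at https://api.osf.io/v2/; the attribute names shown are assumptions based on that API's JSON:API response format.

```python
# Minimal sketch: fetch a public OSF project by the GUID in its URL.
# Assumes the JSON:API layout ("data" -> "attributes") that api.osf.io
# returns; no authentication is needed for public projects.
import json
import urllib.request

NODE_ID = "ezcuj"  # Reproducibility Project: Psychology (https://osf.io/ezcuj/)
url = f"https://api.osf.io/v2/nodes/{NODE_ID}/"

with urllib.request.urlopen(url) as response:
    document = json.load(response)

attributes = document["data"]["attributes"]
print("Title:  ", attributes.get("title"))
print("Public: ", attributes.get("public"))
print("Created:", attributes.get("date_created"))
```

Because every user, file, project, and component carries such a persistent identifier, the same pattern extends to citing or auditing any public resource on the OSF.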