Experimental Thinking

Experimental Thinking: A Primer on Social Science Experiments

James N. Druckman
[email protected]
Department of Political Science
Northwestern University
Scott Hall, 601 University Place
Evanston, IL 60208
Phone: 847-491-7450

To Marj and Dan Druckman

Table of Contents

List of Figures
List of Tables
Preface
Acknowledgements
Chapter 1: Why a Primer on Social Science Experiments?
Chapter 2: The Scientific Process and How to Think about Experiments
Chapter 3: Evaluating Experiments: Realism, Validity, and Samples
Chapter 4: Innovations in Experimental Designs: Opportunities and Limitations
Chapter 5: What to Do Before, During, and After an Experiment
Chapter 6: Designing “Good” Experiments
References

List of Figures

Figure 1-1: American Political Science Review Experimental Articles by Decade
Figure 4-1: Audit Study Logic
Figure 4-2: Pager’s Results
Figure 4-3: Washing Machine Profile 1
Figure 4-4: Washing Machine Profile 2
Figure 4-5: Immigrant Profile
Figure 4-6: Immigrant Conjoint Results

List of Tables

Table 2-1: Measurement Validity Concepts
Table 2-2: Assumptions Underlying Solutions to the Fundamental Problem of Causal Inference
Table 2-3: Experimental Approaches
Table 3-1: Types of Validity
Table 3-2: Generalization Questions
Table 4-1: Using Experiments to Study Policy
Table 5-1: ASK: Examples of How to Ask Research Questions
Table 5-2: Freese and Peterson’s (2017) Forms of “Replication”

Preface

Experiments are a central methodology in the social sciences. Scholars from every discipline regularly turn to experiments. Practitioners rely on experimental evidence in evaluating social programs, policies, institutions, and information provision. The last decade has seen a fundamental shift in experimental social science, thanks not only to experiments’ emergence as a primary methodology in many disciplines, but also to technological advances and evolving sociological norms (e.g., open science).
This book is about how to “think” about experiments in light of these changes. It argues that designing a good experiment is a slow-moving process (given the host of considerations), which runs counter to the fast-moving temptations currently available in the social sciences. The book discusses the place of experiments in the social science process, the assumptions underlying different types of experiments, the validity of experiments, the application of different designs (such as audit field experiments and conjoint survey experiments), how to arrive at experimental questions, the role of replications in experimental research, and the steps involved in designing and conducting “good” experiments. The goal is to ensure that social science research remains driven by important substantive questions and fully exploits the potential of experiments in a thoughtful manner.

Acknowledgements

In some ways, I have been writing this book for much of my life. At an early age, my mom, Marj Druckman, taught me how to systematically address problems and think about them from multiple angles. Meanwhile, my dad, Dan Druckman, exposed me to the social sciences and taught me how to think about experiments. He even invited me to work with him on my first experiment. I thank them both and dedicate this book to them. I also thank the many students, teachers, and colleagues who have shaped my thinking on experiments. Dozens of students have challenged me to think of experiments in new ways; they also have shown me how to creatively use experiments to address a wide range of questions. Several current and former students assisted in the writing of this book, including Robin Bayes, Adam Howat, Maya Novak-Herzog, Richard Shafranek, Andrew Thompson, and Anna Wang. Jake Rothschild insightfully commented on the entire book, and I am extremely appreciative.
I also thank my early mentors – particularly Ken Janda, Skip Lupia, and Mat McCubbins – for teaching me about social science methods and experiments. Mat read the first chapter and, in his typical fashion, asked “what’s your point?”, thereby forcing me to make it clearer. I am indebted to several colleagues who have talked with me about experiments and/or encouraged me to flesh out my thoughts over the years. A partial list includes Adam Berinsky, Toby Bolsen, Tabitha Bonilla, Cheryl Boudreau, John Bullock, Ethan Busby, Dennis Chong, James Chu, Fay Lomax Cook, Charles Crabtree, Catherine Eckel, David Figlio, Eli Finkel, D.J. Flynn, Jeremy Freese, Laurel Harbridge-Yong, Larry Hedges, Lenka Hrbková, Shanto Iyengar, Cindy Kam, John Kane, Martin Kifer, Samara Klar, Jim Kuklinski, Thomas Leeper, Neil Malhotra, Leslie McCall, Rose McDermott, Mary McGrath, Luke Miratrix, Dan Molden, the late Becky Morton, Kevin Mullinix, Diana Mutz, Tom Nelson, Dan O’Keefe, Mike Parkin, Erik Peterson, Dave Rand, Jenn Richeson, Josh Robison, John Barry Ryan, Monica Schneider, Libby Sharrow, the late Lee Sigelman, Nick Stagnaro, Ben Tappin, Alex Theodoridis, Paul Sniderman, Sophie Trawalter, Lynn Vavreck, Jan Voelkel, Robb Willer, and Teppei Yamamoto. I am particularly indebted to colleagues who read and commented on the book. Adam Levine did so, sending along nearly ten pages of profound comments. This led to substantial revisions, which Yanna Krupnikov, Matt Levendusky, and Rune Slothuus were kind enough to read – offering insight and encouragement. This is nothing new, as these colleagues have been vital sounding boards for years. The same is true of Don Green, who played a crucial role in supporting this endeavor. I thank Robert Dreesen from Cambridge, who guided me at every stage, and the anonymous reviewers whose comments pressed me to think more deeply about many of the topics covered in the book.
I am especially appreciative of Yoshi Ono, who arranged for me to give several presentations on experimentation in Japan during the summer of 2019. He was a superb host and is a great colleague. I also thank the participants who attended a conference on advances in experimental political science at Northwestern University in May 2019. Their presentations and conversations (and the edited volume, co-edited by Don Green and me, to which they contributed) significantly shaped my thinking. I thank the National Science Foundation (SES-1822286), the Ford Motor Company Center for Global Citizenship at the Kellogg School of Management at Northwestern University (directed by David Austen-Smith), the Institute for Policy Research at Northwestern University, and the Department of Political Science at Northwestern University for supporting the conference. All of these experiences formed the basis for this book. Finally, my family – Nikki, Jake, and Sam – have been a constant source of support in all the best ways.

Chapter 1: Why a Primer on Social Science Experiments?

In the 1909 American Political Science Association presidential address, A. Lawrence Lowell stated, “We are limited by the impossibility of experiment. Politics is an observational, not an experimental, science…” (Lowell 1910: 7). One hundred and ten years later, the Association’s president, Rogers Smith (2020: 15), raised the question of “whether an excessive emphasis on experiments will unduly constrict the questions political scientists ask…” Clearly, much has changed in political science.1 The same can be said about many social science disciplines, where experiments have evolved from a non-existent method to an accepted method to a primary method. Even psychology – where experiments have forever been a central approach – has experienced substantial changes in the last decade due to shifts in the social sciences.
Specifically, massive technological advances have facilitated data access and analysis, and social scientists from all disciplines have called for more “open science” practices that involve transparency and replication. There is a concern, however, that these changes may cause experimentalists to become “methods driven,” neither asking appropriate questions nor maximizing the potential of the method (e.g., Thelen and Mahoney 2015: 19). These apprehensions accentuate the need for careful discussion of the experimental method. That is the goal of this book: the hope is to provide readers with a way to “think” about experiments, both as users and consumers. I do this from the perspective of a political scientist and thus I discuss the evolution of experiments in political science and use many examples from political science.

1 While not my focus, another contrast between Lowell and Smith concerns their perspectives on race. Lowell served as the President of Harvard from 1909 to 1933, during which time he attempted to ban African-American students from living in freshman halls (Sollors et al. 1993). In contrast, a fair amount of Smith’s work explores the incorporation of minorities into political life, such as pointing out that the United States has an ascriptive tradition that involves sexism, racism, and nativism (Smith 1993). He states in his presidential address, “we are not going to be able to understand major political developments of the past, present, and future if we do not explore more deeply the politics of identity formation, using all methods that can help” (12). This comparison reveals the extent to which the discipline has changed from one that largely ignored race for the first part of its history (Blatt 2018) to one that now recognizes the central role of studying race and identity.
That said, the arguments I make and the suggestions I offer apply to any social science application of an experiment – which, as will be discussed in detail, I define as a study where an intervention (by a researcher or a natural event) provides the primary mechanism by which one attempts to make a causal claim. What follows can be read by those with no background and/or interest in political science. To be clear, this manuscript is not a vigorous defense of experiments, although it will become apparent that experiments have far-reaching applications. Further, the book is neither a textbook on experimental design and analyses – many such treatments exist – nor an advanced discussion of new developments, which is available in Druckman and Green’s (2021) edited volume.
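The definition above – an intervention as the primary mechanism behind a causal claim – can be illustrated with a short simulation. This sketch is my own illustration, not from the book; the function name and parameters are invented. The logic it demonstrates is standard: random assignment balances unobserved baseline differences across groups in expectation, so a simple difference in means recovers the average effect of the intervention.

```python
import random

random.seed(42)

def simulate_experiment(n=50_000, true_effect=2.0):
    """Simulate a randomized experiment and estimate the treatment effect.

    Each participant has an unobserved baseline outcome; a coin flip
    (the researcher's intervention) assigns them to treatment or control.
    """
    treated, control = [], []
    for _ in range(n):
        baseline = random.gauss(50, 10)   # unobserved individual baseline
        if random.random() < 0.5:         # random assignment to treatment
            treated.append(baseline + true_effect)
        else:
            control.append(baseline)
    mean = lambda xs: sum(xs) / len(xs)
    # Because randomization balances baselines across groups in
    # expectation, the difference in group means estimates the
    # average treatment effect.
    return mean(treated) - mean(control)

estimate = simulate_experiment()
```

With a large sample, the estimate lands close to the true effect of 2.0; without randomization (say, if higher-baseline participants selected into treatment), the same difference in means would confound baseline differences with the intervention's effect.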