Surveys and Questionnaires


The NIHR Research Design Service for Yorkshire & the Humber

Authors: Nigel Mathers, Nick Fox, Amanda Hunn

This Resource Pack is one of a series produced by The NIHR RDS for the East Midlands / The NIHR RDS for Yorkshire and the Humber. The series has been funded by The NIHR RDS EM / YH. This Resource Pack may be freely photocopied and distributed for the benefit of researchers. However, it is the copyright of The NIHR RDS EM / YH and the authors and, as such, no part of the content may be altered without the prior permission, in writing, of the copyright owner.

Reference as: Mathers N, Fox N and Hunn A. Surveys and Questionnaires. The NIHR RDS for the East Midlands / Yorkshire & the Humber, 2007.

Nigel Mathers
Academic Unit of Primary Medical Care, Community Sciences Centre
Northern General Hospital, Herries Road, Sheffield S5 7AU

Nick Fox
School of Health and Related Research (ScHARR), University of Sheffield
Regent Court, 30 Regent Street, Sheffield S1 4DA

Amanda Hunn
Left COREC at the end of December 2006

Last updated: 2009

The NIHR RDS for the East Midlands
www.rds-eastmidlands.nihr.ac.uk
Division of Primary Care, 14th Floor, Tower Building, University of Nottingham, University Park, Nottingham NG7 2RD
Tel: 0115 823 0500
Leicester: [email protected]
Nottingham: [email protected]

The NIHR RDS for Yorkshire & the Humber
www.rds-yh.nihr.ac.uk
ScHARR, The University of Sheffield, Regent Court, 30 Regent Street, Sheffield S1 4DA
Tel: 0114 222 0828
Sheffield: [email protected]
Leeds: [email protected]
York: [email protected]

© Copyright of The NIHR RDS EM / YH (2009)

Table of Contents

1. Introduction
2. What is a survey?
3. Methods of collecting survey data
4. Sampling for surveys
5. Questionnaire design
6. Using questionnaires in postal surveys
7. Data analysis
Summary
Answers to exercises
Further reading and resources
Glossary
Appendix 1: The Fog Index

1. Introduction

The survey is probably the most commonly used research design in health services research and the social sciences. We have all been asked to take part in a survey at some time: as consumers we are asked about our shopping habits, and as users of services we are asked for our opinions of those services. The survey is a flexible research approach used to investigate a wide range of topics, and surveys often employ the questionnaire as a tool for data collection. This resource pack considers the use of surveys and questionnaires in health and social care research.

LEARNING OBJECTIVES

Having successfully completed the work in this chapter, you will be able to:

1. Understand why you might want to use a survey.
2. Describe how to select a sample for a survey.
3. Understand why you might want to use a questionnaire.
4. Understand how the method used for data collection influences the design of the questionnaire.
5. Distinguish between a structured questionnaire, a semi-structured questionnaire and a topic guide.
6. Design your own questionnaire and coding frame.
7. Distinguish between open-ended and closed questions.
8. List possible ways of increasing your response rate.

2. What is a survey?

Surveys are a very traditional way of conducting research. They are particularly useful for non-experimental descriptive designs that seek to describe reality. So, for instance, a survey approach may be used to establish the prevalence or incidence of a particular condition. Likewise, the survey approach is frequently used to collect information on attitudes and behaviour.

Some issues are best addressed by a classical experimental design in which participants are randomised to either an intervention group or a control group. In the real world, however, this is not always a practical design. There may be good reasons, either ethical or practical, why participants cannot be randomly assigned to a particular intervention. It may also be impossible to identify a control group, and control over the randomisation process can be difficult to achieve.

Surveys can take many forms. A survey of the entire population would be known as a census. Usually, however, surveys are restricted to a representative sample of the group the researcher is interested in, for reasons of practicality and cost-effectiveness. Most surveys take one of the following forms:

Cross-sectional Surveys

Surveys that are carried out at just one point in time are cross-sectional in design. They provide us with a snapshot of what is happening in that group at that particular time. They usually take a descriptive or exploratory form that simply sets out to describe behaviour or attitudes. So, for example, if you want to measure some aspect of client satisfaction, then a cross-sectional descriptive survey would be the recommended approach. Likewise, if you wish to establish the prevalence of depression amongst new mothers, a postal survey might be an appropriate approach.

Longitudinal Surveys

Alternatively, surveys can be longitudinal. A longitudinal survey, rather than taking a snapshot, paints a picture of events or attitudes over time. This may be a matter of months or years. There may be only two discrete surveys or there may be many repeated waves over a long period of time. Longitudinal surveys usually take one of two forms:

· cohort surveys - which follow the same group of individuals over time, or
· trend surveys - which take repeated samples of different people each time but always use the same core questions.

Cohort studies are particularly useful in tracking the progress of particular conditions over time, whereas trend studies set out to measure trends in public opinion and behaviour over time (the sketch at the end of this subsection illustrates the difference in how the two designs sample). For instance, take the client satisfaction survey mentioned earlier: if we wanted to compare levels of client satisfaction year on year, then a longitudinal trend survey would be recommended. With a trend study it is not necessary to interview the same individuals each time. In fact it is probably better to deliberately avoid the same people, since the very fact of participating in a survey can raise levels of awareness and change behaviour. This is particularly true if you are trying to measure awareness of a promotion campaign. A particularly well-known example of a trend survey is the General Household Survey, which is carried out on an annual basis.
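The distinction between cohort and trend designs is essentially a sampling decision, and a small sketch can make it concrete. The Python snippet below is illustrative only and is not part of the original pack; the population size, number of waves and sample size are assumed figures chosen for illustration. A cohort survey re-contacts the same sampled individuals at every wave, whereas a trend survey draws a fresh sample each time but repeats the same core questions.

```python
import random

# Hypothetical sampling frame: 10,000 service users, identified 0..9999 (assumed figures).
population = list(range(10_000))
waves = 3
sample_size = 500

random.seed(1)

# Cohort survey: draw one sample and re-contact the same individuals at every wave.
cohort = random.sample(population, sample_size)
cohort_waves = [cohort for _ in range(waves)]

# Trend survey: draw a fresh sample of different people at each wave,
# but ask the same core questions every time.
trend_waves = [random.sample(population, sample_size) for _ in range(waves)]

# The cohort waves are identical; the trend waves overlap only by chance.
print(all(wave == cohort_waves[0] for wave in cohort_waves))   # True
print(len(set(trend_waves[0]) & set(trend_waves[1])))          # small overlap, by chance only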
A cohort study is more difficult to carry out than a trend survey because the same individuals have to be traced over time, and inevitably some participants move house, some fall ill and die, and some simply refuse to continue to participate. This loss of participants is known as 'attrition'. Sample size calculations are even more important for cohort surveys because high levels of attrition can result in too small a sample in the final stages of the survey. Ideally, expected levels of attrition should be calculated and allowed for in the initial sample selection (see the illustrative sketch at the end of this section). This means that the early waves of the cohort may be larger than strictly necessary, but in turn it means that you will have adequate numbers in the final wave. A fine example of a cohort study is the National Child Development Study, which is based on an initial sample of children born in one week in 1958 and has continued to follow them over many years.

Explanatory or Correlational Surveys

We have described how some surveys seek only to describe events and attitudes. It is also possible for surveys to take an explanatory or correlational approach. This means that, using survey data, the researcher tries to explore causal relationships between two or more variables. Demonstrating a causal relationship using survey data will always be more difficult than using an experimental design; nevertheless, there will always be situations in which an experimental design is simply not possible. Using a longitudinal approach may also help in trying to identify a causal relationship. Statistical tests can be used to show statistically significant differences between groups in a survey, and confounding variables can be controlled for in the data analysis.

2.1 What are the advantages of using a survey?

· Surveys have internal and external validity - A survey which is based on some form of random sampling technique will produce a sample which is representative of the particular population under study and will produce findings which may be generalised to the wider population. Randomised clinical trials (RCTs), on the other hand, often have very stringent inclusion and exclusion criteria, which make the findings very difficult to generalise to the real world.

· Surveys are efficient - Because surveys can use a random sampling technique to recruit participants, relatively small sample sizes can be used to generate findings from which conclusions can be drawn about the whole population. They are thus a very cost-effective way of finding out what people do, think and want.

· Surveys can cover geographically spread samples - Surveys can be undertaken using a wide range of techniques, including postal questionnaires and telephone interviews. This means that participants who are widely dispersed can be accessed and included in the sample.

· Surveys may have ethical advantages - Since most surveys do not expose individuals to possibly invasive techniques or withhold treatment, they may be considered more ethical: the individuals included in a study are merely exposed to events that occur in the real world and would have taken place anyway.
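The resource pack itself does not include worked sample-size calculations, but the two points above - choosing a sample large enough to give adequately precise estimates, and inflating a cohort's initial sample to allow for attrition - can be illustrated with a short sketch. The Python code below is an illustrative example only, not part of the original pack: it uses the standard normal-approximation formula for estimating a proportion and assumes a constant rate of loss at each wave, and the function names and figures are assumptions made for illustration.

```python
import math

def sample_size_for_proportion(expected_p=0.5, margin_of_error=0.05, z=1.96):
    """Completed responses needed to estimate a proportion to within the given
    margin of error at roughly 95% confidence (z = 1.96), using the
    normal-approximation formula n = z^2 * p * (1 - p) / e^2."""
    n = (z ** 2) * expected_p * (1 - expected_p) / (margin_of_error ** 2)
    return math.ceil(n)

def inflate_for_attrition(final_n, attrition_per_wave, waves):
    """Initial recruitment needed so that roughly final_n participants are still
    taking part after the given number of follow-up waves, assuming a constant
    proportion is lost at every wave."""
    retention = (1 - attrition_per_wave) ** waves
    return math.ceil(final_n / retention)

# Cross-sectional example: estimate a prevalence to within +/- 5 percentage points.
needed = sample_size_for_proportion(expected_p=0.5, margin_of_error=0.05)
print(needed)   # 385

# Cohort example: keep that many respondents after 3 waves with 20% attrition per wave.
recruit = inflate_for_attrition(needed, attrition_per_wave=0.20, waves=3)
print(recruit)  # 752
```

With these assumed figures, the sketch suggests roughly 385 completed questionnaires for a single cross-sectional survey, and an initial recruitment of around 752 for a three-wave cohort expecting to lose 20% of participants at each wave.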