Methods of Selecting Samples in Multiple Surveys to Reduce Respondent Burden

Charles R. Perry, Jameson C. Burt and William C. Iwig, National Agricultural Statistics Service
Charles Perry, USDA/NASS, Room 305, 3251 Old Lee Highway, Fairfax, VA 22030

KEY WORDS: Respondent burden, multiple component surveys, stratified design

Summary

The National Agricultural Statistics Service (NASS) surveys the United States population of farm operators numerous times each year. The list components of these surveys are conducted using independent designs, each stratified differently. By chance, NASS samples some farm operators in multiple surveys, producing a respondent burden concern. Two methods are proposed that reduce this type of respondent burden. The first method uses linear integer programming to minimize the expected respondent burden. The second method samples by any current sampling scheme; then, within classes of similar farm operations, it minimizes the number of times that NASS samples a farm operation for several surveys. The second method reduces the number of times that a respondent is contacted twice or more within a survey year by about 70 percent. The first method will reduce this type of burden even further.

Introduction

The National Agricultural Statistics Service (NASS) surveys the United States population of farm operators numerous times each year. Some surveys are conducted quarterly, others are conducted monthly and still others are conducted annually. Each major survey uses a list dominant multiple frame design and an area frame component that accounts for that part of the population not on the list frame. The list frame components of these surveys constitute a set of independent surveys, each using a stratified simple random sample design with different strata definitions. With the current procedures some individual farm operators are sampled for numerous surveys while other farm operators with similar design characteristics are hardly sampled at all. Within the list frame component, two methods of sampling are proposed that reduce this type of respondent burden.

Historically, NASS has attempted to reduce respondent burden and also reduce variance. In 1979, Tortora and Crank considered sampling with probability inversely proportional to burden. Noting a simultaneous gain in variance with a reduction in burden, NASS chose not to sample with probability inversely proportional to burden. NASS has reduced burden on the area frame component of its surveys. There, a farmer sampled on one survey might be exempt from another survey, or farmers not key to that survey might be sampled less intensely. Statistical agencies in other countries have also approached respondent burden. For example, the Netherlands Central Bureau of Statistics does some co-ordinated or collocated sampling, ingeniously conditioning samples for one survey on previous surveys (de Ree, 1983).

Formal Description of Methods I and II

Method I is formally described by four basic tasks.

(a) Cross-classify the population by the stratifications used in the individual surveys. This produces the coarsest stratification of the population that is a substratification of each individual stratification.

(b) Proportionally allocate each of the individual stratified samples to the substrata. Use random assignment between substrata where necessary.

(c) Apply integer linear programming within each substratum to assign the samples to the labels of units belonging to the substratum so that the respondent burden is minimized.

(d) Randomize the labels to the units of the substratum. The final assignment within each substratum is a simple random sample with respect to each of the proportionally allocated samples.

Method II is formally described by four basic tasks.
(a) Using an equal probability of selection technique within a stratum, select independent stratified samples for each survey. Notice that the equal probability of selection criterion permits efficient zonal sampling techniques on each survey within strata. Currently, within strata the samples are selected systematically with records essentially in random order.

(b) Substratify the population by cross-classifying the individual farm units according to the stratifications used in the individual surveys.

(c) Randomly reassign within each substratum the samples associated with units having excess respondent burden to units having less respondent burden.

(d) Iterate the reassignment process until it minimizes the number of times that NASS samples a farm operator for several surveys in the substratum.

For both methods, define respondent burden by an index that represents the comparative burden on each individual sampling unit in the population. Each survey considered is assigned a burden value. When a sampling unit is selected for multiple surveys, the burden index may be additive or some other functional form dependent on the individual survey burden values. Consequently, each sampling configuration is assigned a unique respondent burden index.

For any reasonable respondent burden index, the first method minimizes the expected respondent burden. This follows easily from the following observations, where it is assumed that for each of the original surveys an equal probability of selection mechanism (epsm) is used within strata. First, from the independence of the original sample designs, it follows that for each individual unit the expected burden from the original stratified samples is equal to the expected respondent burden using proportional allocation followed by epsm sampling within substrata. Since the respondent burden over any population is the sum of the respondent burden on the individuals of the population, the equality holds for the entire population or any subpopulation, including the substrata. That is, the expected respondent burden over any arbitrary substratum for the proportionally allocated samples is equal to the expected respondent burden of the original stratified sample allocations over the substratum. Originally, these allocations are random to each substratum, constrained only so that the substratum sample sizes sum to their stratum sample size. Second, for the first method the respondent burden is minimized over each substratum by the linear programming step.

Regarding variance reduction, this means that if the original sample was selected using simple random sampling within each stratum, then the first method reduces respondent burden without any offsetting increase in variance, since proportional allocation is at least as efficient as simple random sampling. However, the first method would be less efficient for variance than zonal sampling unrestricted by burden. But the second method, by reallocating some zonal sampling units to reduce respondent burden, may only slightly increase variance over no reallocation, and then only when zonal sampling is effective.

A Simple Simulation of Method I

Method I reduces respondent burden in the following simulation of two surveys. Survey I samples n = 20 from N = 110. Survey II also samples n = 20 from N = 110, though each of its strata has either a larger or smaller population size (N·1 = 40 and N·2 = 70) than the corresponding strata of survey I (N1· = 30 and N2· = 80). Here, the first subscript corresponds to the first survey, with its strata 1 and 2. Similarly, the second subscript corresponds to the second survey. For example, N21 = 30 corresponds to the size of the population in stratum 2 of survey I and in stratum 1 of survey II, while n̄(1)21 = 3.75 corresponds to the proportional allocation of survey I's stratum 2 sample, n(1)2· = 10, to the population in both stratum 2 of survey I and stratum 1 of survey II.

                        Survey II
                Stratum 1           Stratum 2

Survey I        N11 = 10            N12 = 20            N1· = 30
Stratum 1       n̄(1)11 = 3.33       n̄(1)12 = 6.67       n(1)1· = 10
                n̄(2)11 = 2.5        n̄(2)12 = 2.85

Survey I        N21 = 30            N22 = 50            N2· = 80
Stratum 2       n̄(1)21 = 3.75       n̄(1)22 = 6.25       n(1)2· = 10
                n̄(2)21 = 7.5        n̄(2)22 = 7.15

                N·1 = 40            N·2 = 70
                n(2)·1 = 10         n(2)·2 = 10

With two surveys, at most we will sample a respondent twice. For the above two surveys, without any proportional allocation, we simulated two independent stratified simple random samples 3 million times. These simulated samples produced, on average, 3.6 double hits for the whole population of 110 potential respondents, and four percent of the simulations produced 7 or more double hits. With the proportional allocations indicated in the diagram for Method I, the population exceeds the total sample for both surveys in each substratum, so no sampling unit needs to be selected for both surveys. The high respondent burdens of independent sampling are reduced to 0 double hits with Method I!

Operational Description

Basic Notation

Let U = {u_i}, i = 1, …, N, be a finite population of size N. Suppose that U is surveyed on K occasions and that on each occasion a different independent stratified design is used. For these K stratified designs, denote the survey occasion by k = 1, 2, …, K and let us use the following notation.

… estimates. These are the same restrictions that apply to the permanent random number techniques discussed by Ohlsson (1993).

Method I

Using this notation for Method I, we next describe a sequence of simple data manipulation steps that can be used operationally to perform tasks (a) through (d) above for each of the K surveys.

Suppose that each unit, u_i, of the population U has been stratified for each of the K surveys. Further suppose that this information has been entered into a file containing N records, so that the ith record contains the stratification information for unit i. To be definitive, assume that the variable S(k) denotes the stratum classification code for survey k and that S(k, i) denotes the value of the stratum classification code for unit u_i.

For each survey k (k = 1, 2, …, K) perform the following sequence of operations.

(a) Sort the data file by the variables S(k), …, S(K), S(1), …, S(k − 1). This will hierarchically arrange the records of the
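The baseline figure of about 3.6 expected double hits is easy to check with a small Monte Carlo sketch. This is not the authors' code; it uses the substratum sizes from the two-survey example above, and 20,000 replicates stand in for the paper's 3 million:

```python
import random

# Substratum sizes from the example: each unit carries a
# (survey I stratum, survey II stratum) label.
sizes = {(1, 1): 10, (1, 2): 20, (2, 1): 30, (2, 2): 50}
units = [(s1, s2, i) for (s1, s2), n in sizes.items() for i in range(n)]

# Stratum membership under each survey's independent stratified design.
survey1 = {h: [u for u in units if u[0] == h] for h in (1, 2)}  # N1. = 30, N2. = 80
survey2 = {h: [u for u in units if u[1] == h] for h in (1, 2)}  # N.1 = 40, N.2 = 70

random.seed(1)
reps, total = 20_000, 0
for _ in range(reps):
    # Two independent stratified simple random samples of 10 per stratum.
    s1 = {u for h in (1, 2) for u in random.sample(survey1[h], 10)}
    s2 = {u for h in (1, 2) for u in random.sample(survey2[h], 10)}
    total += len(s1 & s2)  # units selected for both surveys ("double hits")

print(round(total / reps, 1))  # close to the paper's 3.6
```

The expectation can also be computed directly: summing Nij · (n(1)i·/Ni·) · (n(2)·j/N·j) over the four substrata gives 10/12 + 20/21 + 30/32 + 50/56 ≈ 3.62, consistent with the paper's simulated 3.6.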
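The zero double hits under Method I in this example follow because, after proportional allocation, the combined samples fit inside every substratum. A minimal sketch of tasks (b) and (d) under that assumption (fractional allocations are floored here for simplicity, whereas the paper randomizes them between substrata):

```python
import math
import random

N = {(1, 1): 10, (1, 2): 20, (2, 1): 30, (2, 2): 50}  # substratum sizes
n1, N1 = {1: 10, 2: 10}, {1: 30, 2: 80}               # survey I: n(1)i., Ni.
n2, N2 = {1: 10, 2: 10}, {1: 40, 2: 70}               # survey II: n(2).j, N.j

random.seed(1)
double_hits = 0
for (i, j), size in N.items():
    a1 = math.floor(size * n1[i] / N1[i])  # survey I allocation to substratum (i, j)
    a2 = math.floor(size * n2[j] / N2[j])  # survey II allocation to substratum (i, j)
    assert a1 + a2 <= size                 # population exceeds the combined sample
    labels = list(range(size))
    random.shuffle(labels)                 # task (d): randomize labels to units
    pick1, pick2 = set(labels[:a1]), set(labels[a1:a1 + a2])  # disjoint blocks
    double_hits += len(pick1 & pick2)

print(double_hits)  # 0: no unit is selected for both surveys
```

The assertion is exactly the condition stated in the text: in each substratum the population exceeds the total sample for both surveys, so the two allocations can occupy disjoint label blocks.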
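Method II's reassignment step, tasks (c) and (d), can be sketched as a simple swap loop. This is a hypothetical illustration, not NASS's production procedure: within a substratum, each unit drawn by both surveys passes its survey II selection to a unit with no selections, until no double hits or no free units remain.

```python
import random

def reassign(units, sample1, sample2, rng=random):
    """Move survey II selections off units already selected for survey I,
    handing them to units in the substratum with no selections at all."""
    free = [u for u in units if u not in sample1 and u not in sample2]
    rng.shuffle(free)
    for u in sorted(sample1 & sample2):
        if not free:
            break                 # no under-burdened unit remains
        sample2.discard(u)        # unit u keeps only its survey I draw
        sample2.add(free.pop())   # a previously unsampled unit takes its place
    return sample2

# Toy substratum of 10 units; unit 2 starts as a double hit.
units = list(range(10))
s1, s2 = {0, 1, 2}, {2, 3, 4}
s2 = reassign(units, s1, s2, random.Random(1))
print(sorted(s1 & s2))  # []: the double hit has been reassigned
```

Because reassignment stays within the substratum and only trades one selection for another, each survey's within-substratum sample size is unchanged, which is why the method can only slightly affect variance.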