Sampling Methods for Web and E-Mail Surveys

Total pages: 16

File type: PDF, size: 1020 KB

11 Sampling Methods for Web and E-mail Surveys
Ronald D. Fricker, Jr

ABSTRACT

This chapter is a comprehensive overview of sampling methods for web and e-mail ('Internet-based') surveys. It reviews the various types of sampling method – both probability and non-probability – and examines their applicability to Internet-based surveys. Issues related to Internet-based survey sampling are discussed, including difficulties assembling sampling frames for probability sampling, coverage issues, and nonresponse and selection bias. The implications of the various survey mode choices on statistical inference and analyses are summarized.

INTRODUCTION

In the context of conducting surveys or collecting data, sampling is the selection of a subset of a larger population to survey. This chapter focuses on sampling methods for web and e-mail surveys, which taken together we call 'Internet-based' surveys. In our discussion we will frequently compare sampling methods for Internet-based surveys to various types of non-Internet-based surveys, such as those conducted by postal mail and telephone, which in the aggregate we refer to as 'traditional' surveys.

The chapter begins with a general overview of sampling. Since there are many fine textbooks on the mechanics and mathematics of sampling, we restrict our discussion to the main ideas that are necessary to ground our discussion on sampling for Internet-based surveys. Readers already well versed in the fundamentals of survey sampling may wish to proceed directly to the section on Sampling Methods for Internet-based Surveys.

WHY SAMPLE?

Surveys are conducted to gather information about a population. Sometimes the survey is conducted as a census, where the goal is to survey every unit in the population. However, it is frequently impractical or impossible to survey an entire population, perhaps owing to either cost constraints or some other practical constraint, such as that it may not be possible to identify all the members of the population.

An alternative to conducting a census is to select a sample from the population and survey only those sampled units. As shown in Figure 11.1, the idea is to draw a sample from the population and use data collected from the sample to infer information about the entire population. To conduct statistical inference (i.e., to be able to make quantitative statements about the unobserved population statistic), the sample must be drawn in such a fashion that one can both calculate appropriate sample statistics and estimate their standard errors. To do this, as will be discussed in this chapter, one must use a probability-based sampling methodology.

Figure 11.1 An illustration of sampling. When it is impossible or infeasible to observe a population statistic directly, data from a sample appropriately drawn from the population can be used to infer information about the population.

A survey administered to a sample can have a number of advantages over a census, including:

• lower cost
• less effort to administer
• better response rates
• greater accuracy.

The advantages of lower cost and less effort are obvious: keeping all else constant, reducing the number of surveys should cost less and take less effort to field and analyze. However, that a survey based on a sample rather than a census can give better response rates and greater accuracy is less obvious. Yet, greater survey accuracy can result when the sampling error is more than offset by a decrease in nonresponse and other biases, perhaps due to increased response rates. That is, for a fixed level of effort (or funding), a sample allows the surveying organization to put more effort into maximizing responses from those surveyed, perhaps via more effort invested in survey design and pre-testing, or perhaps via more detailed non-response follow-up.

What does all of this have to do with Internet-based surveys? Before the Internet, large surveys were generally expensive to administer and hence survey professionals gave careful thought to how to best conduct a survey in order to maximize information accuracy while minimizing costs. However, as illustrated in Figure 11.2, the Internet now provides easy access to a plethora of inexpensive survey software, as well as to millions of potential survey respondents, and it has lowered other costs and barriers to surveying. While this is good news for survey researchers, these same factors have also facilitated a proliferation of bad survey-research practice.

Figure 11.2 Banners for various Internet survey software (accessed January 2007).

For example, in an Internet-based survey the marginal cost of collecting additional data can be virtually zero. At first blush, this seems to be an attractive argument in favour of attempting to conduct censuses, or for simply surveying large numbers of individuals without regard to how the individuals are recruited into the sample. And, in fact, these approaches are being used more frequently with Internet-based surveys, without much thought being given to alternative sampling strategies or to the potential impact such choices have on the accuracy of the survey results. The result is a proliferation of poorly conducted 'censuses' and surveys based on large convenience samples that are likely to yield less accurate information than a well-conducted survey of a smaller sample.

Conducting surveys, as in all forms of data collection, requires making compromises. Specifically, there are almost always trade-offs to be made between the amount of data that can be collected and the accuracy of the data collected. Hence, it is critical for researchers to have a firm grasp of the trade-offs they implicitly or explicitly make when choosing a sampling method for collecting their data.

AN OVERVIEW OF SAMPLING

There are many ways to draw samples from a population – and there are also many ways that sampling can go awry. We intuitively think of a good sample as one that is representative of the population from which the sample has been drawn. By 'representative' we do not necessarily mean the sample matches the population in terms of observable characteristics, but rather that the results from the data we collect from the sample are consistent with the results we would have obtained if we had collected data on the entire population.
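To make the notion of a 'representative' sample, and the earlier warning about large convenience samples, concrete, here is a small simulation. It is not from the chapter; the population, group sizes, and rates are invented purely to show the mechanism: a 400-person simple random sample is compared with a much larger convenience sample that over-represents heavy Internet users.

```python
import random

random.seed(1)

# Invented population of 100,000 people; roughly 40% hold some attitude of
# interest, but it is much more common among the 30% who are heavy Internet users.
population = []
for _ in range(100_000):
    heavy_user = random.random() < 0.30
    holds_attitude = random.random() < (0.60 if heavy_user else 0.31)
    population.append((heavy_user, holds_attitude))

true_share = sum(attitude for _, attitude in population) / len(population)

# Well-conducted survey: a small simple random sample (every unit equally likely).
srs = random.sample(population, 400)
srs_share = sum(attitude for _, attitude in srs) / len(srs)

# Large convenience 'census attempt': heavy Internet users are five times as
# likely to end up in the sample, so it is big but unrepresentative.
convenience = [unit for unit in population
               if random.random() < (0.50 if unit[0] else 0.10)]
conv_share = sum(attitude for _, attitude in convenience) / len(convenience)

print(f"True population share:        {true_share:.3f}")
print(f"Simple random sample (n=400): {srs_share:.3f}")
print(f"Convenience sample (n={len(convenience)}): {conv_share:.3f}")
```

Even though the convenience sample ends up roughly fifty times larger, its estimate is pulled toward the over-represented group, while the small random sample lands close to the true value.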
Of course, the phrase 'consistent with' is vague and, if this was an exposition of the mathematics of sampling, would require a precise definition. However, we will not cover the details of survey sampling here.¹ Rather, in this section we will describe the various sampling methods and discuss the main issues in characterizing the accuracy of a survey, with a particular focus on terminology and definitions, in order that we can put the subsequent discussion about Internet-based surveys in an appropriate context.

Sources of error in surveys

The primary purpose of a survey is to gather information about a population. However, even when a survey is conducted as a census, the results can be affected by several sources of error. A good survey design seeks to reduce all types of error – not only the sampling error arising from surveying a sample of the population. Table 11.1 below lists the four general categories of survey error as presented and defined in Groves (1989) as part of his 'Total Survey Error' approach.

Errors of coverage occur when some part of the population cannot be included in the sample. To be precise, Groves specifies three different populations:

1 The population of inference is the population that the researcher ultimately intends to draw conclusions about.
2 The target population is the population of inference less various groups that the researcher has chosen to disregard.
3 The frame population is that portion of the target population which the survey materials or devices …

The survey sample then consists of those members of the sampling frame that were chosen to be surveyed, and coverage error is the difference between the frame population and the population of inference. The two most common approaches to reducing coverage error are:

• obtaining as complete a sampling frame as possible (or employing a frameless sampling strategy in which most or all of the target population has a positive chance of being sampled);
• post-stratifying to weight the survey sample to match the population of inference on some observed key characteristics.

Sampling error arises when a sample of the target population is surveyed. It results from the fact that different samples will generate different survey data. Roughly speaking, assuming a random sample, sampling error is reduced by increasing the sample size.

Nonresponse errors occur when data is not collected on either entire respondents (unit nonresponse) or individual survey questions (item nonresponse). Groves (1989) calls nonresponse 'an error of nonobservation'. The response rate, which is the ratio of the number of survey respondents to the number sampled, is often taken as a measure of how well the survey results can be generalized. Higher response rates are taken to imply a lower likelihood of nonresponse bias.

Measurement error arises when the survey response differs from the 'true' response. For example, respondents may not answer sensitive questions honestly for a variety of reasons, or respondents may misinterpret or make errors in answering questions.
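Two of the quantities defined above, the response rate and the post-stratification weights, can be made concrete with a short sketch. The counts below are invented and do not come from the chapter: 2,000 sampled units, 700 respondents, and a respondent pool that skews younger than the population of inference.

```python
# Response rate: the ratio of survey respondents to the number sampled.
n_sampled = 2_000
n_respondents = 700
response_rate = n_respondents / n_sampled
print(f"Response rate: {response_rate:.1%}")          # 35.0%

# Post-stratification: weight respondents so that the weighted sample matches
# known population shares on an observed key characteristic (here, age group).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # assumed population figures
respondents_by_group = {"18-34": 350, "35-54": 220, "55+": 130}  # assumed respondent counts

for group, pop_share in population_share.items():
    sample_share = respondents_by_group[group] / n_respondents
    weight = pop_share / sample_share   # >1 if under-represented, <1 if over-represented
    print(f"Age {group}: post-stratification weight = {weight:.2f}")
```

Respondent groups that are scarcer in the sample than in the population receive weights above one, and over-represented groups receive weights below one, so that weighted estimates line up with the population of inference on that characteristic.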
Recommended publications
  • Sampling and Fieldwork Practices in Europe
Jabkowski, Piotr; Kołczyńska, Marta (2020). Sampling and Fieldwork Practices in Europe: Analysis of Methodological Documentation From 1,537 Surveys in Five Cross-National Projects, 1981-2017. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 16(3), 186-207. https://doi.org/10.5964/meth.2795. Published Version (journal article), made available under a CC BY 4.0 licence (https://creativecommons.org/licenses/by/4.0).

[a] Faculty of Sociology, Adam Mickiewicz University, Poznan, Poland. [b] Department of Socio-Political Systems, Institute of Political Studies of the Polish Academy of Science, Warsaw, Poland. Received: 2019-03-23 • Accepted: 2019-11-08 • Published (VoR): 2020-09-30. Corresponding author: Piotr Jabkowski, Szamarzewskiego 89C, 60-568 Poznań, Poland. +48 504063762, E-mail: [email protected]

Abstract: This article addresses the comparability of sampling and fieldwork with an analysis of methodological data describing 1,537 national surveys from five major comparative cross-national survey projects in Europe carried out in the period from 1981 to 2017.
  • E-Survey Methodology
Chapter I: E-Survey Methodology. Karen J. Jansen (The Pennsylvania State University, USA), Kevin G. Corley (Arizona State University, USA), Bernard J. Jansen (The Pennsylvania State University, USA).

ABSTRACT: With computer network access nearly ubiquitous in much of the world, alternative means of data collection are being made available to researchers. Recent studies have explored various computer-based techniques (e.g., electronic mail and Internet surveys). However, exploitation of these techniques requires careful consideration of conceptual and methodological issues associated with their use. We identify and explore these issues by defining and developing a typology of “e-survey” techniques in organizational research. We examine the strengths, weaknesses, and threats to reliability, validity, sampling, and generalizability of these approaches. We conclude with a consideration of emerging issues of security, privacy, and ethics associated with the design and implications of e-survey methodology.

INTRODUCTION: For the researcher considering the use of electronic surveys, there is a rapidly growing body of literature addressing design issues and providing laundry lists of costs and benefits associated with electronic survey techniques (c.f., Lazar & Preece, 1999; Schmidt, 1997; Stanton, 1998). Perhaps the three most common reasons for choosing an e-sur…

…1999; Oppermann, 1995; Saris, 1991). Although research over the past 15 years has been mixed on the realization of these benefits (Kiesler & Sproull, 1986; Mehta & Sivadas, 1995; Sproull, 1986; Tse, Tse, Yin, Ting, Yi, Yee, & Hong, 1995), for the most part, researchers agree that faster response times and decreased costs are attainable benefits, while response rates differ based on variables beyond administration mode alone.
  • ESOMAR 28 Questions
Responses to ESOMAR 28 Questions

INTRODUCTION

Intro to TapResearch: TapResearch connects mobile, tablet and pc users interested in completing surveys with market researchers who need their opinions. Through our partnerships with dozens of leading mobile apps, ad networks and websites, we’re able to reach an audience exceeding 100 million people in the United States. We are focused on being a top-quality partner for ad hoc survey sampling, panel recruitment, and router integrations. Our technology platform enables reliable feasibility estimates, highly competitive costs, sophisticated quality enforcement, and quick-turnaround project management.

Intro to ESOMAR: ESOMAR (European Society for Opinion and Market Research) is the essential organization for encouraging, advancing and elevating market research worldwide. Since 1948, ESOMAR’s aim has been to promote the value of market and opinion research in effective decision-making. The ICC/ESOMAR Code on Market and Social Research, which was developed jointly with the International Chamber of Commerce, sets out global guidelines for self-regulation for researchers and has been undersigned by all ESOMAR members and adopted or endorsed by more than 60 national market research associations worldwide.

COMPANY PROFILE

1) What experience does your company have in providing online samples for market research? TapResearch connects mobile, tablet and pc users interested in completing surveys with market researchers who need their opinions. Through our partnerships with dozens of leading mobile apps, ad networks and websites, we’re able to reach an audience exceeding 100 million people in the United States – we’re currently adding about 30,000 panelists/day and this rate is increasing.
  • ESOMAR/GRBN Guideline for Online Sample Quality
ESOMAR/GRBN GUIDELINE FOR ONLINE SAMPLE QUALITY

ESOMAR, the World Association for Social, Opinion and Market Research, is the essential organisation for encouraging, advancing and elevating market research: www.esomar.org. GRBN, the Global Research Business Network, connects 38 research associations and over 3500 research businesses on five continents: www.grbn.org. © 2015 ESOMAR and GRBN. Issued February 2015. This Guideline is drafted in English and the English text is the definitive version. The text may be copied, distributed and transmitted under the condition that appropriate attribution is made and the following notice is included “© 2015 ESOMAR and GRBN”.

CONTENTS
1 INTRODUCTION AND SCOPE
2 DEFINITIONS
3 KEY REQUIREMENTS
3.1 The claimed identity of each research participant should be validated.
3.2 Providers must ensure that no research participant completes the same survey more than once.
3.3 Research participant engagement should be measured and reported on.
3.4 The identity and personal …
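Requirement 3.2 above, that no research participant completes the same survey more than once, is at its simplest a de-duplication check on completed interviews. Below is a minimal sketch with hypothetical field names; real providers also rely on identity validation and device fingerprinting rather than IDs alone.

```python
from collections import Counter

# Hypothetical completed interviews as (survey_id, participant_id) pairs.
completes = [
    ("S-1001", "p-17"), ("S-1001", "p-42"), ("S-1001", "p-17"),  # p-17 completed S-1001 twice
    ("S-1002", "p-17"), ("S-1002", "p-99"),
]

duplicates = {key: n for key, n in Counter(completes).items() if n > 1}
for (survey_id, participant_id), n in duplicates.items():
    print(f"Participant {participant_id} has {n} completes for survey {survey_id}")
```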
  • SAMPLING DESIGN & WEIGHTING
APPENDIX A: SAMPLING DESIGN & WEIGHTING

In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota.

The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) on the chance that there are some differences over time, that it would be possible to assign these differences to either shifts in sample designs, or changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether it was due to changes in sample design, or an actual increase in happiness.

There is considerable controversy and ambiguity about the merits of these two samples. Text book tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no easier solved than the question of what to do with the "pure" types.
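The "very happy" example in the excerpt is essentially a comparison of two proportions estimated from two survey years. A textbook two-proportion z-test, which (as the excerpt notes) strictly assumes a simple random rather than a clustered or quota sample, would look like the sketch below; the counts are invented for illustration.

```python
import math

# Invented counts: share saying "very happy" in two survey years.
n1, x1 = 1_500, 525   # 1974: 35% of 1,500 respondents
n2, x2 = 1_500, 675   # 1976: 45% of 1,500 respondents

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(f"Change in 'very happy': {p2 - p1:+.1%}, z = {z:.2f}")
# A design effect from clustering or the block-quota element would inflate the
# standard error, so the textbook z-statistic overstates the evidence.
```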
  • Sampling Methods: It’s Impractical to Poll an Entire Population—Say, All 145 Million Registered Voters in the United States
Sampling Methods

It’s impractical to poll an entire population—say, all 145 million registered voters in the United States. That is why pollsters select a sample of individuals that represents the whole population. Understanding how respondents come to be selected to be in a poll is a big step toward determining how well their views and opinions mirror those of the voting population.

To sample individuals, polling organizations can choose from a wide variety of options. Pollsters generally divide them into two types: those that are based on probability sampling methods and those based on non-probability sampling techniques. For more than five decades probability sampling was the standard method for polls. But in recent years, as fewer people respond to polls and the costs of polls have gone up, researchers have turned to non-probability based sampling methods. For example, they may collect data on-line from volunteers who have joined an Internet panel. In a number of instances, these non-probability samples have produced results that were comparable or, in some cases, more accurate in predicting election outcomes than probability-based surveys. Now, more than ever, journalists and the public need to understand the strengths and weaknesses of both sampling techniques to effectively evaluate the quality of a survey, particularly election polls.

Probability and Non-probability Samples

In a probability sample, all persons in the target population have a chance of being selected for the survey sample and we know what that chance is. For example, in a telephone survey based on random digit dialing (RDD) sampling, researchers know the chance or probability that a particular telephone number will be selected.
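The excerpt's point that in a probability sample "we know what that chance is" can be made concrete with the simplest case: a simple random sample from a fully enumerated frame (real RDD designs are more involved). The numbers below are purely illustrative.

```python
import random

random.seed(7)

# Illustrative frame of 10,000 'telephone numbers' and a sample of 500.
frame = [f"555-{i:04d}" for i in range(10_000)]
sample = random.sample(frame, 500)

inclusion_prob = len(sample) / len(frame)   # known by design: 0.05
design_weight = 1 / inclusion_prob          # each sampled number 'represents' 20 in the frame

print(f"Inclusion probability: {inclusion_prob:.3f}")
print(f"Design weight per sampled number: {design_weight:.0f}")
```

It is exactly this known inclusion probability, and the design weight derived from it, that non-probability samples such as volunteer Internet panels lack.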
  • 5.1 Survey Frame Methodology
Regional Course on Statistical Business Registers: Data Sources, Maintenance and Quality Assurance. Perak, Malaysia, 21-25 May 2018.

4.1 Survey frame methodology

REVIEW: For sampling purposes, a snapshot of the live register at a particular point in time is needed. The collection of active statistical units in the snapshot is referred to as a frozen frame. A sampling frame for a survey is a subset of the frozen frame that includes units and characteristics needed for the survey. A single frozen frame should be used for all surveys in a given reference period.

Creating sampling frames

SPECIFICATIONS: Three main things need to be specified to draw appropriate sampling frames:
▸ Target population (which units?)
▸ Variables of interest
▸ Reference period

CHOICE OF STATISTICAL UNIT: Enterprises are typically the most appropriate units to use for financial data. Establishments or kind-of-activity units are typically the most appropriate for production data. Establishments or local units should be used if regional disaggregation is necessary. Typically a single type of unit is used for each survey, but there are exceptions where target populations include multiple unit types. Enterprise groups are useful for financial analyses and for studying company strategies, but they are not normally the target populations for surveys because they are too diverse and unstable.

SURVEYS OF EMPLOYMENT: The sampling frames for these include all active units that are employers.
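The excerpt describes drawing a survey's sampling frame as a filtered subset of a frozen frame, specified by target population, variables of interest and reference period. A minimal sketch of that filtering step, using hypothetical unit records and field names rather than any real register layout:

```python
from dataclasses import dataclass

@dataclass
class StatisticalUnit:
    unit_id: str
    unit_type: str      # e.g. "enterprise" or "establishment"
    active: bool
    is_employer: bool
    region: str

# Hypothetical frozen frame: a snapshot of the live register at one point in time.
frozen_frame = [
    StatisticalUnit("U1", "enterprise", True, True, "Perak"),
    StatisticalUnit("U2", "establishment", True, False, "Penang"),
    StatisticalUnit("U3", "enterprise", False, True, "Perak"),
    StatisticalUnit("U4", "establishment", True, True, "Perak"),
]

# Sampling frame for an employment survey: all active units that are employers.
employment_frame = [u for u in frozen_frame if u.active and u.is_employer]
print([u.unit_id for u in employment_frame])   # ['U1', 'U4']
```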
  • Impact of Demographic Factors on Impulse Buying Behavior of Consumers in Multan-Pakistan
European Journal of Business and Management, www.iiste.org, ISSN 2222-1905 (Paper), ISSN 2222-2839 (Online), Vol. 7, No. 22, 2015.

Impact of Demographic Factors on Impulse Buying Behavior of Consumers in Multan-Pakistan

Prof. Dr. Abdul Ghafoor Awan, Dean, Faculty of Management and Social Sciences, Institute of Southern Punjab, Multan. Nayyar Abbas, MS Scholar, Department of Business Administration, Institute of Southern Punjab, Multan.

Abstract: The purpose of this study is to investigate the effect of demographic factors (gender, age, income, and education) on impulse buying behavior of consumers in Multan. The study adopted a quantitative approach. A structured questionnaire was used to survey 250 respondents (104 males and 146 females) who were selected using non-probability convenience sampling. Data were collected in different educational institutions of Multan. Different statistical methods like multiple regression, chi-square test and simple descriptive techniques were used to derive results from the data collected with the help of SPSS 17.0. The multiple regression and chi-square test results revealed that gender and age were significantly and inversely associated with impulse buying behavior of consumers. The results further indicated that income and education were significantly and directly associated with impulse buying behavior of consumers. The ANOVA results indicated that consumers’ demographic characteristics had significant influence on impulse buying and demographic characteristics (gender, age, income, and education) affect simultaneously impulse buying behavior of consumers. The findings of the study were consistent and supported by previous studies.
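The excerpt's analysis rests on standard tools, multiple regression and a chi-square test of independence, run in SPSS. As an illustration of the chi-square step only, here is a hand-rolled test on an invented 2×2 table of gender versus high/low impulse-buying score; the gender margins mirror the study's 104 males and 146 females, but the high/low split is made up.

```python
# Chi-square test of independence on an invented 2x2 table:
# rows = gender, columns = impulse-buying score (high, low).
observed = {
    "male":   {"high": 40, "low": 64},
    "female": {"high": 80, "low": 66},
}

row_totals = {g: sum(cols.values()) for g, cols in observed.items()}
col_totals = {}
for cols in observed.values():
    for level, count in cols.items():
        col_totals[level] = col_totals.get(level, 0) + count
grand_total = sum(row_totals.values())

chi_sq = 0.0
for g, cols in observed.items():
    for level, count in cols.items():
        expected = row_totals[g] * col_totals[level] / grand_total
        chi_sq += (count - expected) ** 2 / expected

print(f"Chi-square statistic (1 df): {chi_sq:.2f}")
# Compare against 3.84, the critical value for alpha = 0.05 with 1 degree of freedom.
```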
  • 1988: Coverage Error in Establishment Surveys
COVERAGE ERROR IN ESTABLISHMENT SURVEYS
Carl A. Konschnik, U.S. Bureau of the Census

I. Definition of Coverage Error

Coverage error, which includes both undercoverage and overcoverage, is defined as "the error in an estimate that results from (1) failure to include all units belonging to the defined population or failure to include specified units in the conduct of the survey (undercoverage), and (2) inclusion of some units erroneously either because of a defective frame or because of inclusion of unspecified units or inclusion of specified units more than once in the actual survey (overcoverage)" (Office of Federal Statistical Policy and Standards, 1978). Coverage errors are closely related to but clearly distinct from content errors, which are defined as the "errors of observation or objective measurement, of recording, of imputation, or of other processing which results in associating a wrong value of the characteristic with a specified unit" (Office of Federal Statistical Policy and Standards, 1978).

…in the survey planning stage results in a sampled population which is too far removed from the target population. Since estimates based on data drawn from the sampled population apply properly only to the sampled population, interest in the target population dictates that the sampled population be as close as practicable to the target population. Nevertheless, in the following discussion of the sources, measurement, and control of coverage error, only deficiencies relative to the sampled population are included. Thus, when speaking of defective frames, only those deficiencies are discussed which arise when the population which is sampled differs from the population intended to be sampled (the sampled population).

Coverage Error Source Categories

We will now look briefly at the two categories of coverage error--defective frames and …
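Undercoverage and overcoverage, as quoted in the excerpt, can be expressed as set differences between the population the survey intends to cover and the frame actually used. A toy sketch with hypothetical establishment identifiers (it ignores the duplicate-listing form of overcoverage):

```python
# Hypothetical establishment identifiers.
target_population = {"E01", "E02", "E03", "E04", "E05", "E06"}
frame = {"E01", "E02", "E04", "E05", "E07"}   # E07 is out of scope for the survey

undercoverage = target_population - frame   # eligible units missing from the frame
overcoverage = frame - target_population    # frame units not in the target population

print(f"Undercovered units: {sorted(undercoverage)}")   # ['E03', 'E06']
print(f"Overcovered units:  {sorted(overcoverage)}")    # ['E07']
print(f"Undercoverage rate: {len(undercoverage) / len(target_population):.0%}")
```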
  • American Community Survey Design and Methodology (January 2014) Chapter 15: Improving Data Quality by Reducing Non-Sampling Error
American Community Survey Design and Methodology (January 2014)
Chapter 15: Improving Data Quality by Reducing Non-Sampling Error
Version 2.0, January 30, 2014

Table of Contents
15.1 Overview
15.2 Coverage Error
15.3 Nonresponse Error
15.4 Measurement Error
15.5 Processing Error
15.6 Census Bureau Statistical Quality Standards
15.7 References
  • MRS Guidance on How to Read Opinion Polls
What are opinion polls? MRS guidance on how to read opinion polls. June 2016, www.mrs.org.uk

MRS Guidance Note: How to read opinion polls

MRS has produced this Guidance Note to help individuals evaluate, understand and interpret Opinion Polls. This guidance is primarily for non-researchers who commission and/or use opinion polls. Researchers can use this guidance to support their understanding of the reporting rules contained within the MRS Code of Conduct.

Opinion Polls – The Essential Points

What is an Opinion Poll? An opinion poll is a survey of public opinion obtained by questioning a representative sample of individuals selected from a clearly defined target audience or population. For example, it may be a survey of c. 1,000 UK adults aged 16 years and over. When conducted appropriately, opinion polls can add value to the national debate on topics of interest, including voting intentions. Typically, individuals or organisations commission a research organisation to undertake an opinion poll. The results of an opinion poll are either for private use or for publication.

What is sampling? Opinion polls are carried out among a sub-set of a given target audience or population and this sub-set is called a sample. Whilst the number included in a sample may differ, opinion poll samples are typically between c. 1,000 and 2,000 participants. When a sample is selected from a given target audience or population, the possibility of a sampling error is introduced. This is because the demographic profile of the sub-sample selected may not be identical to the profile of the target audience / population.
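The sampling error the guidance refers to is usually reported as a margin of error. For a simple random sample (an assumption real polls only approximate), the 95% margin of error for an estimated proportion p from n participants is 1.96 * sqrt(p * (1 - p) / n), which works out to roughly plus or minus 3 points for the c. 1,000-person polls mentioned in the excerpt. A small sketch:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 2_000):
    moe = margin_of_error(0.5, n)   # p = 0.5 gives the widest (most conservative) margin
    print(f"n = {n}: margin of error of about ±{moe:.1%}")
```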
  • 2021 RHFS Survey Methodology
2021 RHFS Survey Methodology

Survey Design. For purposes of this document, the following definitions are provided:
• Building—a separate physical structure identified by the respondent containing one or more units.
• Property—one or more buildings owned by a single entity (person, group, leasing company, and so on). For example, an apartment complex may have several buildings but they are owned as one property.

Target population: All rental housing properties in the United States, circa 2020.

Sampling frame: The RHFS sample frame is a single frame based on a subset of the 2019 American Housing Survey (AHS) sample units. The RHFS frame included all 2019 AHS sample units that were identified as:
1. Rented or occupied without payment of rent.
2. Units that are owner occupied and listed as “for sale or rent”.
3. Vacant units for rent, for rent or sale, or rented but not yet occupied.

By design, the RHFS sample frame excluded public housing and transient housing types (i.e. boat, RV, van, other). Public housing units are identified in the AHS through a match with the Department of Housing and Urban Development (HUD) administrative records. The RHFS frame is derived from the AHS sample, which is itself composed of housing units derived from the Census Bureau Master Address File. The AHS sample frame excludes group quarters housing. Group quarters are places where people live or stay in a group living arrangement. Examples include dormitories, residential treatment centers, skilled nursing facilities, correctional facilities, military barracks, group homes, and maritime or military vessels. As such, all of these types of group quarters housing facilities are, by design, excluded from the RHFS.
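The frame rules in the excerpt amount to a filter over AHS sample units: keep renter-occupied units, owner-occupied units listed for sale or rent, and vacant-for-rent units, and drop public housing and transient types. The sketch below uses hypothetical status codes; the real AHS variables and values differ.

```python
from dataclasses import dataclass

@dataclass
class AhsUnit:
    unit_id: str
    tenure: str           # hypothetical codes: "rented", "no_cash_rent", "owner", "vacant_for_rent"
    for_sale_or_rent: bool
    public_housing: bool
    transient_type: bool  # boat, RV, van, and so on

def in_rhfs_frame(unit: AhsUnit) -> bool:
    """Apply the inclusion/exclusion rules described in the excerpt (simplified)."""
    if unit.public_housing or unit.transient_type:
        return False
    if unit.tenure in ("rented", "no_cash_rent", "vacant_for_rent"):
        return True
    return unit.tenure == "owner" and unit.for_sale_or_rent

units = [
    AhsUnit("A1", "rented", False, False, False),
    AhsUnit("A2", "owner", False, False, False),           # owner occupied, not listed: excluded
    AhsUnit("A3", "owner", True, False, False),            # owner occupied, for sale or rent: included
    AhsUnit("A4", "rented", False, True, False),           # public housing: excluded
    AhsUnit("A5", "vacant_for_rent", False, False, True),  # transient type: excluded
]

print([u.unit_id for u in units if in_rhfs_frame(u)])   # ['A1', 'A3']
```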