Dynamic Parameter Allocation in Parameter Servers


Alexander Renz-Wieland¹, Rainer Gemulla², Steffen Zeuch¹,³, Volker Markl¹,³
¹Technische Universität Berlin, ²Universität Mannheim, ³German Research Center for Artificial Intelligence

ABSTRACT

To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning tasks. Parameter servers ease the implementation of distributed parameter management, a key concern in distributed training, but can induce severe communication overhead. To reduce communication overhead, distributed machine learning algorithms use techniques to increase parameter access locality (PAL), achieving up to linear speed-ups. We found that existing parameter servers provide only limited support for PAL techniques, however, and therefore prevent efficient training. In this paper, we explore whether and to what extent PAL techniques can be supported, and whether such support is beneficial. We propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks. We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.

[Figure 1: Parameter server (PS) performance for a large knowledge graph embeddings task (RESCAL, dimension 100), plotting epoch run time in minutes against parallelism (1x4 to 8x4 nodes x threads) for a classic PS (PS-Lite), a classic PS with fast local access, and the dynamic allocation PS (Lapse) incl. fast local access; run-time labels in the original plot range from 0.2h to 4.5h. The performance of the classic PSs falls behind the performance of a single node due to communication overhead. In contrast, dynamic parameter allocation enables Lapse to scale near-linearly. Details in Section 4.1.]

PVLDB Reference Format:
Alexander Renz-Wieland, Rainer Gemulla, Steffen Zeuch, Volker Markl. Dynamic Parameter Allocation in Parameter Servers. PVLDB, 13(11): 1877-1890, 2020. DOI: https://doi.org/10.14778/3407790.3407796

1. INTRODUCTION

To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning (ML) tasks. Distributed ML allows (1) for models and data larger than the memory of a single machine, and (2) for faster training by leveraging distributed compute. In distributed ML, both training data and model parameters are partitioned across a compute cluster. Each node in the cluster usually accesses only its local part of the training data, but reads and/or updates most of the model parameters. Parameter management is thus a key concern in distributed ML. Applications either manage model parameters manually using low-level distributed programming primitives or delegate parameter management to a parameter server (PS). PSs provide primitives for reading and writing parameters and handle partitioning and synchronization across nodes. Many ML stacks use PSs as a component, e.g., TensorFlow [1], MXNet [8], PyTorch BigGraph [27], STRADS [24], STRADS-AP [23], or Project Adam [9], and there exist multiple standalone PSs, e.g., Petuum [17], PS-Lite [28], Angel [22], FlexPS [18], Glint [20], and PS2 [64].

As parameters are accessed by multiple nodes in the cluster and therefore need to be transferred between nodes, distributed ML algorithms may suffer from severe communication overhead when compared to single-machine implementations. Figure 1 shows exemplarily that the performance of a distributed ML algorithm may fall behind the performance of single-machine algorithms when a classic PS such as PS-Lite is used. To reduce the impact of communication, distributed ML algorithms employ techniques [14, 63, 56, 4, 27, 44, 62, 41, 33, 15, 36] that increase parameter access locality (PAL) and can achieve linear speed-ups. Intuitively, PAL techniques ensure that most parameter accesses do not require (synchronous) communication; example techniques include exploiting natural clustering of data, parameter blocking, and latency hiding. Algorithms that use PAL techniques typically manage parameters manually using low-level distributed programming primitives.

Most existing PSs are easy to use, as there is no need for low-level distributed programming, but they provide only limited support for PAL techniques. One limitation, for example, is that they allocate parameters statically. Moreover, existing approaches to reducing communication overhead in PSs provide only limited scalability compared to using PAL techniques (e.g., replication and bounded staleness [17, 11]) or are not applicable to the ML algorithms that we study (e.g., dynamically reducing cluster size [18]).

In this paper, we explore whether and to what extent PAL techniques can be supported in PSs, and whether such support is beneficial. To improve PS performance and suitability, we propose to integrate dynamic parameter allocation (DPA) into PSs. DPA dynamically allocates parameters where they are accessed, while providing location transparency and PS consistency guarantees, i.e., sequential consistency. By doing so, PAL techniques can be exploited directly. We discuss design options for PSs with DPA and describe an efficient implementation of such a PS called Lapse.

Figure 1 shows the performance of Lapse for the task of training knowledge graph embeddings [40] using data clustering and latency hiding PAL techniques. In contrast to classic PSs, Lapse outperformed the single-machine baseline and showed near-linear speed-ups. In our experimental study, we observed similar results for multiple other ML tasks (matrix factorization and word vectors): the classic PS approach barely outperformed the single-machine baseline, whereas Lapse scaled near-linearly, with speed-ups of up to two orders of magnitude compared to classic PSs and up to one order of magnitude compared to state-of-the-art PSs. Figure 1 further shows that, although critical to the performance of Lapse, fast local access alone does not alleviate the communication overhead of the classic PS approach.

In summary, our contributions are as follows. (i) We examine whether and to what extent existing PSs support using PAL techniques to reduce communication overhead. (ii) We propose to integrate DPA into PSs to be able to support PAL techniques directly.

[Table 1: Per-key consistency guarantees of PS architectures, using representatives for types: PS-Lite [28] for classic and Petuum [59] for stale. The table distinguishes synchronous and asynchronous operation and whether location caches are off or on. Classic PSs and Lapse guarantee eventual, PRAM [30] (i.e., monotonic reads, monotonic writes, and read your writes), causal [19], and sequential [26] consistency, for Lapse assuming that the network layer preserves message order (which is true for Lapse and PS-Lite); stale PSs guarantee eventual consistency and PRAM consistency in only one of the two modes, but not causal or sequential consistency; no architecture guarantees serializability.]

... to what extent it is supported in existing PSs, and identify which features would be required to enable or improve support. Finally, we introduce DPA, which enables PSs to exploit PAL techniques directly (Section 2.3).

2.1 Basic PS Architectures

PSs [53, 2, 12, 17, 28] partition the model parameters across a set of servers. The training data are usually partitioned across a set of workers. During training, each worker processes its local part of the training data (often multiple times) and continuously reads and updates model parameters. To coordinate parameter accesses across workers, each parameter is assigned a unique key and the PS provides pull and push primitives for reads and writes, respectively; cf. Table 2. Both operations can be performed synchronously or asynchronously. The push operation is usually cumulative, i.e., the client sends an update term to the PS, which then adds this term to the parameter value.

Although servers and workers may reside on different machines, they are often co-located for efficiency reasons (especially when PAL techniques are used). Some architectures [28, 20, 22] run one server process and one or more worker processes on each machine; others [17, 18] use a single process with one server thread and multiple worker threads to reduce inter-process communication. Figure 2 depicts such a PS architecture with one server and three worker threads per node.

[Figure 2: PS architecture with server and worker threads co-located in one process per node; each of the three depicted nodes runs one server thread, three worker threads, and holds a partition of the parameters. Lapse employs this architecture.]

In the classic PS architecture, parameters are statically allocated to servers (e.g., via a range partitioning of the parameter keys) and there is no replication. Thus precisely one server holds the current value of a parameter, and this server is used for all pull and push operations on this parameter. Classic PSs typically guarantee sequential consistency [26] for operations on the same key. This means that (1) each worker's operations are executed in the order specified by the worker, and (2) the result of any execution is equivalent ...
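The pull/push interface described in Section 2.1 is small enough to sketch. The following toy, single-process Python class mirrors the primitives named above (pull for reads, cumulative push for writes); the class, its name, and its key-to-vector layout are illustrative assumptions, not the paper's Lapse implementation, and it omits partitioning, synchronization, and dynamic allocation entirely.

```python
import numpy as np

class ToyParameterServer:
    """Single-process sketch of the pull/push interface (illustration only)."""

    def __init__(self, keys, dim):
        # each parameter key maps to a dense value vector
        self.store = {k: np.zeros(dim) for k in keys}

    def pull(self, keys):
        # read: return the current values for the given keys
        return [self.store[k].copy() for k in keys]

    def push(self, keys, updates):
        # write: push is cumulative, so the server *adds* each update term
        for k, u in zip(keys, updates):
            self.store[k] += u

ps = ToyParameterServer(keys=range(4), dim=2)
ps.push([0, 1], [np.ones(2), 2 * np.ones(2)])
print(ps.pull([0, 1]))  # [array([1., 1.]), array([2., 2.])]
```

A real PS would partition the store across server processes and route each key's pull and push to the server that currently holds it; DPA additionally lets that owner change during training.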
Recommended publications
  • A Recursive Formula for Moments of a Binomial Distribution
    A Recursive Formula for Moments of a Binomial Distribution. Árpád Bényi ([email protected]), University of Massachusetts, Amherst, MA 01003, and Saverio M. Manago ([email protected]), Naval Postgraduate School, Monterey, CA 93943. While teaching a course in probability and statistics, one of the authors came across an apparently simple question about the computation of higher order moments of a random variable. The topic of moments of higher order is rarely emphasized when teaching a statistics course. The textbooks we came across in our classes, for example, treat this subject rather scarcely; see [3, pp. 265–267], [4, pp. 184–187], also [2, p. 206]. Most of the examples given in these books stop at the second moment, which of course suffices if one is only interested in finding, say, the dispersion (or variance) of a random variable X, D^2(X) = M_2(X) − M(X)^2. Nevertheless, moments of order higher than 2 are relevant in many classical statistical tests when one assumes conditions of normality. These assumptions may be checked by examining the skewness or kurtosis of a probability distribution function. The skewness, or the first shape parameter, corresponds to the third moment about the mean. It describes the symmetry of the tails of a probability distribution. The kurtosis, also known as the second shape parameter, corresponds to the fourth moment about the mean and measures the relative peakedness or flatness of a distribution. Significant skewness or kurtosis indicates that the data is not normal. However, we arrived at higher order moments unintentionally.
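A quick numerical companion to the excerpt, assuming SciPy is available: it checks the variance identity D^2(X) = M_2(X) − M(X)^2 for a binomial and reports the skewness and kurtosis mentioned above. This is plain moment computation, not the paper's recursive formula.

```python
from scipy.stats import binom

X = binom(n=10, p=0.3)        # illustrative parameters
M1 = X.moment(1)              # first raw moment, E[X] = np = 3.0
M2 = X.moment(2)              # second raw moment, E[X^2]
print(M2 - M1**2)             # 2.1 = np(1-p): D^2(X) = M_2(X) - M(X)^2
print(X.var())                # same variance, computed directly
print(X.stats(moments="sk"))  # skewness and (excess) kurtosis
```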
  • Section 7 Testing Hypotheses About Parameters of Normal Distribution. T-Tests and F-Tests
    Section 7. Testing hypotheses about parameters of normal distribution. T-tests and F-tests. We will postpone a more systematic approach to hypothesis testing until the following lectures, and in this lecture we will describe in an ad hoc way T-tests and F-tests about the parameters of normal distribution, since they are based on very similar ideas to confidence intervals for parameters of normal distribution - the topic we have just covered. Suppose that we are given an i.i.d. sample from normal distribution N(µ, σ²) with some unknown parameters µ and σ². We will need to decide between two hypotheses about these unknown parameters - null hypothesis H0 and alternative hypothesis H1. Hypotheses H0 and H1 will be one of the following pairs: H0: µ = µ0 vs. H1: µ ≠ µ0; H0: µ ≥ µ0 vs. H1: µ < µ0; H0: µ ≤ µ0 vs. H1: µ > µ0, where µ0 is a given 'hypothesized' parameter. We will also consider similar hypotheses about parameter σ². We want to construct a decision rule δ: X^n → {H0, H1} that, given an i.i.d. sample (X1, ..., Xn) ∈ X^n, either accepts H0 or rejects H0 (accepts H1). Null hypothesis is usually a 'main' hypothesis in a sense that it is expected or presumed to be true, and we need a lot of evidence to the contrary to reject it. To quantify this, we pick a parameter α ∈ [0, 1], called level of significance, and make sure that the decision rule δ rejects H0 when it is actually true with probability at most α.
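As a concrete companion to the excerpt, here is a hedged sketch of a level-α decision rule for the two-sided pair H0: µ = µ0 vs. H1: µ ≠ µ0, using SciPy's one-sample t test; the simulated data and the choices α = 0.05, µ0 = 1 are invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
x = rng.normal(loc=1.2, scale=1.0, size=30)  # i.i.d. sample (simulated)
mu0, alpha = 1.0, 0.05                       # hypothesized mean, level of significance

t_stat, p_value = ttest_1samp(x, popmean=mu0)
decision = "reject H0" if p_value <= alpha else "accept H0"
print(t_stat, p_value, decision)
```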
  • 11. Parameter Estimation
    11. Parameter Estimation. Chris Piech and Mehran Sahami, May 2017. We have learned many different distributions for random variables and all of those distributions had parameters: the numbers that you provide as input when you define a random variable. So far when we were working with random variables, we either were explicitly told the values of the parameters, or, we could divine the values by understanding the process that was generating the random variables. What if we don't know the values of the parameters and we can't estimate them from our own expert knowledge? What if instead of knowing the random variables, we have a lot of examples of data generated with the same underlying distribution? In this chapter we are going to learn formal ways of estimating parameters from data. These ideas are critical for artificial intelligence. Almost all modern machine learning algorithms work like this: (1) specify a probabilistic model that has parameters. (2) Learn the value of those parameters from data. Parameters: Before we dive into parameter estimation, first let's revisit the concept of parameters. Given a model, the parameters are the numbers that yield the actual distribution. In the case of a Bernoulli random variable, the single parameter was the value p. In the case of a Uniform random variable, the parameters are the a and b values that define the min and max value. Here is a list of random variables and the corresponding parameters. From now on, we are going to use the notation θ to be a vector of all the parameters:

    Distribution     Parameters
    Bernoulli(p)     θ = p
    Poisson(λ)       θ = λ
    Uniform(a, b)    θ = (a, b)
    Normal(µ, σ²)    θ = (µ, σ²)
    Y = mX + b       θ = (m, b)

    In the real world often you don't know the "true" parameters, but you get to observe data.
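The chapter's premise, learning parameter values from data, can be previewed in a few lines. This sketch uses the natural sample-based estimates for two rows of the table above; the simulated datasets and "true" parameter values are assumptions for illustration, and the formal estimation methods come later in the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

bern = rng.binomial(1, 0.7, size=1000)  # Bernoulli(p) data, true p = 0.7
print(bern.mean())                      # estimate of theta = p

norm = rng.normal(2.0, 3.0, size=1000)  # Normal(mu, sigma^2), mu = 2, sigma = 3
print(norm.mean(), norm.var())          # estimate of theta = (mu, sigma^2)
```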
  • A Widely Applicable Bayesian Information Criterion
    Journal of Machine Learning Research 14 (2013) 867-897. Submitted 8/12; Revised 2/13; Published 3/13. A Widely Applicable Bayesian Information Criterion. Sumio Watanabe, [email protected], Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Mailbox G5-19, 4259 Nagatsuta, Midori-ku, Yokohama, Japan 226-8502. Editor: Manfred Opper. Abstract: A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and if its Fisher information matrix is always positive definite. If otherwise, it is called singular. In regular statistical models, the Bayes free energy, which is defined by the minus logarithm of Bayes marginal likelihood, can be asymptotically approximated by the Schwarz Bayes information criterion (BIC), whereas in singular models such approximation does not hold. Recently, it was proved that the Bayes free energy of a singular model is asymptotically given by a generalized formula using a birational invariant, the real log canonical threshold (RLCT), instead of half the number of parameters in BIC. Theoretical values of RLCTs in several statistical models are now being discovered based on algebraic geometrical methodology. However, it has been difficult to estimate the Bayes free energy using only training samples, because an RLCT depends on an unknown true distribution. In the present paper, we define a widely applicable Bayesian information criterion (WBIC) by the average log likelihood function over the posterior distribution with the inverse temperature 1/log n, where n is the number of training samples. We mathematically prove that WBIC has the same asymptotic expansion as the Bayes free energy, even if a statistical model is singular for or unrealizable by a true distribution.
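The abstract's recipe can be illustrated numerically: temper the posterior with inverse temperature β = 1/log n and average the log likelihood over it. The toy model below (normal with unknown mean, flat prior on a parameter grid) is our own choice for illustration, not Watanabe's; practical WBIC computations instead average over MCMC samples from the tempered posterior.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=1.0, size=200)  # data from the toy model
n = len(x)
beta = 1.0 / np.log(n)                        # Watanabe's inverse temperature

mu = np.linspace(-3.0, 5.0, 2001)             # parameter grid, flat prior
# log likelihood of the full sample at every grid point
loglik = np.array([np.sum(-0.5 * (x - m) ** 2 - 0.5 * np.log(2 * np.pi))
                   for m in mu])

log_post = beta * loglik                      # tempered posterior (up to a constant)
post = np.exp(log_post - log_post.max())
post /= post.sum()

wbic = -np.sum(post * loglik)                 # posterior-averaged negative log likelihood
print(wbic)                                   # estimate of the Bayes free energy
```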
  • Statistic: a Quantity That We Can Calculate from Sample Data That Summarizes a Characteristic of That Sample
    STAT 509 – Section 4.1 – Estimation. Parameter: A numerical characteristic of a population. Examples: Statistic: A quantity that we can calculate from sample data that summarizes a characteristic of that sample. Examples: Point Estimator: A statistic which is a single number meant to estimate a parameter. It would be nice if the average value of the estimator (over repeated sampling) equaled the target parameter. An estimator is called unbiased if the mean of its sampling distribution is equal to the parameter being estimated. Examples: Another nice property of an estimator: we want it to be as precise as possible. The standard deviation of a statistic's sampling distribution is called the standard error of the statistic. The standard error of the sample mean Ȳ is σ/√n. Note: As the sample size gets larger, the spread of the sampling distribution gets smaller. When the sample size is large, the sample mean varies less across samples. Evaluating an estimator: (1) Is it unbiased? (2) Does it have a small standard error? Interval Estimates • With a point estimate, we used a single number to estimate a parameter. • We can also use a set of numbers to serve as "reasonable" estimates for the parameter. Example: Assume we have a sample of size n from a normally distributed population. We know: T = (Ȳ − µ)/(s/√n) has a t-distribution with n − 1 degrees of freedom. (Exactly true when data are normal, approximately true when data non-normal but n is large.) So: 1 − α = P(−t_{n−1,α/2} ≤ (Ȳ − µ)/(s/√n) ≤ t_{n−1,α/2}), which rearranges to the interval Ȳ ± t_{n−1,α/2} · s/√n, where t_{n−1,α/2} = the t-value with α/2 area to the right (can be found from Table 2). This formula is called a "confidence interval" for µ.
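The interval at the end of the excerpt, Ȳ ± t_{n−1,α/2} · s/√n, is straightforward to compute; the sketch below looks up t_{n−1,α/2} with SciPy instead of Table 2, on a made-up data vector.

```python
import numpy as np
from scipy import stats

y = np.array([12.1, 11.8, 12.6, 12.3, 11.9, 12.4, 12.0, 12.2])  # made-up sample
n, alpha = len(y), 0.05
ybar, s = y.mean(), y.std(ddof=1)

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # t-value with alpha/2 area to the right
half = t_crit * s / np.sqrt(n)
print(ybar - half, ybar + half)                # 95% confidence interval for mu
```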
  • THE ONE-SAMPLE Z TEST
    10 THE ONE-SAMPLE z TEST. Only the Lonely. Difficulty Scale ☺ ☺ ☺ (not too hard; this is the first chapter of this kind, but you know more than enough to master it)
    WHAT YOU WILL LEARN IN THIS CHAPTER
    • Deciding when the z test for one sample is appropriate to use
    • Computing the observed z value
    • Interpreting the z value
    • Understanding what the z value means
    • Understanding what effect size is and how to interpret it
    INTRODUCTION TO THE ONE-SAMPLE z TEST
    Lack of sleep can cause all kinds of problems, from grouchiness to fatigue and, in rare cases, even death. So, you can imagine health care professionals' interest in seeing that their patients get enough sleep. This is especially the case for patients who are ill and have a real need for the healing and rejuvenating qualities that sleep brings. Dr. Joseph Cappelleri and his colleagues looked at the sleep difficulties of patients with a particular illness, fibromyalgia, to evaluate the usefulness of the Medical Outcomes Study (MOS) Sleep Scale as a measure of sleep problems. Although other analyses were completed, including one that compared a treatment group and a control group with one another, the important analysis (for our discussion) was the comparison of participants' MOS scores with national MOS norms. Such a comparison between a sample's mean score (the MOS score for participants in this study) and a population's mean score (the norms) necessitates the use of a one-sample z test.
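The z test the chapter introduces compares a sample mean against a population mean with known standard deviation. The sketch below computes the observed z value and a two-sided p value; all numbers are invented, and the MOS sleep data from the study are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

xbar, mu, sigma, n = 42.0, 40.0, 8.0, 50  # sample mean, population mean/sd, sample size
z = (xbar - mu) / (sigma / np.sqrt(n))    # observed z value
p = 2 * (1 - norm.cdf(abs(z)))            # two-sided p value
print(z, p)                               # z ≈ 1.77, p ≈ 0.077
```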
  • Chapter 9 Sampling Distributions
    Chapter 9 Sampling distributions
    Parameter – a number that describes the population. A parameter is a fixed number, but in reality we do not know its value because we cannot examine the entire population.
    Statistic – a number that describes a sample. Use a statistic to estimate an unknown parameter. µ = mean of the population; x̄ = mean of the sample.
    Sampling variability – the differences in each sample mean.
    p̂ – the proportion of the sample. 160 out of 515 people believe in ghosts: p̂ = 160/515 = .31. .31 is a statistic; the proportion of all US adults is a parameter.
    Sampling variability: take a large number of samples from the same population, calculate x̄ or p̂ for each sample, make a histogram of x̄ or p̂, and examine the distribution displayed in the histogram for shape, center, and spread, as well as outliers and other deviations. Use simulations for multiple samples – much cheaper than using actual samples.
    The sampling distribution of a statistic is the distribution of values taken by the statistic in all possible samples of the same size from the same population.
    Describing sample distributions: Ex 9.5 page 494-495, 496.
    Bias of a statistic: Unbiased statistic – a statistic used to estimate a parameter is unbiased if the mean of its sampling distribution is equal to the true value of the parameter being estimated. Ex 9.6 page 498.
    The variability of a statistic is described by the spread of its sampling distribution. The spread is determined by the sampling design and the size of the sample. Larger samples give smaller spreads!!
    Homework: read pages 488-503, do problems.
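The "take many samples and histogram the statistic" procedure in the notes is easy to simulate. The sketch below draws 10,000 samples of size 515 with an assumed true proportion p = 0.31 and summarizes the sampling distribution of p̂: its mean sits near p (unbiasedness) and its spread shrinks when n grows.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 0.31, 515                             # parameter and sample size
p_hat = rng.binomial(n, p, size=10_000) / n  # 10,000 simulated sample proportions

print(p_hat.mean())  # close to p: p-hat is unbiased
print(p_hat.std())   # spread of the sampling distribution
print((rng.binomial(4 * n, p, size=10_000) / (4 * n)).std())  # larger n, smaller spread
```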
  • Skewness Vs. Kurtosis: Implications for Pricing and Hedging Options
    Skewness vs. Kurtosis: Implications for Pricing and Hedging Options. Sol Kim, College of Business, Hankuk University of Foreign Studies, 270, Imun-dong, Dongdaemun-Gu, Seoul, Korea. Tel: +82-2-2173-3124, Fax: +82-2-959-4645, E-mail: [email protected]. Abstract: For S&P 500 options, we examine the relative influence of the skewness and kurtosis of the risk-neutral distribution on pricing and hedging performances. Both the nonparametric method suggested by Bakshi, Kapadia and Madan (2003) and the parametric method suggested by Corrado and Su (1996) are used to estimate the risk-neutral skewness and kurtosis. We find that skewness exerts a greater impact on pricing and hedging errors than kurtosis does. The option pricing model that considers skewness shows better performance for pricing and hedging the options than does the model that considers kurtosis. All the results are statistically significant and robust to all sub-periods, which confirms that the risk-neutral skewness is a more important factor than the risk-neutral kurtosis for pricing and hedging stock index options. JEL classification: G13. Keywords: Volatility Smiles; Options Pricing; Risk-neutral Distribution; Skewness; Kurtosis
  • Chapter 4 Parameter Estimation
    Chapter 4 Parameter Estimation. Thus far we have concerned ourselves primarily with probability theory: what events may occur with what probabilities, given a model family and choices for the parameters. This is useful only in the case where we know the precise model family and parameter values for the situation of interest. But this is the exception, not the rule, for both scientific inquiry and human learning & inference. Most of the time, we are in the situation of processing data whose generative source we are uncertain about. In Chapter 2 we briefly covered elementary density estimation, using relative-frequency estimation, histograms and kernel density estimation. In this chapter we delve more deeply into the theory of probability density estimation, focusing on inference within parametric families of probability distributions (see discussion in Section 2.11.2). We start with some important properties of estimators, then turn to basic frequentist parameter estimation (maximum-likelihood estimation and corrections for bias), and finally basic Bayesian parameter estimation. 4.1 Introduction. Consider the situation of the first exposure of a native speaker of American English to an English variety with which she has no experience (e.g., Singaporean English), and the problem of inferring the probability of use of active versus passive voice in this variety with a simple transitive verb such as hit: (1) The ball hit the window. (Active) (2) The window was hit by the ball. (Passive) There is ample evidence that this probability is contingent
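Relative-frequency estimation, mentioned in the excerpt, fits in a few lines; the voice counts below are invented for illustration, not corpus data.

```python
# hypothetical counts of voice for a transitive verb like "hit"
counts = {"active": 8, "passive": 2}
n = sum(counts.values())
p_active = counts["active"] / n  # relative-frequency (maximum-likelihood) estimate
print(p_active)                  # 0.8
```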
  • Ch. 2 Estimators
    Chapter Two: Estimators. 2.1 Introduction. Properties of estimators are divided into two categories: small sample and large (or infinite) sample. These properties are defined below, along with comments and criticisms. Four estimators are presented as examples to compare and determine if there is a "best" estimator. 2.2 Finite Sample Properties. The first property deals with the mean location of the distribution of the estimator. P.1 Biasedness – The bias of an estimator is defined as: Bias(θ̂) = E(θ̂) − θ, where θ̂ is an estimator of θ, an unknown population parameter. If E(θ̂) = θ, then the estimator is unbiased. If E(θ̂) ≠ θ, then the estimator has either a positive or negative bias. That is, on average the estimator tends to over (or under) estimate the population parameter. A second property deals with the variance of the distribution of the estimator. Efficiency is a property usually reserved for unbiased estimators. P.2 Efficiency – Let θ̂1 and θ̂2 be unbiased estimators of θ with equal sample sizes. Then, θ̂1 is a more efficient estimator than θ̂2 if var(θ̂1) < var(θ̂2). Restricting the definition of efficiency to unbiased estimators excludes biased estimators with smaller variances. For example, an estimator that always equals a single number (or a constant) has a variance equal to zero. This type of estimator could have a very large bias, but will always have the smallest variance possible. Similarly, an estimator that multiplies the sample mean by [n/(n+1)] will underestimate the population mean but have a smaller variance.
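The trade-off in the excerpt's last example, an estimator that shrinks the sample mean by n/(n+1) and so trades bias for variance, can be checked by simulation; the population parameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, n, reps = 5.0, 20, 100_000
x = rng.normal(mu, 2.0, size=(reps, n))

m1 = x.mean(axis=1)      # the unbiased sample mean
m2 = (n / (n + 1)) * m1  # shrunk estimator: biased, smaller variance

print(m1.mean() - mu, m2.mean() - mu)  # bias: ~0 vs. about -mu/(n+1)
print(m1.var(), m2.var())              # variance: m2 is smaller
```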
  • Estimators, Bias and Variance
    Deep Learning (Srihari). Machine Learning Basics: Estimators, Bias and Variance. Sargur N. Srihari, [email protected]. This is part of lecture slides on Deep Learning: http://www.cedar.buffalo.edu/~srihari/CSE676
    Topics in Basics of ML: 1. Learning Algorithms 2. Capacity, Overfitting and Underfitting 3. Hyperparameters and Validation Sets 4. Estimators, Bias and Variance 5. Maximum Likelihood Estimation 6. Bayesian Statistics 7. Supervised Learning Algorithms 8. Unsupervised Learning Algorithms 9. Stochastic Gradient Descent 10. Building a Machine Learning Algorithm 11. Challenges Motivating Deep Learning
    Topics in Estimators, Bias, Variance: 0. Statistical tools useful for generalization 1. Point estimation 2. Bias 3. Variance and Standard Error 4. Bias-Variance tradeoff to minimize MSE 5. Consistency
    Statistics provides tools for ML: The field of statistics provides many tools to achieve the ML goal of solving a task not only on the training set but also to generalize. Foundational concepts such as parameter estimation, bias, and variance characterize notions of generalization, over- and under-fitting.
    Point Estimation: Point estimation is the attempt to provide the single best prediction of some quantity of interest. The quantity of interest can be: a single parameter, a vector of parameters (e.g., weights in linear regression), or a whole function.
    Point estimator or Statistic: To distinguish estimates of parameters
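Topic 4's bias-variance view of an estimator can be verified numerically via the decomposition MSE = bias² + variance; the sketch below checks it for the maximum-likelihood (divide-by-n) estimator of a normal variance, with invented population parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2, n, reps = 4.0, 10, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

est = x.var(axis=1, ddof=0)      # ML variance estimator (divides by n)
bias = est.mean() - sigma2       # negative: E[est] = (n-1)/n * sigma2
mse = ((est - sigma2) ** 2).mean()
print(mse, bias**2 + est.var())  # the two sides of MSE = bias^2 + variance agree
```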
  • Population and Sample Practice
    Population and Sample Practice
    1. For each statement, identify whether the numbers underlined are statistics or parameters.
       a. Of all U.S. kindergarten teachers, 32% say that knowing the alphabet is an essential skill.
       b. Of the 800 U.S. kindergarten teachers polled, 34% say that knowing the alphabet is an essential skill.
    2. Of the U.S. adult population, 36% has an allergy. A sample of 1200 randomly selected adults resulted in 33.2% reporting an allergy.
       a. Who is the population?
       b. What is the sample?
       c. Identify the statistic and give its value.
       d. Identify the parameter and give its value.
    3. In your own words, explain why the parameter is fixed and the statistic varies (a simulation illustrating this appears after the list).
    4. Select 90 students currently enrolled at NCSU and ask how many years they've attended the university, how old they are, and if they live on campus.
       a. What is the population?
       b. What is the sample?
    5. Suppose a 12 year old asked you to explain the difference between a sample and a population; how would you explain it to him/her? How might you explain why you would want to take a sample, rather than surveying every member of the population?
    6. In your own words, explain the difference between a statistic and a parameter.
    7. A study reveals that there are exactly 100 Senators in the 109th Congress of the United States, and 55% of them are Republicans.
       a. Do the data comprise a sample or a population?
       b. Do the results represent a statistic or a parameter?
    8.
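A short simulation in the spirit of question 3: the population mean is one fixed number, while the sample mean changes from sample to sample. The synthetic population below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
population = rng.normal(50.0, 10.0, size=100_000)  # synthetic population
print(population.mean())                           # the parameter: one fixed number

for _ in range(3):
    sample = rng.choice(population, size=90, replace=False)
    print(sample.mean())                           # the statistic: varies by sample
```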