Estimation Steps Point Estimators Statistics Point Estimates


Introduction to Estimation: Point & Interval Estimation (and Introduction to Testing)

Basic definitions and concepts

- The assignment of value(s) to a population parameter based on a value of the corresponding sample statistic is called estimation.
- The value(s) assigned to a population parameter based on the value of a sample statistic is called an estimate.
- The sample statistic used to estimate a population parameter is called an estimator.

Estimation steps

The estimation procedure involves the following steps:

1. Select a sample.
2. Collect the required information from the members of the sample.
3. Calculate the value of the sample statistic.
4. Assign value(s) to the corresponding population parameter.

Point estimators

The value of a sample statistic that is used to estimate a population parameter is called a point estimate. Usually, whenever we use point estimation, we also calculate the margin of error associated with that point estimate:

    Margin of error = ±1.96 σ_x̄   (σ known)   or   ±1.96 s_x̄   (σ estimated)

A point estimate is based on just one sample, so we cannot expect it to be equal to the corresponding population parameter. Indeed, each sample will have a different x̄, and none of them is equal to μ. But they are all unbiased estimates of μ. (Recall that unbiased means their expected value is equal to μ.)

Statistics

- A statistic is a property of a sample from the population.
- A statistic is defined to be any function of random variables, so it is itself a random variable. Examples: the sample mean, the sample variance, or a particular sample quantile.
- The observed value of the statistic can be calculated from the observed data values of the random variables.

Parameters

- In statistical inference, the term parameter denotes a quantity, say θ, that is a property of an unknown probability distribution.
- For example, the mean, variance, or a particular quantile of the probability distribution.
- Parameters are unknown, and one of the goals of statistical inference is to estimate them.
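The four estimation steps above can be sketched in code. This is a minimal illustration with simulated data; the population mean and standard deviation here are assumed values chosen only for the demo:

```python
import math
import random

random.seed(42)

# Steps 1-2: select a sample and collect the data
# (simulated here from an assumed population with mean 100 and sigma 15)
population_mean, sigma = 100, 15
sample = [random.gauss(population_mean, sigma) for _ in range(36)]

# Step 3: calculate the value of the sample statistic
n = len(sample)
x_bar = sum(sample) / n

# Step 4: assign the value to the population parameter, together with a
# margin of error (sigma known, so use 1.96 * sigma / sqrt(n))
se = sigma / math.sqrt(n)
margin = 1.96 * se

print(f"point estimate of the mean: {x_bar:.2f}")
print(f"margin of error: +/-{margin:.2f}")
```

With n = 36 and σ = 15, the margin of error is 1.96 × 15/6 = ±4.9, regardless of which sample is drawn; only the point estimate x̄ varies from sample to sample.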
Estimation

- A procedure of "guessing" properties of the population from which the data are collected.
- A point estimate of an unknown parameter is a statistic that represents a "guess" of the parameter of interest.
- There may be more than one sensible point estimate of a parameter.

Examples of statistics:

    sample mean:      x̄ = (X₁ + X₂ + … + Xₙ)/n
    sample variance:  S² = Σᵢ₌₁ⁿ (Xᵢ − X̄)² / (n − 1)

The relationship between an unknown parameter θ and its point estimator θ̂

Properties of estimators that we desire

- Unbiasedness: E(θ̂) = θ. In other words, we wish the expected value of the estimator to equal its true value. We define the bias of an estimator as the difference between the expected value of the estimator and the true value in the population: Bias(θ̂) = E(θ̂) − θ.
- Efficiency: we wish to minimize the mean squared error around the true value. Efficiency tells us how well the estimator performs in predicting θ. Among unbiased estimators, therefore, we want the one with the smallest variance.
- Consistency: as the sample size increases, the variation of the estimator from the true population value decreases.

(Figures: sampling distributions of the median and the mean, illustrating unbiased vs. biased estimators and efficiency; sampling distributions for a larger vs. a smaller sample size, illustrating consistency; general approach to interval estimation.)

Interval estimation

Definition: In interval estimation, an interval is constructed around the point estimate, and it is stated that this interval is likely to contain the corresponding population parameter.

Confidence interval estimation: procedure

1. Compute the sample point estimator (x̄ or p̂).
2. Choose the confidence level and read the table value (Z, or t with n − 1 df).
3. Compute the upper and lower confidence limits (UCL and LCL): point estimator ± margin of error.

For example, a point estimate x̄ = $1370 might yield the interval $1130 to $1610.

Each interval is constructed with regard to a given confidence level and is called a confidence interval. The confidence level associated with a confidence interval states how much confidence we have that this interval contains the true population parameter. The confidence level is denoted by (1 − α)100%.
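The unbiasedness property, and the reason the sample variance divides by n − 1 rather than n, can be checked by simulation. The sketch below repeatedly draws small samples from an assumed normal population (μ = 50, σ = 10, chosen only for the demo) and averages the estimators over many replications:

```python
import random

random.seed(0)
mu, sigma = 50.0, 10.0   # assumed "true" population values for the demo
n, reps = 5, 200_000

sum_means = 0.0
sum_var_n = 0.0      # variance with divisor n (biased)
sum_var_nm1 = 0.0    # variance with divisor n - 1 (unbiased)

for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    sum_means += m
    sum_var_n += ss / n
    sum_var_nm1 += ss / (n - 1)

print(f"E[x_bar]    ~ {sum_means / reps:.2f}   (true mean {mu})")
print(f"E[SS/n]     ~ {sum_var_n / reps:.1f}   (biased low; true variance {sigma**2})")
print(f"E[SS/(n-1)] ~ {sum_var_nm1 / reps:.1f}  (close to the true variance)")
```

The divisor-n version systematically underestimates σ² (its expectation is (n − 1)/n · σ², here about 80 instead of 100), while the divisor-(n − 1) version is unbiased. This is exactly the E(θ̂) = θ criterion above.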
The(1– )100%confidenceintervalfor- (populationmean)is: x z x if is known and x zsx if is not known, where x / n and sx s/ n Thevalueofz usedherecanbefoundfromthestandardnormal distributiontable,forthegivenconfidencelevel. Themaximumerrorofestimatefor),denotedbyE,isthequantity thatis subtracted from and added to the value ofx to obtain a confidenceintervalfor).Thus, E z x or zsx 16 17 5 IntervalEstimationofthePopulationMean IntervalEstimationofthePopulationMeanwhen is when5 is known:Example known:AnswerstotheExample x Apublishingcompanyhasjustpublishedanewcollegetextbook. Herewetakeadvantageofourknowledgeondistributionofto Beforethecomppyanydecidesthe priceatwhichtosellthis developaconfidenceintervalfor. 4.50 textbook,itwantstoknowtheaveragepriceofallsuch a) n=36,x =$70.50,and =$4.5,thus: $.75 x n textbooksinthemarket.Theresearchdepartmentatthe Pointestimateof) =x =$70.50 36 company tooka sample of 36 comparable textbooksand Marginof error= 1.96 x 1.96(.75) $1.47 collectedinformationontheirprices.Thisinformationproduces b) Confidencelevelis90%or.90;andz =1.65. ameanpriceof$70.50forthissample.Itisknownthatthe x z standddiidarddeviationo fhfthepr iceso fllhfallsuchtext boo ki$450ksis$4.50. x 70.50 1.65(.75) 70.50 1.24 (a) Whatisthepointestimateofthemeanpriceofallsuch (70.50 -1.24) to (70.50 1.24) textbooks? Whatis the margin of errorfor the estimate? $69.26 to $71.74 (b) Constructa90%confidenceintervalforthemeanpriceofall Basedonourresults,wecansaythatweare90%confidentthat suchcollegetextbooks. themean price of all such college textbooks is between $69.26and$71.74. 18 19 Example 1: 1. Interval Estimation for Population Mean Answer: Example 1: ( known case) • n = 49 In an effort to estimate the mean amount spent per customer for dinner at a major Atlanta restaurant. Data were collected X $24.8 for a sample of 49 customers over a three-week period. Assume a population standard deviation of $5. =$5 a. At the 95% confidence, what is the margin error? 
Example 1: interval estimation for a population mean (σ known case)

In an effort to estimate the mean amount spent per customer for dinner at a major Atlanta restaurant, data were collected for a sample of 49 customers over a three-week period. Assume a population standard deviation of $5.

a. At the 95% confidence level, what is the margin of error?
b. If the sample mean is $24.80, what is the 95% confidence interval for the population mean?

Answer:

- n = 49, x̄ = $24.80, σ = $5
- Z: (1 − α)/2 = 0.95/2 = 0.475, so from Table 1: Z = 1.96
- 1. Margin of error: Z_{α/2} σ/√n = 1.96 × 5/√49 = 1.4
- 2. UCL = x̄ + Z_{α/2} σ/√n = 24.8 + 1.4 = 26.2
     LCL = x̄ − Z_{α/2} σ/√n = 24.8 − 1.4 = 23.4

The 95% confidence interval is [23.4, 26.2].

Interval estimation of a population mean: the case of σ unknown

Instead of the population standard deviation σ, we have the sample standard deviation s. Instead of the normal distribution, we use the t distribution. The t distribution is used to construct a confidence interval about μ if:

1. The population from which the sample is drawn is (approximately) normally distributed;
2. The sample size is small (that is, n < 30);
3. The population standard deviation, σ, is not known.

The t distribution

The t distribution is a specific type of bell-shaped distribution with a lower height and a wider spread than the standard normal distribution. As the sample size becomes larger, the t distribution approaches the standard normal distribution. A specific t distribution depends on only one parameter, called the degrees of freedom (df). The mean of the t distribution is equal to 0, and its standard deviation is √(df/(df − 2)). For example, the standard deviation of the standard normal distribution is 1.0, while the standard deviation of the t distribution with df = 9 is √(9/(9 − 2)) = 1.134. (The graph on the slide depicts the case of df = 3.)

The t distribution: example

Find the value of t for 16 degrees of freedom and .05 area in the right tail of a t distribution curve.

Area in the right tail under the t distribution curve:

    df     .10      .05      .025      …    .001
    1      3.078    6.314    12.706    …    318.309
    2      1.886    2.920    4.303     …    22.327
    3      1.638    2.353    3.182     …    10.215
    …      …        …        …         …    …
    16     1.337    1.746    2.120     …    3.686
    …      …        …        …         …    …

The required value of t for 16 df and .05 area in the right tail is 1.746.

(Figure: the t distribution with 16 degrees of freedom, showing the areas under the right and left tails.)
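Example 1 can be checked directly; this is the same z-interval arithmetic, written out step by step with the numbers from the problem:

```python
import math

# Example 1 (sigma known): n = 49, x_bar = 24.80, sigma = 5, 95% level
n, x_bar, sigma, z = 49, 24.80, 5.0, 1.96

margin = z * sigma / math.sqrt(n)   # 1.96 * 5/7 = 1.4
ucl = x_bar + margin                # upper confidence limit
lcl = x_bar - margin                # lower confidence limit

print(f"margin of error: {margin:.1f}")
print(f"95% CI: [{lcl:.1f}, {ucl:.1f}]")
```

This prints a margin of error of 1.4 and the interval [23.4, 26.2], matching the worked answer.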
Confidence interval for a population mean using the t distribution

The (1 − α)100% confidence interval for μ is

    x̄ ± t s_x̄,   where s_x̄ = s/√n.

The value of t is obtained from the t distribution table for n − 1 degrees of freedom and the given confidence level. (For example, with 16 df and .05 in each tail, the critical values are ±1.746.)

Confidence interval for a population mean using the t distribution: example

Dr. Moore wanted to estimate the mean cholesterol level for all adult men living in Hartford. He took a sample of 25 adult men from Hartford and found that the mean cholesterol level for this sample is 186, with a standard deviation of 12. Assume that the cholesterol levels for all adult men in Hartford are (approximately) normally distributed. Construct a 95% confidence interval for the population mean μ.

Answer: The confidence level is 95%, or .95, with df = n − 1 = 25 − 1 = 24. The area in each tail = .5 − (.95/2) = .025. The value of t in the right tail is 2.064, and s_x̄ = s/√n = 12/√25 = 2.40.

    x̄ ± t s_x̄ = 186 ± 2.064(2.40) = 186 ± 4.95 = 181.05 to 190.95

Thus, we can state with 95% confidence that the mean cholesterol level for all adult men living in Hartford lies between 181.05 and 190.95.

Example 2: (σ unknown case)

The mean flying time for pilots at Continental Airlines is 49 hours per month. This mean was based on a sample of 100 pilots, and the sample standard deviation was 8.5 hours.

a. At 95% confidence, what is the margin of error?
b. What is the 95% confidence interval estimate of the population mean flying time?
c. The mean flying time for pilots at United Airlines …

Given: n = 100, x̄ = 49, S = 8.5, 1 − α = .95
Think: what to estimate? Use Z or t?

Answer:

- Sample info (given): n = 100, x̄ = 49, S = 8.5
- t: 1 − α = 0.95, so α/2 = 0.025, d.f. = n − 1 = 99
- Table 2: d.f. = 100, α/2 = 0.025 → t = 1.984; d.f. = 80, α/2 = 0.025 → t = 1.990
- Interpolation: t = 1.984 + ((100 − 99)/(100 − 80))(1.990 − 1.984) = 1.9843

a. Margin of error: m.o.e. = t_{α/2} S/√n = 1.9843 × 8.5/√100 = 1.69
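The cholesterol example follows the same pattern with the t table supplying the critical value. `t_interval` is a hypothetical helper name for this sketch; the value 2.064 is read from the table for 24 df and a .025 right-tail area, exactly as in the worked answer:

```python
import math

def t_interval(x_bar, s, n, t_crit):
    """CI for the mean when sigma is unknown: x_bar +/- t * s/sqrt(n).
    t_crit comes from a t table for n - 1 degrees of freedom."""
    se = s / math.sqrt(n)           # s_x_bar
    margin = t_crit * se
    return x_bar - margin, x_bar + margin

# Cholesterol example: n = 25, x_bar = 186, s = 12; t(24 df, .025 tail) = 2.064
low, high = t_interval(186, 12, 25, 2.064)
print(f"95% CI: {low:.2f} to {high:.2f}")   # 181.05 to 190.95
```

The same helper reproduces Example 2's margin of error: `t_interval(49, 8.5, 100, 1.9843)` uses the interpolated critical value and gives 49 ± 1.69.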