MAXIMUM LIKELIHOOD ESTIMATION


Maximum likelihood is by far the most popular general method of estimation. Its widespread acceptance is seen on the one hand in the very large body of research dealing with its theoretical properties, and on the other in the almost unlimited list of applications.

To give a reasonably general definition of maximum likelihood estimates, let X = (X₁, ..., Xₙ) be a random vector of observations whose joint distribution is described by a density fₙ(x|θ) over the n-dimensional Euclidean space Rⁿ. The unknown parameter vector θ is contained in the parameter space Θ ⊂ Rˢ. For fixed x, define the likelihood function of x as L(θ) = Lₓ(θ) = fₙ(x|θ), considered as a function of θ.

Definition 1. Any θ̂ = θ̂(x) ∈ Θ which maximizes L(θ) over Θ is called a maximum likelihood estimate (MLE) of the unknown true parameter θ.

Often it is computationally advantageous to derive MLEs by maximizing log L(θ) in place of L(θ).

Example 1. Let X be the number of successes in n independent Bernoulli trials with success probability p ∈ [0, 1]; then

    Lₓ(p) = f(x|p) = P(X = x|p) = (n choose x) p^x (1 − p)^(n−x),  x = 0, 1, ..., n.

Solving

    (∂/∂p) log Lₓ(p) = x/p − (n − x)/(1 − p) = 0

for p, one finds that log Lₓ(p), and hence Lₓ(p), has a maximum at

    p̂ = p̂(x) = x/n.

This example illustrates the considerable intuitive appeal of the MLE as that value of p for which the probability of the observed value x is the largest.

It should be pointed out that MLEs do not always exist, as illustrated in the following natural mixture example; see Kiefer and Wolfowitz [32].

Example 2. Let X₁, ..., Xₙ be independent and identically distributed (i.i.d.) with density

    f(x|µ, ν, σ, τ, p) = (p / (√(2π) σ)) exp(−½((x − µ)/σ)²) + ((1 − p) / (√(2π) τ)) exp(−½((x − ν)/τ)²),

where 0 ≤ p ≤ 1, µ, ν ∈ R, and σ, τ > 0. The likelihood function of the observed sample x₁, ..., xₙ, although finite for any permissible choice of the five parameters, approaches infinity as, for example, µ = x₁, p > 0, and σ → 0. Thus the MLEs of the five unknown parameters do not exist.

Further, if an MLE exists, it is not necessarily unique, as is illustrated in the following example.

Example 3. Let X₁, ..., Xₙ be i.i.d. with density f(x|α) = ½ exp(−|x − α|). Maximizing fₙ(x₁, ..., xₙ|α) is equivalent to minimizing Σᵢ |xᵢ − α| over α. For n = 2m one finds that any α̂ ∈ [x_(m), x_(m+1)] serves as MLE of α, where x_(i) is the ith order statistic of the sample.
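These examples are easy to check numerically. The following sketch (ours, not part of the original article; it assumes NumPy, and the function names are our own) recovers p̂ = x/n for Example 1 by maximizing the Bernoulli log-likelihood over a grid, and evaluates the Laplace log-likelihood of Example 3 at several points of [x_(m), x_(m+1)] to exhibit the flat maximum.

```python
import numpy as np

# Example 1: Bernoulli log-likelihood for observed success count x out of n
# (the binomial coefficient is constant in p, so it can be dropped).
def bernoulli_loglik(p, x, n):
    return x * np.log(p) + (n - x) * np.log(1.0 - p)

n, x = 20, 7
grid = np.linspace(0.001, 0.999, 9999)
p_hat = grid[np.argmax(bernoulli_loglik(grid, x, n))]
print(p_hat, x / n)                  # both approximately 0.35

# Example 3: Laplace (double-exponential) log-likelihood,
# log f_n = -n log 2 - sum_i |x_i - alpha|.
def laplace_loglik(alpha, sample):
    return -len(sample) * np.log(2.0) - np.abs(sample - alpha).sum()

rng = np.random.default_rng(0)
sample = np.sort(rng.laplace(loc=1.0, size=6))   # n = 2m with m = 3
x_m, x_m1 = sample[2], sample[3]                 # x_(m) and x_(m+1)
for a in np.linspace(x_m, x_m1, 5):
    print(a, laplace_loglik(a, sample))          # identical log-likelihood values
```

Any point of the printed interval attains the same log-likelihood, so every such point is an MLE of α.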
The method of maximum likelihood estimation is generally credited to Fisher∗ [17–20], although its roots date back as far as Lambert∗, Daniel Bernoulli∗, and Lagrange in the eighteenth century; see Edwards [12] for a historical account. Fisher introduces the method in [17] as an alternative to the method of moments∗ and the method of least squares∗. The former method Fisher criticizes for its arbitrariness in the choice of moment equations and the latter for not being invariant under scale changes in the variables. The term likelihood∗, as distinguished from (inverse) probability, appears for the first time in [18]. Introducing the measure of information named after him (see FISHER INFORMATION), Fisher [18–20] offers several proofs for the efficiency of MLEs, namely that the asymptotic variance of asymptotically normal estimates cannot fall below the reciprocal of the information contained in the sample and, furthermore, that the MLE achieves this lower bound. Fisher's proofs, obscured by the fact that assumptions are not always clearly stated, cannot be considered completely rigorous by today's standards and should be understood in the context of his time. To some extent his work on maximum likelihood estimation was anticipated by Edgeworth [11], whose contributions are discussed by Savage [51] and Pratt [45]. However, it was Fisher's insight and advocacy that led to the prominence of maximum likelihood estimation as we know it today.

For a discussion and an extension of Definition 1 to richer (nonparametric) statistical models which preclude a model description through densities (i.e., likelihoods will be missing), see Scholz [52]. At times the primary concern is the estimation of some function g of θ. It is then customary to treat g(θ̂) as an "MLE" of g(θ), although, strictly speaking, Definition 1 only justifies this when g is a one-to-one function. For arguments toward a general justification of g(θ̂) as MLE of g(θ), see Zehna [58] and Berk [7].

CONSISTENCY

Much of maximum likelihood theory deals with the large sample (asymptotic) properties of MLEs, i.e., with the case in which it is assumed that X₁, ..., Xₙ are independent and identically distributed with density f(·|θ) (i.e., X₁, ..., Xₙ i.i.d. ∼ f(·|θ)). The joint density of X = (X₁, ..., Xₙ) is then fₙ(x|θ) = ∏ᵢ₌₁ⁿ f(xᵢ|θ). It further is assumed that the distributions P_θ corresponding to f(·|θ) are identifiable; i.e., θ ≠ θ′ with θ, θ′ ∈ Θ implies P_θ ≠ P_θ′. For future reference we state the following assumptions:

A0: X₁, ..., Xₙ i.i.d. ∼ f(·|θ), θ ∈ Θ;
A1: the distributions P_θ, θ ∈ Θ, are identifiable.

The following simple result further supports the intuitive appeal of the MLE; see Bahadur [3]:

Theorem 1. Under A0 and A1,

    P_θ[fₙ(X|θ) > fₙ(X|θ′)] → 1

as n → ∞ for any θ, θ′ ∈ Θ with θ ≠ θ′. If, in addition, Θ is finite, then the MLE θ̂ₙ exists and is consistent.

The content of Theorem 1 is a cornerstone in Wald's [56] consistency proof of the MLE for the general case. Wald assumes that Θ is compact, which by a familiar compactness argument reduces the problem to the case in which Θ contains only finitely many elements. Aside from the compactness assumption on Θ, which often is not satisfied in practice, Wald's uniform integrability conditions (imposed on log f(·|θ)) often are not satisfied in typical examples.

Many improvements in Wald's approach toward MLE consistency were made by later researchers. For a discussion and further references, see Perlman [42]. Instead of Wald's theorem or any of its refinements, we present another theorem, due to Rao [47], which shows under what simple conditions MLE consistency may be established in a certain specific situation.

Theorem 2. Let A0 and A1 be satisfied and let f(·|θ) describe a multinomial experiment with cell probabilities π(θ) = (π₁(θ), ..., πₖ(θ)). If the map θ → π(θ), θ ∈ Θ, has a continuous inverse (the inverse existing because of A1), then the MLE θ̂ₙ, if it exists, is a consistent estimator of θ.

For a counterexample to Theorem 2 when the inverse continuity assumption is not satisfied, see Kraft and LeCam [33].
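A concrete instance of Theorem 2 is the Hardy–Weinberg multinomial with cell probabilities π(θ) = (θ², 2θ(1 − θ), (1 − θ)²), θ ∈ (0, 1), for which the map θ → π(θ) has a continuous inverse (θ = √π₁) and the MLE has the closed form θ̂ₙ = (2N₁ + N₂)/(2n). The sketch below (our illustration, assuming NumPy; not from the article) simulates growing samples to exhibit the consistency.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.3
probs = [theta**2, 2 * theta * (1 - theta), (1 - theta) ** 2]  # Hardy-Weinberg cells

for n in (10, 100, 1000, 10000):
    n1, n2, n3 = rng.multinomial(n, probs)
    # Closed-form MLE: maximizes (2 n1 + n2) log(theta) + (n2 + 2 n3) log(1 - theta)
    theta_hat = (2 * n1 + n2) / (2 * n)
    print(n, theta_hat)   # approaches the true value 0.3 as n grows
```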
A completely different approach toward proving consistency of MLEs was given by Cramér [9]. His proof is based on a Taylor expansion of log L(θ) and thus, in contrast to Wald's proof, assumes a certain amount of smoothness in f(·|θ) as a function of θ. Cramér gave the consistency proof only for Θ ⊂ R. Presented here are his conditions generalized to the multiparameter case, Θ ⊂ Rˢ:

C1: The distributions P_θ have common support for all θ ∈ Θ; i.e., {x : f(x|θ) > 0} does not change with θ ∈ Θ.

C2: There exists an open subset ω of Θ containing the true parameter point θ₀ such that for almost all x the density f(x|θ) admits all third derivatives ∂³f(x|θ)/∂θ_i∂θ_j∂θ_k for all θ ∈ ω.

C3: E_θ[(∂/∂θ_j) log f(X|θ)] = 0, j = 1, ..., s, and

    I_jk(θ) := E_θ[(∂/∂θ_j) log f(X|θ) · (∂/∂θ_k) log f(X|θ)] = −E_θ[(∂²/∂θ_j∂θ_k) log f(X|θ)]

exist and are finite for j, k = 1, ..., s and all θ ∈ ω.

C4: The Fisher information matrix I(θ) = (I_jk(θ))_{j,k=1,...,s} is positive definite for all θ ∈ ω.

C5: There exist functions M_ijk(x), independent of θ, such that for all i, j, k = 1, ..., s,

    |(∂³/∂θ_i∂θ_j∂θ_k) log f(x|θ)| ≤ M_ijk(x)  for all θ ∈ ω,

where E_θ₀[M_ijk(X)] < ∞.

Theorem 3. Under A0, A1, and C1–C5 there exists, with probability tending to one as n → ∞, a sequence θ̃ₙ of roots of the likelihood equations

    (∂/∂θ_j) log L(θ) = 0,  j = 1, ..., s,

such that θ̃ₙ is a consistent estimator of θ₀.

For a proof see Lehmann [37, Sect. 6.4]. The theorem needs several comments for clarification:

(a) If the likelihood function L(θ) attains its maximum at an interior point of Θ, then the MLE is a solution to the likelihood equations. If in addition the likelihood equations have only one root, then Theorem 3 proves the consistency of the MLE (θ̂ₙ = θ̃ₙ).

(b) Theorem 3 does not state how to identify the consistent root among possibly many roots of the likelihood equations. One could take the root θ̃ₙ which is closest to θ₀, but then θ̃ₙ is no longer an estimator, since its construction assumes knowledge of the unknown value of θ₀. This problem may be overcome by taking that root which is closest to a (known) consistent estimator of θ₀; see the sketch below. The utility of this approach becomes clear in the section on efficiency.

(c) The MLE does not necessarily coincide with the consistent root guaranteed by Theorem 3. Kraft and LeCam [33] give an example in which Cramér's conditions are satisfied, the MLE exists, is unique, and satisfies the likelihood equations, yet is not consistent.

In view of these comments, it is advantageous to establish the uniqueness of the likelihood equation roots whenever possible. For example, if f(x|θ) is of nondegenerate multiparameter exponential family type, then the likelihood equations have at most one root.
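Comment (b) is standard practice in, for example, the Cauchy location model f(x|θ) = 1/(π(1 + (x − θ)²)), whose likelihood equation typically has multiple roots. The sketch below (our illustration, assuming NumPy; not from the article) runs Newton's method on the score function starting from the sample median, a known consistent estimator, so that the selected root inherits its consistency.

```python
import numpy as np

def score(theta, x):
    # d/dtheta of the Cauchy(theta, 1) log-likelihood: sum 2(x_i - theta)/(1 + (x_i - theta)^2)
    d = x - theta
    return np.sum(2 * d / (1 + d**2))

def score_deriv(theta, x):
    # derivative of the score with respect to theta
    d = x - theta
    return np.sum(2 * (d**2 - 1) / (1 + d**2) ** 2)

rng = np.random.default_rng(3)
x = rng.standard_cauchy(200) + 1.0   # sample with true theta = 1

theta = np.median(x)                 # consistent starting point
for _ in range(20):                  # Newton iterations on the likelihood equation
    theta -= score(theta, x) / score_deriv(theta, x)
print(theta)                         # a root of the likelihood equation near 1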
Recommended publications
  • 4.2 Variance and Covariance
4.2 Variance and Covariance. The most important measure of variability of a random variable X is obtained by letting g(X) = (X − µ)²; then E[g(X)] gives a measure of the variability of the distribution of X.

Definition 4.3. Let X be a random variable with probability distribution f(x) and mean µ. The variance of X is

    σ² = E[(X − µ)²] = Σₓ (x − µ)² f(x)           if X is discrete, and
    σ² = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx   if X is continuous.

The positive square root of the variance, σ, is called the standard deviation of X. The quantity x − µ is called the deviation of an observation x from its mean. Note that σ² ≥ 0. When the standard deviation of a random variable is small, we expect most of the values of X to be grouped around the mean. We often use standard deviations to compare two or more distributions that have the same unit of measurement.

Example 4.8 (page 97). Let the random variable X represent the number of automobiles that are used for official business purposes on any given workday. The probability distributions of X for two companies are given below:

    Company A:  x: 1, 2, 3          f(x): 0.3, 0.4, 0.3
    Company B:  x: 0, 1, 2, 3, 4    f(x): 0.2, 0.1, 0.3, 0.3, 0.1

Find the variances of X for the two companies. Solution: µ_A = (1)(0.3) + (2)(0.4) + (3)(0.3) = 2 and µ_B = 2;

    σ²_A = (1 − 2)²(0.3) + (2 − 2)²(0.4) + (3 − 2)²(0.3) = 0.6,
    σ²_B = Σ_{x=0}^{4} (x − 2)² f(x) = 1.6,

so σ²_B > σ²_A.

Theorem 4.2. The variance of a random variable X is σ² = E(X²) − µ².
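A quick programmatic check of Example 4.8 (our sketch, assuming NumPy; not part of the excerpt) computes each variance both from Definition 4.3 and via Theorem 4.2:

```python
import numpy as np

def variance(xs, fs):
    xs, fs = np.asarray(xs, float), np.asarray(fs, float)
    mu = np.sum(xs * fs)                     # mean
    var_def = np.sum((xs - mu) ** 2 * fs)    # Definition 4.3
    var_thm = np.sum(xs**2 * fs) - mu**2     # Theorem 4.2: E(X^2) - mu^2
    return mu, var_def, var_thm

print(variance([1, 2, 3], [0.3, 0.4, 0.3]))                   # mean 2.0, variance 0.6 both ways
print(variance([0, 1, 2, 3, 4], [0.2, 0.1, 0.3, 0.3, 0.1]))   # mean 2.0, variance 1.6 both ways
```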
  • On the Efficiency and Consistency of Likelihood Estimation in Multivariate Conditionally Heteroskedastic Dynamic Regression Models
On the efficiency and consistency of likelihood estimation in multivariate conditionally heteroskedastic dynamic regression models∗ Gabriele Fiorentini, Università di Firenze and RCEA, Viale Morgagni 59, I-50134 Firenze, Italy <fi[email protected]fi.it>; Enrique Sentana, CEMFI, Casado del Alisal 5, E-28014 Madrid, Spain <sentana@cemfi.es>. Revised: October 2010. Abstract: We rank the efficiency of several likelihood-based parametric and semiparametric estimators of conditional mean and variance parameters in multivariate dynamic models with potentially asymmetric and leptokurtic strong white noise innovations. We study the elliptical case in detail, and show that Gaussian pseudo maximum likelihood estimators are inefficient except under normality. We provide consistency conditions for distributionally misspecified maximum likelihood estimators, and show that they coincide with the partial adaptivity conditions of semiparametric procedures. We propose Hausman tests that compare Gaussian pseudo maximum likelihood estimators with more efficient but less robust competitors. Finally, we provide finite sample results through Monte Carlo simulations. Keywords: Adaptivity, Elliptical Distributions, Hausman tests, Semiparametric Estimators. JEL: C13, C14, C12, C51, C52. ∗We would like to thank Dante Amengual, Manuel Arellano, Nour Meddahi, Javier Mencía, Olivier Scaillet, Paolo Zaffaroni, participants at the European Meeting of the Econometric Society (Stockholm, 2003), the Symposium on Economic Analysis (Seville, 2003), the CAF Conference on Multivariate Modelling in Finance and Risk Management (Sandbjerg, 2006), the Second Italian Congress on Econometrics and Empirical Economics (Rimini, 2007), as well as audiences at AUEB, Bocconi, Cass Business School, CEMFI, CREST, EUI, Florence, NYU, RCEA, Roma La Sapienza and Queen Mary for useful comments and suggestions. Of course, the usual caveat applies.
  • Sampling Student's T Distribution – Use of the Inverse Cumulative Distribution Function
Sampling Student's T distribution – use of the inverse cumulative distribution function. William T. Shaw, Department of Mathematics, King's College, The Strand, London WC2R 2LS, UK. With the current interest in copula methods, and fat-tailed or other non-normal distributions, it is appropriate to investigate technologies for managing marginal distributions of interest. We explore "Student's" T distribution, survey its simulation, and present some new techniques for simulation. In particular, for a given real (not necessarily integer) value n of the number of degrees of freedom, we give a pair of power series approximations for the inverse, Fₙ⁻¹, of the cumulative distribution function (CDF), Fₙ. We also give some simple and very fast exact and iterative techniques for defining this function when n is an even integer, based on the observation that for such cases the calculation of Fₙ⁻¹ amounts to the solution of a reduced-form polynomial equation of degree n − 1. We also explain the use of Cornish–Fisher expansions to define the inverse CDF as the composition of the inverse CDF for the normal case with a simple polynomial map. The methods presented are well adapted for use with copula and quasi-Monte-Carlo techniques. 1. Introduction: There is much interest in many areas of financial modeling on the use of copulas to glue together marginal univariate distributions where there is no easy canonical multivariate distribution, or one wishes to have flexibility in the mechanism for combination. One of the more interesting marginal distributions is the "Student's" T distribution. This statistical distribution was published by W. Gosset in 1908.
  • The Smoothed Median and the Bootstrap
Biometrika (2001), 88, 2, pp. 519–534. © 2001 Biometrika Trust. Printed in Great Britain. The smoothed median and the bootstrap. By B. M. Brown, School of Mathematics, University of South Australia, Adelaide, South Australia 5005, Australia ([email protected]); Peter Hall, Centre for Mathematics & its Applications, Australian National University, Canberra, A.C.T. 0200, Australia ([email protected]); G. A. Young, Statistical Laboratory, University of Cambridge, Cambridge CB3 0WB, U.K. ([email protected]). Summary: Even in one dimension the sample median exhibits very poor performance when used in conjunction with the bootstrap. For example, both the percentile-t bootstrap and the calibrated percentile method fail to give second-order accuracy when applied to the median. The situation is generally similar for other rank-based methods, particularly in more than one dimension. Some of these problems can be overcome by smoothing, but that usually requires explicit choice of the smoothing parameter. In the present paper we suggest a new, implicitly smoothed version of the k-variate sample median, based on a particularly smooth objective function. Our procedure preserves many features of the conventional median, such as robustness and high efficiency, in fact higher than for the conventional median, in the case of normal data. It is however substantially more amenable to application of the bootstrap. Focusing on the univariate case, we demonstrate these properties both theoretically and numerically. Some key words: Bootstrap; Calibrated percentile method; Median; Percentile-t; Rank methods; Smoothed median. 1. Introduction: Rank methods are based on simple combinatorial ideas of permutations and sign changes, which are attractive in applications far removed from the assumptions of normal linear model theory.
  • A Note on Inference in a Bivariate Normal Distribution Model
A Note on Inference in a Bivariate Normal Distribution Model. Jaya Bishwal and Edsel A. Peña. Technical Report #2009-3, December 22, 2008. This material was based upon work partially supported by the National Science Foundation under Grant DMS-0635449 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Statistical and Applied Mathematical Sciences Institute, PO Box 14006, Research Triangle Park, NC 27709-4006, www.samsi.info. Abstract: Suppose observations are available on two variables Y and X and interest is on a parameter that is present in the marginal distribution of Y but not in the marginal distribution of X, and with X and Y dependent and possibly in the presence of other parameters which are nuisance. Could one gain more efficiency in the point estimation (also, in hypothesis testing and interval estimation) about the parameter of interest by using the full data (both Y and X values) instead of just the Y values? Also, how should one measure the information provided by random observables or their distributions about the parameter of interest? We illustrate these issues using a simple bivariate normal distribution model. The ideas could have important implications in the context of multiple hypothesis testing or simultaneous estimation arising in the analysis of microarray data, or in the analysis of event time data, especially those dealing with recurrent event data.
  • Chapter 23 Flexible Budgets and Standard Cost Systems
    Chapter 23 Flexible Budgets and Standard Cost Systems Review Questions 1. What is a variance? A variance is the difference between an actual amount and the budgeted amount. 2. Explain the difference between a favorable and an unfavorable variance. A variance is favorable if it increases operating income. For example, if actual revenue is greater than budgeted revenue or if actual expense is less than budgeted expense, then the variance is favorable. If the variance decreases operating income, the variance is unfavorable. For example, if actual revenue is less than budgeted revenue or if actual expense is greater than budgeted expense, the variance is unfavorable. 3. What is a static budget performance report? A static budget is a budget prepared for only one level of sales volume. A static budget performance report compares actual results to the expected results in the static budget and reports the differences (static budget variances). 4. How do flexible budgets differ from static budgets? A flexible budget is a budget prepared for various levels of sales volume within a relevant range. A static budget is prepared for only one level of sales volume—the expected number of units sold— and it doesn’t change after it is developed. 5. How is a flexible budget used? Because a flexible budget is prepared for various levels of sales volume within a relevant range, it provides the basis for preparing the flexible budget performance report and understanding the two components of the overall static budget variance (a static budget is prepared for only one level of sales volume, and a static budget performance report shows only the overall static budget variance).
  • Random Variables Generation
Random Variables Generation. Revised version of the slides based on the book Discrete-Event Simulation: A First Course, L. L. Leemis & S. K. Park, Sections 6.1, 6.2, 7.1, 7.2. © 2006 Pearson Ed., Inc. 0-13-142917-5. Introduction: Monte Carlo simulators differ from trace-driven simulators because of the use of random number generators to represent the variability that affects the behaviors of real systems. Uniformly distributed random variables are the most elementary representations that we can use in Monte Carlo simulation, but are not enough to capture the complexity of real systems. We must thus devise methods for generating instances (variates) of arbitrary random variables. Properly using uniform random numbers, it is possible to obtain this result. In the sequel we will first recall some basic properties of discrete and continuous random variables and then we will discuss several methods to obtain their variates. Basic Probability Concepts: Empirical probability derives from performing an experiment many times n and counting the number of occurrences nₐ of an event A. The relative frequency of occurrence of the event is nₐ/n. The frequency theory of probability asserts that the relative frequency converges as n → ∞:

    Pr(A) = lim_{n→∞} nₐ/n.

Axiomatic probability is a formal, set-theoretic approach: mathematically construct the sample space and calculate the number of events A. The two are complementary!
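As a preview of the variate-generation methods the slides discuss, here is a minimal inverse-transform sketch (ours, assuming NumPy; the Exponential(λ) target is our choice, not from the slides): push Uniform(0, 1) draws through the inverse CDF and check an empirical relative frequency against its limit.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 2.0

def exp_variate(u, lam):
    # Inverse-transform method: F^{-1}(u) = -ln(1 - u)/lam for Exponential(lam)
    return -np.log(1.0 - u) / lam

for n in (100, 10_000, 1_000_000):
    x = exp_variate(rng.random(n), lam)
    # Relative frequency of A = {X <= 1} converges to Pr(A) = 1 - exp(-lam)
    print(n, np.mean(x <= 1.0), 1 - np.exp(-lam))
```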
  • Comparison of the Estimation Efficiency of Regression Parameters Using the Bayesian Method and the Quantile Function
J. Stat. Appl. Pro. Lett. 6, No. 1, 11-20 (2019). Journal of Statistics Applications & Probability Letters, An International Journal. http://dx.doi.org/10.18576/jsapl/060102. Comparison of the Estimation Efficiency of Regression Parameters Using the Bayesian Method and the Quantile Function. Ismail Hassan Abdullatif Al-Sabri (1: Department of Mathematics, College of Science and Arts (Muhayil Asir), King Khalid University, KSA; 2: Department of Mathematics and Computer, Faculty of Science, Ibb University, Yemen). Received: 3 Jul. 2018, Revised: 2 Sep. 2018, Accepted: 12 Sep. 2018, Published online: 1 Jan. 2019. Abstract: There are several classical as well as modern methods to predict model parameters. The modern methods include the Bayesian method and the quantile function method, which are both used in this study to estimate Multiple Linear Regression (MLR) parameters. Unlike the classical methods, the Bayesian method is based on the assumption that the model parameters are variable, rather than fixed, estimating the model parameters by the integration of prior information (prior distribution) with the sample information (posterior distribution) of the phenomenon under study. By contrast, the quantile function method aims to construct the error probability function using least squares and the inverse distribution function. The study investigated the efficiency of each of them, and found that the quantile function is more efficient than the Bayesian method, as the values of Theil's coefficient U and the least squares of the two methods came to U = 0.052 and ∑eᵢ² = 0.011, compared to U = 0.556 and ∑eᵢ² = 1.162, respectively. Keywords: Bayesian Method, Prior Distribution, Posterior Distribution, Quantile Function, Prediction Model. 1. Introduction: Multiple linear regression (MLR) analysis is an important statistical method for predicting the values of a dependent variable using the values of a set of independent variables.
  • Wooldridge, Introductory Econometrics, 4th Ed., Appendix C
Wooldridge, Introductory Econometrics, 4th ed. Appendix C: Fundamentals of mathematical statistics. A short review of the principles of mathematical statistics. Econometrics is concerned with statistical inference: learning about the characteristics of a population from a sample of the population. The population is a well-defined group of subjects, and it is important to define the population of interest. Are we trying to study the unemployment rate of all labor force participants, or only teenaged workers, or only AHANA workers? Given a population, we may define an economic model that contains parameters of interest: coefficients, or elasticities, which express the effects of changes in one variable upon another. Let Y be a random variable (r.v.) representing a population with probability density function (pdf) f(y; θ), with θ a scalar parameter. We assume that we know f, but do not know the value of θ. Let a random sample from the population be (Y₁, ..., Y_N), with Yᵢ being an independent random variable drawn from f(y; θ). We speak of Yᵢ being iid: independently and identically distributed. We often assume that random samples are drawn from the Bernoulli distribution (for instance, if I pick a student randomly from my class list, what is the probability that she is female? That probability is γ, where γ% of the students are female, so P(Yᵢ = 1) = γ and P(Yᵢ = 0) = 1 − γ). For many other applications, we will assume that samples are drawn from the Normal distribution. In that case, the pdf is characterized by two parameters, µ and σ², expressing the mean and spread of the distribution, respectively.
  • The Third Moment in Equity Portfolio Theory – Skew
May 2020. Michael Cheung, Head of Quantitative Research & Portfolio Manager; Michael Messinger, Member & Portfolio Manager; Richard Duff, President & Portfolio Manager; Michael Sasaki, CFA, Analyst. Skew Portfolio Theory (SPT) utilizes statistical skew to build a dynamic weighted portfolio with potential to capture additional risk premia. Summary: Numbers are nothing more than an abstract way to describe and model natural phenomena. Applied to capital markets, historical numerical data is often utilized as a proxy to describe the behavior of security price movement. Harry Markowitz proposed Modern Portfolio Theory (MPT) in 1952, and since then, it has been widely adopted as the standard for portfolio optimization based on expected returns and market risk. While the principles in MPT are important to portfolio construction, they fail to address real-time investment challenges. In this primer, we explore the efficacy of statistical skew and its application to real-time portfolio management. What we find is that statistical skew can be utilized to build a more dynamic portfolio with potential to capture additional risk premia: Skew Portfolio Theory (SPT). We then apply SPT to propose a new rebalancing process based on statistical skewness (Skew). Skew not only captures risk in terms of standard deviation, but by analyzing how skew changes over time, we can measure how statistical trends are changing, thus facilitating active management decisions to accommodate the trends. The positive results led to the creation and management of Redwood's Equity Skew Strategy that is implemented in real time. In short: Skew is a less commonly viewed statistic that captures the mean, standard deviation, and tilt of historical returns in one number.
  • A Computational View of Market Efficiency
Quantitative Finance, Vol. 11, No. 7, July 2011, 1043–1050. A computational view of market efficiency. Jasmina Hasanhodzic (AlphaSimplex Group, LLC, One Cambridge Center, Cambridge, MA 02142, USA), Andrew W. Lo (MIT Sloan School of Management, 50 Memorial Drive, E52-454, Cambridge, MA 02142, USA) and Emanuele Viola (College of Computer and Information Science, Northeastern University, 440 Huntington Ave., #246WVH, Boston, MA 02115, USA). (Received 25 April 2010; in final form 16 November 2010.) We study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be efficient with respect to resources S (e.g., time, memory) if no strategy using resources S can make a profit. As a first step, we consider memory-m strategies whose action at time t depends only on the m previous observations at times t − m, ..., t − 1. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory m can lead to 'market conditions' that were not present initially, such as (1) market spikes and (2) the possibility for a strategy using memory m′ > m to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms. Keywords: Agent based modelling; Bounded rationality; Complexity in finance; Behavioral finance. 1. Introduction: Market efficiency—the idea that ''prices fully reflect all available information''—is one of the most important concepts in economics. If markets are efficient in 'frictionless' markets and costless trading, then prices fully reflect all available information. Therefore, no profits can be garnered from information-based trading because such profits must have already been captured.
  • Chapter 9. Properties of Point Estimators and Methods of Estimation
Chapter 9. Properties of Point Estimators and Methods of Estimation. 9.1 Introduction; 9.2 Relative Efficiency; 9.3 Consistency; 9.4 Sufficiency; 9.5 The Rao–Blackwell Theorem and Minimum-Variance Unbiased Estimation; 9.6 The Method of Moments; 9.7 The Method of Maximum Likelihood.

9.1 Introduction. An estimator θ̂ = θ̂ₙ = θ̂(Y₁, ..., Yₙ) for θ is a function of the n random samples Y₁, ..., Yₙ. The sampling distribution of θ̂ is the probability distribution of θ̂. An unbiased estimator θ̂ satisfies E(θ̂) − θ = 0. Properties of θ̂: efficiency, consistency, sufficiency. Rao–Blackwell theorem: an unbiased estimator with small variance is a function of a sufficient statistic. Estimation methods: minimum-variance unbiased estimation, the method of moments, and the method of maximum likelihood.

9.2 Relative Efficiency. We would like to have an estimator with smaller bias and smaller variance: if one can find several unbiased estimators, we want to use the one with the smallest variance. Relative efficiency (Definition 9.1): Suppose θ̂₁ and θ̂₂ are two unbiased estimators for θ, with variances V(θ̂₁) and V(θ̂₂), respectively. Then the relative efficiency of θ̂₁ relative to θ̂₂, denoted eff(θ̂₁, θ̂₂), is defined to be the ratio

    eff(θ̂₁, θ̂₂) = V(θ̂₂) / V(θ̂₁).

If eff(θ̂₁, θ̂₂) > 1, then V(θ̂₂) > V(θ̂₁), and θ̂₁ is relatively more efficient than θ̂₂. If eff(θ̂₁, θ̂₂) < 1, then V(θ̂₂) < V(θ̂₁), and θ̂₂ is relatively more efficient than θ̂₁.
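Definition 9.1 is easy to explore by simulation (our sketch, assuming NumPy; not part of the chapter): for N(θ, 1) samples both the sample mean and the sample median are unbiased for θ, and the simulated ratio eff(mean, median) = V(median)/V(mean) is roughly π/2 ≈ 1.571 for moderate n, the asymptotic relative efficiency.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 101, 20_000
samples = rng.normal(0.0, 1.0, size=(reps, n))   # reps independent N(0, 1) samples of size n

v_mean = samples.mean(axis=1).var()              # simulated V(sample mean)
v_median = np.median(samples, axis=1).var()      # simulated V(sample median)
print(v_median / v_mean)                         # roughly pi/2 ~ 1.571
```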