Tail Index Estimation: Quantile-Driven Threshold Selection


Bank of Canada Staff Working Paper / Document de travail du personnel 2019-28, August 2019

by Jon Danielsson (1), Lerby M. Ergun (2), Laurens de Haan (3) and Casper G. de Vries (4)

(1) London School of Economics
(2) Financial Markets Department, Bank of Canada, Ottawa, Ontario, Canada K1A 0G9; [email protected]
(3) Erasmus University Rotterdam
(4) Erasmus University Rotterdam and Tinbergen Institute

Bank of Canada staff working papers provide a forum for staff to publish work-in-progress research independently from the Bank’s Governing Council. This research may support or challenge prevailing policy orthodoxy. Therefore, the views expressed in this paper are solely those of the authors and may differ from official Bank of Canada views. No responsibility for them should be attributed to the Bank. www.bank-banque-canada.ca

ISSN 1701-9397 © 2019 Bank of Canada

Acknowledgements

Corresponding author: Lerby M. Ergun. We thank the Netherlands Organisation for Scientific Research for the Mozaiek grant [grant number 017.005.108] and the Economic and Social Research Council (ESRC) for its support in funding the Systemic Risk Centre [grant number ES/K002309/1].

Abstract

The selection of upper order statistics in tail estimation is notoriously difficult. Methods that are based on asymptotic arguments, like minimizing the asymptotic MSE, do not perform well in finite samples. Here, we advance a data-driven method that minimizes the maximum distance between the fitted Pareto-type tail and the observed quantile. To analyze the finite sample properties of the metric, we perform rigorous simulation studies. In most cases, the finite sample-based methods perform best. To demonstrate the economic relevance of choosing the proper methodology, we use daily equity return data from the CRSP database and find economically relevant variation between the tail index estimates.

Bank topics: Financial stability; Econometric and statistical methods
JEL codes: C01, C14, C58

1 Introduction

In various research fields, like geography and economics, distributions exhibit heavy tails, e.g., the scaling behaviour described by the power laws of Pareto. In this literature, the tail index, α, is the shape parameter in the power that determines how heavy the tail is (a higher α corresponds to a thinner tail). The most popular estimator for the tail index is the one by Hill (1975), which is the quasi-maximum likelihood estimator. In the statistical literature, there is an ongoing debate on the number of tail observations k that are used in the estimation of α. The choice of k leads to a trade-off between the bias and the variance of the estimator. Although practitioners tend to use "Eye-Ball" methods (Resnick and Starica, 1997), the typical approach suggested in the statistical literature is to choose k by minimizing the asymptotic mean squared error (MSE). The methods used to find this k are asymptotically consistent but have unsatisfactory finite sample properties.

This paper proposes a novel approach for choosing the optimal k, labelled k∗. The methodology is based on fitting the tail of a heavy-tailed distribution by minimizing the maximum deviation in the quantile dimension. We show that this method outperforms other methods put forth in the literature.
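For concreteness, the Hill estimator referred to above is α̂ = [ (1/k) Σ_{i=1}^{k} ( ln X_(n-i+1) − ln X_(n-k) ) ]^(-1), where X_(1) ≤ ... ≤ X_(n) are the order statistics of the sample. The following minimal Python sketch is our illustration, not code from the paper; the function and variable names are hypothetical.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill (1975) estimator of the tail index alpha, using the k largest
    observations in x and the (k+1)-th largest as the threshold.
    Minimal illustrative sketch, not the authors' implementation."""
    x = np.sort(np.asarray(x, dtype=float))            # ascending order statistics
    log_excesses = np.log(x[-k:]) - np.log(x[-k - 1])  # log-spacings above the threshold
    gamma = log_excesses.mean()                        # mean log-excess equals 1/alpha
    return 1.0 / gamma

# Sanity check on a Pareto sample with tail index 3: the estimate should be
# close to 3 for a reasonable range of k.
rng = np.random.default_rng(0)
sample = rng.pareto(3.0, size=10_000) + 1.0
print(hill_estimator(sample, k=200))
```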
Hall (1990) and Danielsson et al. (2001) use a bootstrap procedure to minimize the MSE. Drees and Kaufmann (1998) exploit the same bias and variance trade-off but use the maximum random fluctuation of the estimator to locate the point where the trade-off is optimal. These methods are based on asymptotic arguments and may not perform very well in finite samples.

The shortcomings of the currently available methods motivate our new approach. In this paper, we propose using the penalty function of the Kolmogorov-Smirnov (KS) test statistic to fit the tail of the distribution. The KS statistic is measured as the maximum difference between the empirical distribution and a parametric distribution in the probability dimension. Clauset et al. (2009) use this metric to find the optimal k. In a response, Drees et al. (2018) find that the KS statistic chooses values that are too low for the optimal k and therefore introduces a large variance in the Hill estimate. In this paper, we use the KS statistic, but with a twist. Instead of minimizing the distance between the empirical distribution and the parametric distribution in the probability dimension, we minimize the distance in the quantile dimension. This measure is henceforth referred to as the KS-distance metric.

All heavy-tailed distributions in the sense of regular variation (see below) are characterized by ultimate power law behaviour. For a large subset of distributions, the tails, to a first-order expansion at infinity, correspond to the tail of a Pareto distribution. The estimates of the shape parameter α use the k highest order statistics, assuming that these order statistics fit well to a Pareto tail. By varying k, we are able to simultaneously fit the empirical distribution and elicit k∗.

The particular choice of the metric is mainly motivated by the fact that small deviations in the probability dimension lead to increasingly large distortions in the quantile dimension. Deep into the tail, the difference in the probability dimension is of order 1/n, but in the quantile dimension it is of order (n/k)^(1/α). This effect is amplified as one moves deeper into the tail of the distribution. Basing the metric in the quantile domain therefore puts a natural emphasis on fitting the tail observations as opposed to the centre observations. Furthermore, by focusing on the maximum, the metric is not diluted by the numerous centre observations. These intuitive arguments are backed by rigorous simulation analyses and tests.

In the simulation study, we draw i.i.d. samples from the Fréchet, symmetric stable and Student-t distributions. For these distribution families, Hall's second-order expansion of the cumulative distribution function (CDF) holds. Given this expansion, Hall and Welsh (1985) derive the theoretical k∗, which minimizes the asymptotic MSE. This allows us to study the relationship between the chosen k∗ and the estimated tail index. Furthermore, we employ dependent time series in the simulation exercise. The autoregressive conditional heteroskedastic (ARCH) processes introduced by Engle (1982) model the volatility clustering in time-series data. For the ARCH process, we do not know the variance and the bias of the Hill estimator. However, the tail index is known. Therefore, we are able to evaluate the performance of the methods under the clustering of extremes. To test the performance of the KS-distance metric, we analyze it from four different perspectives.
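To make the KS-distance metric concrete, the sketch below is our reading of the idea rather than the authors' code: for each candidate k it fits a Pareto tail with the Hill estimate, maps tail probabilities j/n to the Pareto-implied quantiles X_(n-k) (k/j)^(1/α̂), and records the maximum absolute deviation from the corresponding empirical quantiles X_(n-j+1); k∗ is the candidate that minimizes this maximum. The region over which the maximum is taken is an assumption of this sketch and may differ from the paper.

```python
import numpy as np

def ks_distance_quantile(x, k):
    """Maximum absolute deviation, in the quantile dimension, between the
    empirical tail quantiles and the quantiles implied by a Pareto tail
    fitted with the Hill estimate based on k upper order statistics.
    Illustrative sketch; the paper's exact definition may differ, e.g. in
    the region over which the maximum is taken."""
    x = np.sort(np.asarray(x, dtype=float))
    threshold = x[-k - 1]
    alpha = 1.0 / np.mean(np.log(x[-k:]) - np.log(threshold))
    j = np.arange(1, k + 1)                          # j-th largest observation
    fitted = threshold * (k / j) ** (1.0 / alpha)    # Pareto-implied quantile at p = j/n
    empirical = x[-j]                                # empirical tail quantiles
    return np.max(np.abs(fitted - empirical))

def select_k_star(x, k_grid):
    """Choose k* as the candidate k that minimizes the KS-distance metric."""
    return min(k_grid, key=lambda k: ks_distance_quantile(x, k))

# Example: on the Pareto sample from the previous sketch,
# select_k_star(sample, range(10, 1000)) returns the chosen k*.
```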
Firstly, we test the KS-distance metric against various other penalty functions. We use the MSE, the mean absolute error and the metric used in Dietrich et al. (2002) to benchmark the performance of our penalty function. To evaluate the ability of the different metrics to correctly penalize errors, we use results derived by Hall and Welsh (1985). Hall and Welsh derive the theoretical k∗ for the class of distributions that adhere to Hall's second-order expansion of the CDF. These results stipulate the behaviour of k∗ as a function of the sample size and α.

Secondly, we contrast the performance of the KS-distance metric with the other existing methods. We find that the KS-distance metric and the automated Eye-Ball method outperform other heuristic and statistical methodologies. For the Student-t, symmetric stable and Fréchet distributions, the competing methodologies do not reproduce the expected patterns for k∗ as derived by Hall and Welsh (1985). This leads to a larger bias in the estimate of α for less-heavy-tailed distributions. The theoretically motivated methodologies often choose a high value for k∗ (too many order statistics close to the centre of the distribution), which corresponds to a large bias. For the ARCH processes, the methods by Drees and Kaufmann (1998), Danielsson et al. (2001) and the fixed sample fraction introduce a large bias in the Hill estimator. For the dependent time series, the automated Eye-Ball method and the KS-distance metric produce small biases. This is comforting, as most economic time series exhibit volatility clustering.

In addition to estimating the tail index, we examine the quantile estimates for the various competing methods, since the ultimate goal is to obtain reliable estimates of the possible losses. When we analyze the quantile estimates, the KS-distance metric and the automated Eye-Ball method perform best for quantiles beyond the 99% probability level. For quantiles further towards the centre of the distribution, other methods produce smaller errors. Both the KS-distance metric and the automated Eye-Ball method have a tendency to pick a small k∗ and therefore fit the tail close to the maximum. The other methodologies often use a larger number of order statistics and consequently fit better closer towards the centre of the distribution.

In the last section of this paper, we show that the choice of k∗ can be economically important. For this purpose we use individual daily stock price information from the Centre for Research in Security Prices (CRSP) database. For 17,918 individual stocks, we estimate the left and right tail index of the equity returns. We measure the average absolute difference between the α estimated by the different methods and find that it ranges from 0.39 to 1.44. These differences are outside of the confidence interval of the Hill estimator.
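Since the quantile comparisons above rest on extrapolating beyond the chosen threshold, a standard EVT (Weissman-type) tail quantile formula can serve as a sketch of how a quantile estimate follows from a given k and the associated Hill fit; this is a textbook estimator shown for illustration and is not claimed to be the authors' exact implementation.

```python
import numpy as np

def tail_quantile(x, k, p):
    """Weissman-type estimate of the quantile at upper tail probability p
    (e.g. p = 0.01 for the 99% level), extrapolated from a Hill fit based on
    k upper order statistics.  Textbook formula, shown for illustration."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    threshold = x[-k - 1]
    alpha = 1.0 / np.mean(np.log(x[-k:]) - np.log(threshold))
    return threshold * (k / (n * p)) ** (1.0 / alpha)
```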