The Application of Quantitative Trade-Off Analysis to Guide Efficient Medicine Use


The application of quantitative trade-off analysis to guide efficient medicine use within the UK NHS

by Pritesh Naresh Bodalia, MPharm, PgCertPP, PgDipGPP, MSc, MRPharmS

A thesis submitted for the award of PhD
Faculty of Population Health Sciences
Institute of Epidemiology and Health Care
University College London
March 2018

I, Pritesh Naresh Bodalia, confirm that the work presented in this thesis is my own. Where information has been derived from other sources, I confirm that this has been indicated in the thesis.

Abstract

Background: The UK National Health Service (NHS) currently spends in excess of £17 billion per annum on medicines. To ensure sustainability, medicines with limited clinical value should not be routinely prescribed. The randomised controlled trial (RCT) aims to meet regulatory standards and therefore has limited value in guiding practice when multiple agents from the same therapeutic class are available. Quantitative trade-off analysis, with consideration of efficacy, safety, tolerability and/or cost, may enable the translation of regulatory data into clinically relevant data for new and existing medicines.

Methods: This concept aims to provide clarity to clinicians and guideline developers on the efficient use of new medicines where multiple options exist for the same licensed indication. Research projects of clinical relevance were identified from an academically led London Area Prescribing Committee (APC). Therapeutic areas included refractory epilepsy, hypertension and heart failure, overactive bladder syndrome, and atrial fibrillation. Frequentist and/or Bayesian meta-analyses were performed and supplemented with a trade-off analysis whose parameters were determined in consultation with field experts.

Results: The trade-off analysis established a rank order of the treatments considered, thereby providing clarity to decision-makers on the DTC/APC panel where regulatory data could not. The results, presented as a hierarchy of treatments, enabled modifications to prescribing trends within North Central London as the pilot site, resulting in significant cost avoidance and cost savings for the NHS.

Conclusions: The quantitative trade-off analysis was able to resolve concerns raised by the DTC/APC panel via translation of regulatory data into clinically relevant data with consideration of defined benefits and harms. Results were implemented successfully within a local pilot health economy. This approach is recommended as an extension of the existing methods required by regulatory agencies in the assessment and licensing of new medicines.
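To make the trade-off concept concrete, the sketch below ranks a set of treatments by a weighted composite of normalised efficacy, tolerability and cost scores. This is a minimal illustration only, assuming a simple linear weighting: the drug names, effect values and weights are hypothetical placeholders rather than figures from the thesis, whose analyses drew on meta-analysis outputs and expert-agreed parameters.

```python
# A minimal sketch of a quantitative trade-off ranking, not the thesis's
# actual model. All drug names, effect estimates and weights below are
# hypothetical placeholders for illustration.

# Each treatment: relative efficacy, tolerability (higher is better)
# and annual cost in GBP (lower is better).
treatments = {
    "drug_a": {"efficacy": 0.72, "tolerability": 0.90, "cost": 420.0},
    "drug_b": {"efficacy": 0.68, "tolerability": 0.95, "cost": 150.0},
    "drug_c": {"efficacy": 0.75, "tolerability": 0.80, "cost": 980.0},
}

# In practice the weights would be agreed in consultation with field experts.
weights = {"efficacy": 0.5, "tolerability": 0.3, "cost": 0.2}

def normalise(values, higher_is_better=True):
    """Rescale values to [0, 1], inverting when lower raw values are better."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero when all values equal
    scaled = [(v - lo) / span for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

names = list(treatments)
scores = dict.fromkeys(names, 0.0)
for criterion, weight in weights.items():
    raw = [treatments[n][criterion] for n in names]
    for name, scaled in zip(names, normalise(raw, criterion != "cost")):
        scores[name] += weight * scaled

# Rank order of treatments, best composite score first.
for rank, (name, score) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank}. {name}: composite score {score:.3f}")
```

The design point the sketch captures is that no single criterion decides the hierarchy: a cheap but less effective agent and an effective but poorly tolerated agent can each be outranked once all parameters are weighed together.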
Impact Statement

The UK National Health Service (NHS) currently spends 15% of its £116 billion annual budget on medicines; this is in excess of £17 billion, having grown from £13 billion in 2011-12. To ensure that the most effective medicines are available to patients, provider Acute Trusts have established local and/or regional Drugs and Therapeutics Committees (DTCs) to maintain a local formulary. Despite being the gold standard in evidence-based medicine, randomised controlled trials (RCTs) are designed to meet regulatory standards; they therefore have limited applicability in guiding clinical practice, particularly when multiple agents from the same therapeutic class are available.

Guidance published by the National Institute for Health and Care Excellence (NICE) determines the clinical and cost-effectiveness of each individual new medicine, where the output is either the recommendation or non-recommendation of that medicine; where a medicine is recommended, it must be made available for prescribing. Establishing the place in therapy of a new and/or existing treatment as part of the DTC review is therefore often unclear. The UCL Research Department of Epidemiology & Public Health aims to provide leading, multi-disciplinary research with impact on real-world policy and practice. Quantitative trade-off analysis is a concept that weighs key efficacy, acceptability and cost parameters for a range of similar medicines licensed for a given therapeutic indication, and aims to provide clarity to clinicians and guideline developers where multiple options exist. The methodology employed to undertake this analysis is based on Bayesian statistics, using network meta-analysis (NMA) techniques via the WinBUGS software. Specific therapeutic areas explored include: refractory epilepsy; cardiovascular disease (hypertension and heart failure); overactive bladder syndrome; and atrial fibrillation.

The research themes and projects were conceived in conjunction with a multi-disciplinary Area Prescribing Committee (regional DTC) within London and made possible through effective collaboration across a number of academic organisations, including University College London, UCL Hospitals, the University of Bristol, and the London School of Hygiene and Tropical Medicine. Specific areas of collaboration included individuals with backgrounds in statistical modelling and methodology, clinical medicine, health economics, and patient partnerships.

The application of quantitative trade-off analysis has successfully aided the establishment of a hierarchy of treatments where regulatory data could not, and has been used to change practice across a pilot area of the UK NHS, resulting in cost savings or cost avoidance without compromising patient care. Indirect comparisons generated through NMA, as conducted across a number of clinical specialities, not only helped develop rational treatment hierarchies but also provided guidance on the choice of high-priority comparator agents for direct head-to-head analysis in the form of an RCT. This approach is recommended as an extension of the existing methods required by regulatory agencies in the assessment and licensing of new medicines. The outputs of the analyses conducted have been published in leading medical journals to inform wider practice.
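The indirect comparisons referred to above can be illustrated by the simplest building block underlying NMA, the Bucher adjusted indirect comparison: where treatments A and B have each been trialled against a common comparator C, an A-versus-B estimate follows from the difference of the two log odds ratios, with their variances summing. The sketch below is a hedged illustration with invented inputs; the thesis itself used full Bayesian NMA models in WinBUGS rather than this two-trial shortcut.

```python
# A hedged sketch of the Bucher adjusted indirect comparison, the simplest
# precursor to network meta-analysis. The log-odds-ratio inputs below are
# invented for illustration only.
import math

def indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Indirect log-OR of A vs B via common comparator C, with a 95% CI."""
    log_or_ab = log_or_ac - log_or_bc        # difference of relative effects
    se_ab = math.sqrt(se_ac**2 + se_bc**2)   # variances of the two trials add
    lo = log_or_ab - 1.96 * se_ab
    hi = log_or_ab + 1.96 * se_ab
    return math.exp(log_or_ab), (math.exp(lo), math.exp(hi))

# Hypothetical inputs: OR 0.70 for A vs placebo, OR 0.85 for B vs placebo.
or_ab, ci = indirect_comparison(math.log(0.70), 0.10, math.log(0.85), 0.12)
print(f"Indirect OR, A vs B: {or_ab:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

Because the two trial variances add, an indirect estimate is always less precise than a head-to-head trial of the same size, which is why such analyses also help prioritise which comparisons most merit a direct RCT.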
Acknowledgements

I am grateful to Prof Aroon Hingorani and Prof Juan-Pablo Casas for their supervision, time and input. Similarly, Prof Raymond MacAllister and Dr Anthony Grosso have been invaluable in their assistance and enthusiasm throughout. I am also particularly grateful to Dr Robert Urquhart and Mr Ashik Shah for their continued mentorship of my research and development activities. I would also like to thank Mr David Wonderling for his advice and tuition on the use of WinBUGS to perform the Bayesian network meta-analyses within this thesis, and Dr Nicola Welton for her advice and support on developing techniques to the standard of the NICE [Technical] Decision Support Unit.

In addition, I am indebted to Dr Michael Scott and the health economics team at the University of Bristol for their contribution to the thesis in performing cost-utility analyses. I offer thanks to Prof Jonathan Sterne for leading the NIHR project which forms a section of this thesis, and to my teams, both within UCL Hospitals Pharmacy and the NCL Joint Formulary Committee, for their support and understanding. I am thankful to the numerous experienced academics who have given time to perceptively critique this research, including the anonymous peer reviewers selected by the medical journal editors. Finally, I am, as ever, grateful to my family, wife and two boys for their continued encouragement without exception.

Table of Contents

ABSTRACT
IMPACT STATEMENT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF ABBREVIATIONS
LIST OF FIGURES
LIST OF TABLES
1.0 CHAPTER 1: INTRODUCTION
1.1 INTRODUCTORY STATEMENT
1.2 DRUG APPRAISAL PROCESS IN CLINICAL TRIALS
1.2.1 Phase I clinical trials
1.2.2 Phase II clinical trials
1.2.3 Phase III clinical trials
1.2.4 Phase IV clinical trials
1.3 PHARMACEUTICAL INDUSTRY
1.3.1 Design Bias
1.3.2 Data Interpretation