Birth Cohort Effects Among US-Born Adults Born in the 1980s: Foreshadowing Future Trends in US Obesity Prevalence


Birth cohort effects among US-born adults born in the 1980s: foreshadowing future trends in US obesity prevalence

International Journal of Obesity (2013) 37, 448–454; doi:10.1038/ijo.2012.66; published online 1 May 2012. © 2013 Macmillan Publishers Limited. All rights reserved 0307-0565/13. www.nature.com/ijo

ORIGINAL ARTICLE

WR Robinson(1,2), KM Keyes(3), RL Utz(4), CL Martin(1) and Y Yang(2,5)

BACKGROUND: Obesity prevalence stabilized in the US in the first decade of the 2000s. However, obesity prevalence may resume increasing if younger generations are more sensitive to the obesogenic environment than older generations.

METHODS: We estimated cohort effects for obesity prevalence among young adults born in the 1980s. Using data collected from the National Health and Nutrition Examination Survey between 1971 and 2008, we calculated obesity prevalence for respondents aged between 2 and 74 years. We used the median polish approach to estimate smoothed age and period trends; residual non-linear deviations from age and period trends were regressed on cohort indicator variables to estimate birth cohort effects.

RESULTS: After taking into account age effects and ubiquitous secular changes, cohorts born in the 1980s had increased propensity to obesity versus those born in the late 1960s. The cohort effects were 1.18 (95% CI: 1.01, 1.07) and 1.21 (95% CI: 1.02, 1.09) for the 1979–1983 and 1984–1988 birth cohorts, respectively. The effects were especially pronounced in Black males and females but appeared absent in White males.

CONCLUSIONS: Our results indicate a generational divergence of obesity prevalence. Even if age-specific obesity prevalence stabilizes in those born before the 1980s, age-specific prevalence may continue to rise in the 1980s cohorts, culminating in record-high obesity prevalence as this generation enters its ages of peak obesity prevalence.

Keywords: age factors; young adult; developmental origins; epidemiology; models; statistical

INTRODUCTION

The 1980s are generally regarded as the start of the US 'obesity epidemic'.1 During the 1980s, age-standardized obesity prevalence (body mass index (BMI) ≥30.0 kg/m²) increased in US adults by 55%, from 14.5% to 22.5%. This increase occurred after two decades in which obesity prevalence was relatively stable.2 The obesity increases of the 1980s continued in the 1990s, then levelled off in the 2000s.2,3 In fact, between the early and late 2000s, in child and adult populations in the US, Europe and Australia, obesity prevalence either stopped increasing or increases decelerated.3 It is unclear whether obesity prevalence in the US and other countries has peaked or whether prevalence will resume increasing in the future.2,3 On the one hand, obesity prevalence may peak as anti-obesity efforts abate rising rates or as populations reach a prevalence ceiling. On the other hand, even in the US, the majority of adults are not obese and may remain susceptible to increasing rates of obesity.

Because the causes of previous increases are not fully understood, it is difficult to predict future obesity trends.2 Age–period–cohort analysis can improve forecasting by estimating one component of obesity trends: the relative susceptibility of birth cohorts to obesity. For example, Faeh et al.4 used a birth-cohort analysis to examine whether the stabilization of overweight prevalence in Switzerland is a temporary phenomenon. They estimated that the birth cohorts of the 1960s and 1970s are more prone to overweight than those born between 1930 and 1959; therefore, Faeh et al.4 concluded that overweight prevalence may resume increasing as susceptible cohorts, adults currently in their 30s and 40s, age into their 50s and 60s, when risk of overweight is at its highest. This type of birth-cohort analysis is not definitive, however, because it failed to disentangle period effects from age-specific cohort trends.5

To forecast future trends in the US, understanding the cohort-specific obesity susceptibility of those born in the 1980s may be key.5 Individuals born during the 1980s experienced gestation and early childhood, potentially key developmental life stages for obesity development, during an era of rapidly increasing obesity prevalence. We hypothesize that the 1980s cohorts may be particularly sensitive to the obesogenic environment. For instance, hypotheses about the developmental origins of adult obesity posit that in utero and early-childhood exposures to obesogenic environments have latent biological or behavioral consequences that increase a person's susceptibility to excess weight gain into adulthood.6–9 Two previous analyses of US data found limited evidence of obesity risk being increased for cohorts born in the 1980s versus those born in the 1950s and 1960s but not the 1970s.10,11 However, both studies had limited data on these birth cohorts. The studies only assessed adult body size. Therefore, they observed the young adults born in the early 1980s for only the first few years of their adult lives in the 2000s, a narrow window of time to draw conclusions about lifelong cohort-specific risk. In addition, the studies only assessed trends in US Whites and Blacks, failing to incorporate the experience of other US racial and ethnic groups.

In this paper, we estimated cohort-specific propensity to obesity for those born in the US in the 1980s. We used data on childhood obesity as well as adult obesity in order to produce estimates over the entire lifecourse of those born in the 1980s. Because the levelling off of obesity prevalence in the 2000s was more pronounced in women than in men,2 we produced analyses stratified by sex. Further, because obesity prevalence varies significantly by racial/ethnic and sex subgroups, we also produced stratified estimates for the subgroups on which we had data extending back to the 1970s: non-Hispanic Black, non-Hispanic White and US-born Mexican–American men and women.

Affiliations: 1 Department of Epidemiology, University of North Carolina at Chapel Hill, Gillings School of Global Public Health, Chapel Hill, NC, USA; 2 Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; 3 Department of Epidemiology, Mailman School of Public Health, Columbia University, New York, NY, USA; 4 Department of Sociology, University of Utah, Salt Lake City, UT, USA; 5 Department of Sociology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Correspondence: Dr WR Robinson, Department of Epidemiology, University of North Carolina at Chapel Hill, Gillings School of Global Public Health, McGavran-Greenberg Hall, CB# 7435, Chapel Hill 27510, NC, USA. E-mail: [email protected]

Received 12 January 2012; revised 7 March 2012; accepted 26 March 2012; published online 1 May 2012

MATERIALS AND METHODS

Sample

We analyzed data from the National Health and Nutrition Examination Survey (NHANES), a nationally representative sample of the US civilian non-institutionalized population.12–14 We included survey waves conducted between 1971 and 2008: NHANES I (1971–1975), NHANES II (1976–1980), NHANES III, phase 1 (1988–1991), NHANES III, phase 2 (1991–1994), and the continuous surveys (1999–2008), which are released in 2-year increments. NHANES uses a complex, stratified, multi-stage probability cluster sampling design. We included survey weights in all analyses to correct for oversampling and non-response.2

We limited the data set to individuals aged 2–74 years who were born in the United States. We omitted foreign-born respondents because years spent in another country before immigration violate the model's assumption that individuals shared period and birth cohort exposures across their lifespans.15,16 We further excluded respondents who had […]

[…] polish technique.5 The median polish approach explicitly defines cohort effects as interactions of age and period effects. That is, this model assumes that effects of environmental influences vary by age and can be meaningfully estimated as cohort effects.

To implement the median polish technique, we first created a 15 × 8 contingency table of obesity prevalence. The 15 rows represent fifteen 5-year age groups, whereas the eight columns denote eight 5-year blocks of calendar time (Figure 1). The diagonals represent 22 birth cohorts. NHANES assessed obesity prevalence during periods of variable timing and duration: 1971–1975, 1976–1980, 1988–1991, 1991–1994, then in continuous 2-year blocks from 1999–2008. Therefore, we approximated seven synthetic 5-year period categories using the NHANES data (Figure 1). Because no NHANES data were available between 1981 and 1988, we interpolated age-specific obesity prevalences for the synthetic period 1981–1985 by averaging age-specific prevalence for the previous (1976–1980) and subsequent (1986–1990) periods. For stratified analyses of subgroups with relatively small sample sizes and low obesity prevalence, for example, Mexican–American males, some cells in the contingency tables had an obesity prevalence of 0%. For identification purposes, we replaced values of 0% with 1.0% in the initial contingency tables.

Once the contingency table was complete (Table 1), we performed the median-polish method by iteratively subtracting the median prevalence value of each row or column from all the cells in its respective row or column. This process was repeated until the median values of all rows and columns equalled 0. This process removes the additive period and age effects. The values that remain in the table are non-additive residuals of the period and age effects. The median polish technique interprets these residuals as the sum of cohort effects and random error. Further statistical and conceptual details of the median polish method are given elsewhere.18,20,22

Using the residuals from the contingency table, we used generalized linear regression to estimate cohort effect ratios for each birth cohort. To define mutually exclusive birth cohort categories, we assigned each cell in the contingency table to a synthetic 5-year birth cohort category.
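The row-and-column sweep described above, and the grouping of residual cells into cohort diagonals, can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's code; the function names and the convergence tolerance are our own choices. Note that a 15 × 8 age-by-period table has 15 + 8 - 1 = 22 diagonals, matching the 22 birth cohorts mentioned above.

```python
import numpy as np

def median_polish(table, max_iter=20, tol=1e-6):
    """Tukey's median polish: iteratively sweep row and column medians
    out of a 2-D table until every row and column median is (near) zero.

    Fits the additive model: cell = overall + row_effect + col_effect + residual.
    """
    resid = np.array(table, dtype=float)
    overall = 0.0
    row_eff = np.zeros(resid.shape[0])
    col_eff = np.zeros(resid.shape[1])
    for _ in range(max_iter):
        # Sweep row medians into the row effects.
        med = np.median(resid, axis=1)
        resid -= med[:, None]
        row_eff += med
        m = np.median(row_eff)
        row_eff -= m
        overall += m
        # Sweep column medians into the column effects.
        med = np.median(resid, axis=0)
        resid -= med[None, :]
        col_eff += med
        m = np.median(col_eff)
        col_eff -= m
        overall += m
        # Stop once all row and column medians are essentially zero.
        if (np.abs(np.median(resid, axis=1)).max() < tol
                and np.abs(np.median(resid, axis=0)).max() < tol):
            break
    return overall, row_eff, col_eff, resid

def cohort_residuals(resid):
    """Average the residual cells along each diagonal of an age x period
    table; cells on the same diagonal share a (synthetic) birth cohort."""
    n_age, n_period = resid.shape
    cohorts = {}
    for i in range(n_age):
        for j in range(n_period):
            cohorts.setdefault(j - i, []).append(resid[i, j])
    # One summary value per cohort, earliest-born cohort first.
    return {k: float(np.mean(v)) for k, v in sorted(cohorts.items())}
```

On a purely additive table the residuals vanish; in the application above, the residuals that survive the polish are what get regressed on cohort indicator variables. The simple per-diagonal mean in `cohort_residuals` is only a stand-in for that regression step.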
Recommended publications
  • Median Polish with Covariate on Before and After Data
    IOSR Journal of Mathematics (IOSR-JM), e-ISSN: 2278-5728, p-ISSN: 2319-765X, Volume 12, Issue 3 Ver. IV (May–Jun. 2016), pp. 64–73. www.iosrjournals.org. Median Polish with Covariate on Before and After Data. Ajoge I., M.B. Adam, Anwar F. and A.Y. Sadiq. Abstract: The method of median polish with covariate is used for verifying the relationship between before and after treatment data. The relationship is based on the yield of grain crops for both before and after data in a classification of contingency table. The main effects, such as grand, column and row effects, were estimated using the median polish algorithm. A logarithmic transformation of the data before applying median polish is done to obtain reasonable results as well as to find the relationship between before and after treatment data; the data to be transformed must be positive. The results of median polish with covariate were evaluated, and findings showed that the median polish with covariate indicated a relationship between before and after treatment data. Keywords: Before and after data; Robustness; Exploratory data analysis; Median polish with covariate; Logarithmic transformation. I. Introduction: This paper introduces the background of the problem of using a median polish model for verifying the relationship between before and after treatment data in a classification of contingency table. We first developed the algorithm of median polish, followed by sweeping of column and row medians. This computational procedure, as noted by Tukey (1977), operates iteratively by subtracting the median of each column from each observation in that column, and then subtracting the median of each row from this updated table.
  • BIOINFORMATICS ORIGINAL PAPER doi:10.1093/bioinformatics/btm145
    Vol. 23 no. 13 2007, pages 1648–1657. BIOINFORMATICS ORIGINAL PAPER, doi:10.1093/bioinformatics/btm145. Data and text mining: An efficient method for the detection and elimination of systematic error in high-throughput screening. Vladimir Makarenkov(1,*), Pablo Zentilli(1), Dmytro Kevorkov(1), Andrei Gagarin(1), Nathalie Malo(2,3) and Robert Nadon(2,4). (1) Département d'informatique, Université du Québec à Montréal, C.P. 8888, succ. Centre-Ville, Montréal, QC, Canada, H3C 3P8; (2) McGill University and Genome Quebec Innovation Centre, 740 Dr. Penfield Ave., Montreal, QC, Canada, H3A 1A4; (3) Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, 1020 Pine Av. West, Montreal, QC, Canada, H3A 1A4; (4) Department of Human Genetics, McGill University, 1205 Dr. Penfield Ave., N5/13, Montreal, QC, Canada, H3A 1B1. Received on December 7, 2006; revised on February 22, 2007; accepted on April 10, 2007. Advance Access publication April 26, 2007. Associate Editor: Jonathan Wren. ABSTRACT. Motivation: High-throughput screening (HTS) is an early-stage process in drug discovery which allows thousands of chemical compounds to be tested in a single study. We report a method for correcting HTS data prior to the hit selection process (i.e. selection of active compounds). The proposed correction minimizes the impact of systematic errors which may affect the hit selection in experimental artefacts that might confound with important biological or chemical effects. The description of several methods for quality control and correction of HTS data can be found in Brideau et al. (2003), Gagarin et al. (2006a), Gunter et al. (2003), Heuer et al. (2003), Heyse (2002), Kevorkov and Makarenkov (2005), Makarenkov et al. (2006), Malo et al. (2006) and Zhang et al.
  • Exploratory Data Analysis
    Copyright © 2011 SAGE Publications. The ultimate quantitative extreme in textual data analysis uses scaling procedures borrowed from item response theory methods developed originally in psychometrics. Both Jon Slapin and Sven-Oliver Proksch's Poisson scaling model and Burt Monroe and Ko Maeda's similar scaling method assume that word frequencies are generated by a probabilistic function driven by the author's position on some latent scale of interest, and can be used to estimate those latent positions relative to the positions of other texts. Such methods may be applied to word frequency matrixes constructed from texts with no human decision making of any kind. The disadvantage is that while the scaled estimates resulting from the procedure represent relative differences between texts, they must be interpreted if a […] References: Klingemann, H.-D., Volkens, A., Bara, J., Budge, I., & McDonald, M. (2006). Mapping policy preferences II: Estimates for parties, electors, and governments in Eastern Europe, European Union and OECD 1990–2003. Oxford, UK: Oxford University Press. Laver, M., Benoit, K., & Garry, J. (2003). Extracting policy positions from political texts using words as data. American Political Science Review, 97(2), 311–331. Leites, N., Bernaut, E., & Garthoff, R. L. (1951). Politburo images of Stalin. World Politics, 3, 317–339. Monroe, B., & Maeda, K. (2004). Talk's cheap: Text-based estimation of rhetorical ideal-points (Working Paper). Lansing: Michigan State University. Slapin, J. B., & Proksch, S.-O. (2008). A scaling model for estimating time-series party positions from texts. American Journal of Political Science, 52(3), 705–722.
  • Expression Summarization
    Expression Quantification: Affy. Mikhail Dozmorov, Fall 2016. The Affymetrix GeneChip is an oligonucleotide array consisting of several perfect match (PM) probes and their corresponding mismatch (MM) probes that interrogate for a single gene. PM is the exact complementary sequence of the target genetic sequence, composed of 25 base pairs; the MM probe has the same sequence except that the middle (13th) base position has been reversed. There are roughly 11-20 PM/MM probe pairs that interrogate for each gene, called a probe set. Affymetrix expression array preprocessing: (1) background adjustment removes intensity contributions from optical noise and cross-hybridization, so the true measurements aren't affected by neighboring measurements (PM-MM; PM only; RMA; GC-RMA); (2) normalization removes array effects to make arrays comparable (constant or linear (MAS); rank invariant (dChip); quantile (RMA)); (3) summarization combines probe intensities into one measure per gene, reducing the 11-20 probe intensities on each array to a single number for gene expression (MAS 4.0, MAS 5.0; Li-Wong (dChip); RMA). The goal is to produce a measure that will serve as an indicator of the level of expression of a transcript using the PM (and possibly MM) values. Expression index estimates: single chip: MAS 4.0 (avgDiff), no longer recommended for use due to many flaws; multiple chip: MBEI (Li-Wong), a multiplicative model (model-based […]
  • Median Polish Algorithm
    Introduction to Computational Chemical Biology. Dr. Raimo Franke, [email protected]. Lecture Series Chemical Biology, http://www.raimofranke.de/, Leibniz-Universität Hannover. Where can you find me? Dr. Raimo Franke, Department of Chemical Biology (C-Building), Helmholtz Centre for Infection Research, Inhoffenstr. 7, 38124 Braunschweig. Tel.: 0531-6181-3415. Email: [email protected]. Download of lecture slides: www.raimofranke.de. Research topics: metabolomics; biostatistical data analysis of omics experiments (NGS (genome, RNAseq), arrays, metabolomics, profiling); phenotypic profiling of bioactive compounds with impedance measurements (xCELLigence); peptide synthesis; ABPP. I am looking for BSc, MSc and PhD students, feel free to contact me! Outline: primer on statistical methods; multi-parameter phenotypic profiling: using cellular effects to characterize bioactive compounds; network pharmacology, modeling of signal transduction networks; paper presentations: Proc Natl Acad Sci U S A. 2013 Feb 5;110(6):2336-41. doi: 10.1073/pnas.1218524110. Epub 2013 Jan 22. Antimicrobial drug resistance affects broad changes in metabolomic phenotype in addition to secondary metabolism. Derewacz DK, Goodwin CR, McNees CR, McLean JA, Bachmann BO. Chemical biology arsenal of methods: in silico target predictions (pharmacophore-based target prediction); target interactions (biochemical assays: SPR, ITC, X-ray, NMR); correlation signals (expression profiling, proteomics, metabonomics, image analysis, impedance, NCI60 panel); chemical genetics and chemoproteomics (protein microarrays, yeast-3-hybrid, phage display, ABPP, affinity pulldown, chemical probes). Computational methods in chemical biology: in silico target prediction: use molecular descriptors for in silico screening, molecular docking, etc. (typical applications of cheminformatics); biochemical assays: SPR: curve fitting to determine k_ass and k_diss; X-ray: model fitting; NMR: chemical shift prediction, etc.
  • Introductory Methods for Area Data Analysis
    Analysis Methods for Area Data. Introductory methods for area data: values are associated with a fixed set of areal units covering the study region, and we assume a value has been observed for all areas. Bailey and Gatrell, Chapter 7. Lecture 17, November 11, 2003. The areal units may take the form of a regular lattice or irregular units. Objectives: not prediction (there are typically no unobserved values; the attribute is exhaustively measured), but to model spatial patterns in the values associated with fixed areas and determine possible explanations for such patterns. For continuous data models we were more concerned with explanation of patterns in terms of locations; here, explanation is in terms of covariates measured over the same units as well as in terms of the spatial arrangement of the areal units. Example: relationship of disease rates and socio-economic variables. Instead of a random variable Y indexed by locations, {Y(s), s ∈ R}, we have a random variable Y indexed by a fixed set of areal units, {Y(A_i), A_i ∈ R}, where the areal units cover the study region: A_1 ∪ … ∪ A_n = R. Conceive of this sample as a sample from a super-population: all realizations of the process over these areas that might ever occur. Explore these attribute values in the context of global trend, or first-order variation, and second-order variation: first-order variation as variation in the mean μ_i of Y_i; second-order variation as variation in Cov(Y_i, Y_j).
  • Generalised Median Polish Based on Additive Generators
    Generalised Median Polish Based on Additive Generators. Balasubramaniam Jayaram(1) and Frank Klawonn(2,3). Abstract: Contingency tables often arise from collecting patient data and from lab experiments. A typical question to be answered based on a contingency table is whether the rows or the columns show a significant difference. Median polish (MP) is fast becoming a preferred way to analyse contingency tables based on a simple additive model. Often, the data need to be transformed before applying the MP algorithm to get better results. A common transformation is the logarithm, which essentially changes the underlying model to a multiplicative model. In this work, we propose a novel way of applying the MP algorithm with generalised transformations that still gives reasonable results. Our approach to the underlying model leads us to transformations that are similar to additive generators of some fuzzy logic connectives. In fact, we illustrate how to choose the best transformations that give meaningful results by proposing some modified additive generators of uninorms. In this way, MP is generalised from the simple additive model to more general nonlinear connectives. The recently proposed way of identifying a suitable power transformation based on IQRoQ plots [1] also plays a central role in this work. 1 Introduction: Contingency tables often arise from collecting patient data and from lab experiments. The rows and columns of a contingency table correspond to two […] (1) Department of Mathematics, Indian Institute of Technology Hyderabad, Yeddumailaram 502 205, India; (2) Department of Computer Science, Ostfalia University of Applied Sciences, Salzdahlumer Str.
  • A New Statistical Model for Analyzing Rating Scale Data Pertaining to Word Meaning
    Psychological Research, DOI 10.1007/s00426-017-0864-8. ORIGINAL ARTICLE: A new statistical model for analyzing rating scale data pertaining to word meaning. Felipe Munoz-Rubke(1,2,3), Karen Kafadar(4), Karin H. James(1,2,5). Received: 28 October 2016 / Accepted: 31 March 2017. © Springer-Verlag Berlin Heidelberg 2017. Abstract: The concrete-abstract categorization scheme has guided several research programs. A popular way to classify words into one of these categories is to calculate a word's mean value in a Concreteness or Imageability rating scale. However, this procedure has several limitations. For instance, results can be highly distorted by outliers, ascribe differences among words when none may exist, and neglect trends in participants. We suggest using an alternative procedure to analyze rating scale data called median polish analysis (MPA). MPA is tolerant to outliers and accounts for information in multiple dimensions, including trends among participants. MPA performance can be […] Action, Imageability, and Multisensory. We analyzed the data using both two-way and three-way MPA models. We also calculated 95% CIs for the two-way models. Categorizing words with the Action scale revealed a continuum of word meaning for both nouns and verbs. The remaining scales produced dichotomous or stratified results for nouns, and continuous results for verbs. While the sample mean analysis generated continua irrespective of the rating scale, MPA differentiated among dichotomies and continua. We conclude that MPA allowed us to better classify words by discarding outliers, focusing on main trends, and considering the differences in rating criteria among participants.
  • Statistical Analysis of Some Gas Chromatography Measurements
    JOURNAL OF RESEARCH of the National Bureau of Standards, Vol. 88, No. x, January–February 1983. Statistical Analysis of Some Gas Chromatography Measurements. Karen Kafadar and Keith R. Eberhardt, National Bureau of Standards, Washington, DC 20234. November 3, 1982. The National Bureau of Standards has certified Standard Reference Materials (SRMs) for the concentration of polychlorinated biphenyls (PCBs) in hydrocarbon matrices (transformer and motor oils). The certification of these SRMs involved measurements of extremely small concentrations of PCBs made by gas chromatography. Despite the high accuracy of the measurement technique, the correlated data cannot be analyzed in a routine independent manner. A linear model for the measurements is described; its complexity encourages the use of simpler exploratory methods which reveal unexpected features and point the way towards obtaining valid statistical summaries of the data. Key words: exploratory analysis; linear models; median polish; robust estimates; statistical methods; uncertainty statement. 1. Introduction. Exploratory methods in data analysis are used in many fields of application. These methods are typically used on messy data because they are robust in nature; that is, they are insensitive to unexpected departures from an assumed model (e.g. outliers, non-normality). However, they can also provide valuable insight even for so-called "clean" data. This paper discusses an example of a measurement process for which an exploratory approach can reveal particularly interesting or unexpected trends even in […] 2. The Data. 2.1 Description of the SRM for PCBs in Oils. The National Bureau of Standards has certified Standard Reference Materials (SRMs) for the concentration of polychlorinated biphenyls (PCBs) in hydrocarbon matrices, namely, transformer and motor oils. PCBs are toxic contaminants; their chemical and thermal stability makes them commercially useful but also leads to their persistence in the environment.
  • Using Approximations to Scale Exploratory Data Analysis in Datacubes
    Using Approximations to Scale Exploratory Data Analysis in Datacubes. Daniel Barbará, Xintao Wu. George Mason University, Information and Software Engineering Department, Fairfax, VA 22303. dbarbara,[email protected]. March 4, 1999. Paper 188. Abstract: Exploratory data analysis is a widely used technique to determine which factors have the most influence on data values in a multi-way table, or which cells in the table can be considered anomalous with respect to the other cells. In particular, median polish is a simple, yet robust method to perform exploratory data analysis. Median polish is resistant to holes in the table (cells that have no values), but it may require a lot of iterations through the data. This factor makes it difficult to apply median polish to large multidimensional tables, since the I/O requirements may be prohibitive. This paper describes a technique that uses median polish over an approximation of a datacube, easing the burden of I/O. The results obtained are tested for quality, using a variety of measures. The technique scales to large datacubes and proves to give a good approximation of the results that would have been obtained by median polish on the original data. 1 Introduction. Exploratory data analysis (EDA) is a technique [13, 14, 22] that uncovers structure in data. EDA is performed without any a-priori hypothesis in mind: rather, it searches for "exceptions" in the data values relative to what those values would have been if anticipated by a statistical model.
  • Universiti Putra Malaysia: Median Polish Techniques for Analysing Paired Data
    UNIVERSITI PUTRA MALAYSIA. MEDIAN POLISH TECHNIQUES FOR ANALYSING PAIRED DATA. By IDRIS AJOGE. FS 2017 3. Thesis Submitted to the School of Graduate Studies, Universiti Putra Malaysia, in Fulfilment of the Requirements for the Degree of Master of Science. January 2017. All material contained within the thesis, including without limitation text, logos, icons, photographs and all other artwork, is copyright material of Universiti Putra Malaysia unless otherwise stated. Use may be made of any material contained within the thesis for non-commercial purposes from the copyright holder. Commercial use of material may only be made with the express, prior, written permission of Universiti Putra Malaysia. Copyright © Universiti Putra Malaysia. DEDICATIONS: I would like to dedicate this dissertation work to: Almighty Allah, the benevolent and the merciful; my respectful parents, who have taught me a lot about persistency in life; my beloved wife, for all her contribution, patience and understanding throughout my studies (she incredibly supported me and made it all possible for me); my children, whose patience, endurance and love have always been my greatest inspiration; my late younger brother Aminu S. Ajoge, whom I lost during the course of my study. Abstract of thesis presented to the Senate of Universiti Putra Malaysia in fulfilment of the requirement for the degree of Master of Science. MEDIAN POLISH TECHNIQUES FOR ANALYSING PAIRED DATA. By IDRIS AJOGE, January 2017. Chairman: Mohd Bakri Adam, PhD. Faculty: Science. Median polish is used as a data analysis technique for examining the significance of various factors in single or multi-way models.
  • Using Weighted Regression Model for Estimating Cohort Effect in Age-Period Contingency Table Data
    www.oncotarget.com. Oncotarget, 2018, Vol. 9, (No. 28), pp: 19826-19835. Research Paper: Using weighted regression model for estimating cohort effect in age-period contingency table data. I-Shiang Tzeng(1,2,3), Chau Yee Ng(4,5), Jau-Yuan Chen(6,7), Li-Shya Chen(8) and Chin-Chieh Wu(9). (1) Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan; (2) Department of Statistics, National Taipei University, Taipei, Taiwan; (3) Bachelor's Program of Financial Engineering and Actuarial Science, Feng Chia University, Taichung City, Taiwan; (4) Department of Dermatology, Drug Hypersensitivity Clinical and Research Center, Chang Gung Memorial Hospital, Taipei, Taiwan; (5) School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan; (6) Department of Family Medicine, Chang-Gung Memorial Hospital, Linkou Branch, Taoyuan City, Taiwan; (7) College of Medicine, Chang Gung University, Taoyuan, Taiwan; (8) Department of Statistics, National Chengchi University, Taipei, Taiwan; (9) Department of Emergency Medicine, Chang Gung Memorial Hospital, Keelung, Taiwan. Correspondence to: I-Shiang Tzeng, email: [email protected]. Keywords: age-period-cohort; multiphase method; prediction; hepatocellular carcinoma. Received: May 18, 2017. Accepted: February 22, 2018. Published: April 13, 2018. Copyright: Tzeng et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 (CC BY 3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. ABSTRACT. Background: Recently, the multiphase method was proposed to estimate cohort effects after removing the effects of age and period in age-period contingency table data. Hepatocellular carcinoma (HCC) is the most common primary malignancy of the liver and is strongly associated with cirrhosis, due to both alcohol and viral etiologies.
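Several of the entries above (the IOSR median polish paper and the generalised median polish chapter, for example) note that data are often log-transformed before polishing, which turns the additive fit into a multiplicative one. A minimal sketch of that idea, assuming strictly positive data; this is our own illustration, not any of the listed authors' code:

```python
import numpy as np

def polish_log(table, n_sweeps=10):
    """Median polish on the log scale: fits the multiplicative model
    cell ~ overall * row * col and returns the multiplicative residuals."""
    data = np.asarray(table, dtype=float)
    assert np.all(data > 0), "the log transform requires positive data"
    resid = np.log(data)
    for _ in range(n_sweeps):
        resid -= np.median(resid, axis=1, keepdims=True)  # sweep row medians
        resid -= np.median(resid, axis=0, keepdims=True)  # sweep column medians
    # exp() maps additive log-residuals back to ratios;
    # a cell value of 1.0 means an exact multiplicative fit there.
    return np.exp(resid)
```

For a table that is exactly an outer product of positive row and column factors, the returned ratios are all 1; cells that deviate from 1 are the ones the multiplicative model cannot explain.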