Introductory Methods for Area Data Analysis

Bailey and Gatrell, Chapter 7
Lecture 17, November 11, 2003

Analysis Methods for Area Data

Values are associated with a fixed set of areal units covering the study region, and we assume a value has been observed for every area. The areal units may take the form of a regular lattice or of irregular units.

Objectives

For continuous data models we were concerned mainly with explaining patterns in terms of locations. With area data the goal is not prediction: there are typically no unobserved values, since the attribute is exhaustively measured. Instead we model spatial patterns in the values associated with the fixed areas and seek possible explanations for such patterns, in terms of covariates measured over the same units as well as the spatial arrangement of the areal units themselves. Example: the relationship of disease rates and socio-economic variables.

For continuous data we worked with a random variable Y indexed by locations, {Y(s), s ∈ R}. For area data, Y is instead indexed by a fixed set of areal units, {Y(A_i), A_i ⊂ R}, where the units cover the study region: A_1 ∪ … ∪ A_n = R. We explore the attribute values in terms of:

- First-order variation (the global trend): variation in the mean, m_i, of Y_i.
- Second-order variation: variation in COV(Y_i, Y_j).

Conceive of this sample as a sample from a super-population: all realizations of the process over these areas that might ever occur. The sample observations are one possible realization.

Visualizing Area Data

Proportional symbols can be superimposed over the areal units, with each symbol sized in proportion to the attribute value of its area. Alternatively, a choropleth map shades each area according to the attribute value associated with the areal unit:

- The attribute of interest is scaled to a set of discrete ranges or classes.
- Each zone is shaded or colored according to its attribute value.

Class intervals compress attribute ranges into a relatively small number of discrete values.
Number of Classes

The number of classes should be a function of the range of variability in the data, and is limited by human perception to roughly 8 classes. A rule of thumb: number of classes = 1 + 3.3 log10(n). For example:

- 5 observations: 3 classes
- 20 observations: 6 classes
- 40 observations: 7 classes
- 200 observations: 8 classes
- 1500 observations: 11 classes

The visual outcome depends heavily on the choice of class intervals, color, and shading. Using discrete classes to assign colors can give false impressions of both uniformity (within units) and discontinuity (between units). The same class interval schemes used for continuous data apply:

- Equal intervals: for fairly uniformly distributed data.
- Trimmed equal intervals: handles a few outliers from an otherwise uniform distribution.
- Percentiles (quantiles): rank the data and form x classes, each containing a fraction 1/x of the observations.
- Quartile map: 4 classes, covering the lowest quartile, second quartile, third quartile, and highest quartile.
- Standard deviates: divide the data into units of standard deviation around the mean.

The slides compare maps of the same data classed by equal intervals, quantiles, natural breaks, and standard deviates.

Problems with Choropleth Maps: Arbitrariness of the Areal Units

What was the origin of the areal unit boundaries?
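These classing rules can be sketched in a few lines of Python. The function names and the perception cap parameter are illustrative, not from the lecture:

```python
import math
import statistics

def suggested_class_count(n, cap=8):
    # Rule of thumb 1 + 3.3*log10(n), capped at ~8 for human perception.
    return min(cap, max(1, round(1 + 3.3 * math.log10(n))))

def equal_interval_breaks(values, k):
    # k classes of equal width spanning the data range.
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [lo + i * width for i in range(1, k)]

def quantile_breaks(values, k):
    # k classes, each holding roughly 1/k of the ranked observations.
    s = sorted(values)
    return [s[i * len(s) // k] for i in range(1, k)]

def stdev_breaks(values):
    # Class limits at whole standard deviations around the mean.
    m, sd = statistics.mean(values), statistics.stdev(values)
    return [m + j * sd for j in (-2, -1, 0, 1, 2)]
```

Note that the slide's example table applies the rule of thumb with loose rounding, and its 1500-observation figure exceeds the perceptual cap of 8 classes mentioned alongside it; the cap here enforces that limit.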
Most enumeration units were designed for convenience or efficiency rather than to reflect the underlying spatial pattern. The resulting sensitivity of results to the choice of zones is typically referred to as the modifiable areal unit problem (MAUP).

Modifiable Areal Unit Problem

Different zones can produce virtually any numbers from the same underlying distribution. In the illustration from Jelinski and Wu (1996), the original individual-level data (individuals living in households) have mean 3.75 and variance 2.6, yet different aggregations of the same data yield means of 3.75, 3.75, 3.75, and 3.17, with variances of 0.50, 0.00, 0.93, and 2.11.

Problems with choropleth maps:

- Large areas tend to dominate the information content of the map.
- Area shading of attribute values means two visual variables apply to the data: color and area.
- Absolute numbers should not be mapped with a choropleth map. With absolute counts there is no relation of the attribute variable to the absolute and/or relative size of the spatial units, so choropleth maps are not appropriate for counts unless the data are corrected for area, population, or some other measure that factors out the size of the enumeration districts.

Source: Jelinski, D. E. and Wu, J. (1996). "The modifiable areal unit problem and implications for landscape ecology". Landscape Ecology 11(3), pp. 129-140.

The slides show a map of murder rates per 100,000 for the US and Canada, suggesting two remedies:

- The problem of class limits: use continuous shading.
- The area dependence problem: correct for population or area, or transform the areas themselves to be proportionate to the attribute value.

Cartograms

A cartogram attempts to correct for area or population by drawing each region with size proportional to the attribute value. There are two distinct and conflicting goals in the construction of cartograms: adjusting region sizes and retaining region shapes. Relaxing the contiguity and shape constraints gives non-contiguous cartograms; for example, a non-contiguous population cartogram of the census zones of Leicestershire created using Dorling's "circle growing" algorithm, in which both the circle areas and the shades are proportionate to the zone populations.
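The MAUP discussed above can be sketched numerically. This is a minimal illustration with made-up values, not the Jelinski and Wu figures: aggregating the same individual values by rows or by columns of a grid preserves the mean but changes the variance substantially.

```python
from statistics import mean, pvariance

# 16 individual values on a 4x4 grid (invented for illustration).
grid = [
    [2, 3, 4, 5],
    [3, 4, 5, 6],
    [2, 3, 4, 5],
    [3, 4, 5, 6],
]
values = [v for row in grid for v in row]

# Zoning A aggregates by rows; zoning B aggregates by columns.
rows = [mean(r) for r in grid]
cols = [mean(c) for c in zip(*grid)]

print(mean(values), pvariance(values))  # individual level: mean 4, variance 1.5
print(mean(rows), pvariance(rows))      # zoning A: mean 4, variance 0.25
print(mean(cols), pvariance(cols))      # zoning B: mean 4, variance 1.25
```

Both zonings report the same mean, yet the apparent variability differs by a factor of five depending on how the boundaries happen to cut the data.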
Comparison of a traditional choropleth map with a cartogram showing percent of the male population of working age (1891). Cartograms represent areas in relation to their population size, so patterns are displayed in relation to the number of people involved instead of the size of the area involved. The traditional map gives the impression that the majority of areas have approximately 56% (yellow, brown) of the male population of working age; the cartogram gives a different overall view, with the majority of areas having a male population of working age of over 60% (red and purple on the map).

Exploring Area Data

As with point and continuous data, the essentials are approaches for examining mean values over the areal units and techniques for exploring spatial dependence.

Proximity Measures

For irregular areal units we first need measures of proximity. Common binary weighting schemes include:

- w_ij = 1 if the centroid of A_j is one of the k nearest centroids to that of A_i, and 0 otherwise.
- w_ij = 1 if the centroid of A_j is within distance d of the centroid of A_i, and 0 otherwise.
- w_ij = 1 if A_j shares a boundary with A_i, and 0 otherwise.
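The binary weighting schemes can be sketched as follows, for a hypothetical five-area configuration; the centroid coordinates, the shared-boundary list, and the helper names are all invented for illustration:

```python
import math

# Hypothetical area centroids and shared-boundary pairs (illustrative only).
centroids = {1: (0, 0), 2: (1, 0), 3: (0, 1), 4: (1, 1), 5: (3, 0)}
shared_boundary = {(1, 2), (1, 3), (2, 4), (3, 4), (2, 5)}

def dist(a, b):
    (xa, ya), (xb, yb) = centroids[a], centroids[b]
    return math.hypot(xa - xb, ya - yb)

def w_knn(i, j, k=2):
    # 1 if the centroid of A_j is among the k nearest centroids to A_i.
    others = sorted((a for a in centroids if a != i), key=lambda a: dist(i, a))
    return 1 if j in others[:k] else 0

def w_within(i, j, d=1.5):
    # 1 if the centroid of A_j lies within distance d of A_i's centroid.
    return 1 if i != j and dist(i, j) <= d else 0

def w_adjacent(i, j):
    # 1 if A_i and A_j share a boundary.
    return 1 if (i, j) in shared_boundary or (j, i) in shared_boundary else 0
```

Adjacency is symmetric by construction, but the k-nearest scheme need not be: in this configuration area 2 is among area 5's two nearest neighbours, while area 5 is not among area 2's.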
A distance-decay scheme is also used: w_ij = d_ij^(-γ) if the intercentroid distance d_ij is less than some threshold δ, and 0 otherwise.

The spatial proximity matrix W, with elements w_ij, represents a measure of the proximity of A_i to A_j. The slides illustrate W for a seven-area configuration (A_1 through A_7) under two of these definitions: simple adjacency, which gives a symmetric matrix, and k nearest centroids, which need not be symmetric.

Proximity measures can also be specified at different orders, the spatial lags, represented by different proximity matrices for different lags: W_1 for the first spatial lag, W_2 for the second, and so on.

Spatial Moving Averages

One way to estimate the mean is by the average of the values in "neighboring" areas, with the neighbors represented by the proximity matrix W:

m̂_i = (Σ_j w_ij y_j) / (Σ_j w_ij)

Median Polish

If the areal units form a regular grid, median polish can be used to estimate the trend. The median is more resistant to outliers in the data than the mean. Each value is decomposed as

y_ij = m + m_i + m_j + e_ij

where m is a fixed overall effect, m_i and m_j represent fixed row and column effects, and e_ij is a random error term. The fitted trend value is m_ij = m + m_i + m_j.

Median Polish Algorithm

The algorithm repeatedly sweeps medians out of the rows and columns (steps 1-4 below) until the row and column medians no longer change.

1. Take the median of each row and record the value to the side of the row; subtract the row median from each value in that row.
2. Compute the median of the row medians and record the value as the overall effect; subtract the overall effect from each of the row medians.

3. Take the median of each column and record the value beneath the column; subtract the column median from each value in that particular column.

4. Compute the median of the column medians and add this value to the current overall effect, subtracting it from each of the column medians.

(The slides step through these sweeps on a small numerical table, recording the row medians, column medians, and overall effect after each pass.)
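The sweeps above can be written as a small routine. This is a minimal sketch of Tukey's median polish; the variable names are mine, not the slides':

```python
from statistics import median

def median_polish(table, max_iter=10):
    # Decompose y_ij = m + row_i + col_j + residual_ij by alternately
    # sweeping medians out of the rows and the columns.
    resid = [row[:] for row in table]
    nrow, ncol = len(resid), len(resid[0])
    overall, row_eff, col_eff = 0.0, [0.0] * nrow, [0.0] * ncol
    for _ in range(max_iter):
        changed = 0.0
        # Step 1: sweep the median out of each row.
        for i in range(nrow):
            med = median(resid[i])
            changed += abs(med)
            row_eff[i] += med
            resid[i] = [v - med for v in resid[i]]
        # Step 2: fold the median of the row effects into the overall effect.
        med_row = median(row_eff)
        changed += abs(med_row)
        overall += med_row
        row_eff = [r - med_row for r in row_eff]
        # Step 3: sweep the median out of each column.
        for j in range(ncol):
            med = median(resid[i][j] for i in range(nrow))
            changed += abs(med)
            col_eff[j] += med
            for i in range(nrow):
                resid[i][j] -= med
        # Step 4: fold the median of the column effects into the overall effect.
        med_col = median(col_eff)
        changed += abs(med_col)
        overall += med_col
        col_eff = [c - med_col for c in col_eff]
        # Step 5: stop once the swept medians no longer change anything.
        if changed == 0.0:
            break
    return overall, row_eff, col_eff, resid
```

For an exactly additive table such as [[10, 11, 13], [12, 13, 15]], the polish recovers an overall effect of 12, row effects [-1, 1], and column effects [-1, 0, 2], with all residuals zero.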