Data Processing

Total pages: 16 · File type: PDF · Size: 1020 KB

Data Processing with Stata 14.1 Cheat Sheet
For more info see Stata's reference manual (stata.com)

BASIC SYNTAX
All Stata commands share the same format (syntax):

  [by varlist1:] command [varlist2] [=exp] [if exp] [in range] [weight] [using filename] [, options]

  by varlist1:     apply the command to each unique combination of the variables in varlist1
  command          what you are going to do
  varlist2         which columns to apply the command to
  =exp             save the output as a new variable
  if exp           only apply the command if something is true
  in range         apply the command to specific rows
  weight           apply weights
  using filename   pull data from a file (if not loaded)
  , options        special options for the command

In this example, we want a detailed summary with stats like kurtosis, plus mean and median:
  bysort rep78 : summarize price if foreign == 0 & price <= 9000, detail

To find out more about any command – like what options it takes – type help command.

USEFUL SHORTCUTS
  F2           describe data
  Ctrl + 9     open a new .do file
  Ctrl + 8     open the data editor
  Ctrl + D     highlight text in a .do file, then Ctrl + D executes it in the command line
  clear        delete data in memory

At the command prompt:
  PgUp / PgDn  scroll through previous commands
  Tab          autocompletes a variable name after typing part of it
  cls          clear the console (where results are displayed)

BASIC DATA OPERATIONS
Arithmetic:
  +  add (numbers) / combine (strings)
  -  subtract
  *  multiply
  /  divide
  ^  raise to a power
Logic:
  &         and
  |         or
  ! or ~    not
  ==        equal
  != or ~=  not equal
  <   less than                >   greater than
  <=  less than or equal to    >=  greater than or equal to
Note: == tests if something is equal; = assigns a value to a variable.

Example (auto data):
  make           foreign   price
  Chevy Colt     0          3,984
  Buick Riviera  0         10,372
  Honda Civic    1          4,499
  Volvo 260      1         11,995

  if foreign != 1 & price >= 10000   keeps Buick Riviera only
  if foreign != 1 | price >= 10000   keeps Chevy Colt, Buick Riviera, and Volvo 260

CHANGE DATA TYPES
Stata has 6 data types, and data can also be missing:
  numbers: byte, int, long, float, double
  words:   string
  no data: missing
To convert between numbers and strings (the value after each command is the result when foreign == 1):
  gen foreignString = string(foreign)           1 -> "1"
  tostring foreign, gen(foreignString)          1 -> "1"
  decode foreign, gen(foreignString)            1 -> "foreign" (uses the value label)
  gen foreignNumeric = real(foreignString)      "1" -> 1
  destring foreignString, gen(foreignNumeric)   "1" -> 1
  encode foreignString, gen(foreignNumeric)     "foreign" -> 1 (creates a value label)
  recast double mpg                             generic way to convert between types

SET UP
  pwd                                  print current (working) directory
  cd "C:\Program Files (x86)\Stata13"  change working directory
  dir                                  display filenames in working directory
  fs *.dta                             list all Stata data in working directory
  capture log close                    close any open log; "capture" (or "cap") suppresses the error if no log is open
  log using "myDoFile.txt", replace    create a new log file to record your work and results

EXPLORE DATA
View data organization:
  describe make price    display variable type, format, and any value/variable labels
  codebook make price    overview of variable type, stats, number of missing/unique values
  count                  number of rows (observations)
  count if price > 5000  count can be combined with logic
  ds, has(type string)   search for variables by type
  lookfor "in."          search for text in variable names or variable labels
  inspect mpg            show a mini histogram plus the number of missing or zero observations
  isid mpg               check whether mpg uniquely identifies the observations

See data distribution:
  histogram mpg, frequency              plot a histogram of the distribution of a variable
  tabulate rep78, mi gen(repairRecord)  one-way table: number of rows with each value of rep78; mi includes missing values; gen() creates a binary variable for every value of rep78 in a new variable, repairRecord
  tabulate rep78 foreign, mi            two-way table: cross-tabulate the number of observations for each combination of rep78 and foreign
  bysort rep78: tabulate foreign        for each value of rep78, apply the command tabulate foreign

Summarize data:
  summarize make price mpg                                  print summary statistics (mean, stdev, min, max)
  tabstat price weight mpg, by(foreign) stat(mean sd n)     create a compact table of summary statistics
  table foreign, contents(mean price sd price) f(%9.2fc) row    create a flexible table of summary statistics; f(%9.2fc) formats the numbers
  collapse (mean) price (max) mpg, by(foreign)              calculate mean price & max mpg by car type (foreign); replaces the data in memory

Install packages (packages contain extra commands that expand Stata's toolkit):
  search mdesc       find the package mdesc to install
  ssc install mdesc  install the package mdesc; needs to be done only once

IMPORT DATA
  sysuse auto, clear              load system data (the auto dataset, used for many of these examples)
  use "yourStataFile.dta", clear  load a dataset from the current directory
  import excel "yourSpreadsheet.xlsx", sheet("Sheet1") cellrange(A2:H11) firstrow    import an Excel spreadsheet
  import delimited "yourFile.csv", rowrange(2:11) colrange(1:8) varnames(2)          import a .csv file
  webuse set "https://github.com/GeoCenter/StataTraining/raw/master/Day2/Data"
  webuse "wb_indicators_long"     set a web-based directory and load data from the web

Browse observations within the data:
  browse  (or Ctrl + 8)   open the data editor
  list make price if price > 10000 & !missing(price)    list the make and price for observations with price > $10,000
  clist ...               same as list, in compact form
  display price[4]        display the 4th observation of price; only works on single values
Missing values are treated as the largest positive number. To exclude them, use the !missing(varname) syntax, as above.

CREATE NEW VARIABLES
  generate mpgSq = mpg^2            create a new variable
  gen byte lowPr = price < 4000     generate byte is also useful for creating binary variables based on a condition
  generate id = _n                  _n creates a running index of observations
  bysort rep78: gen repairIdx = _n  ... or of the observations within each group
  generate totRows = _N             _N holds the total count of observations
  bysort rep78: gen repairTot = _N  ... or the total per group
  gsort price mpg                   sort in order, first by price then miles per gallon (ascending)
  gsort -price -mpg                 same, descending
  duplicates report                 find all duplicate values in each variable
  assert price != .                 verify the truth of a claim
  pctile mpgQuartile = mpg, nq(4)   create quartiles of the mpg data
  levelsof rep78                    display the unique values of rep78
  egen meanPrice = mean(price), by(foreign)    calculate mean price for each group in foreign (see help egen for more options)

Tim Essam ([email protected]) • Laura Hughes ([email protected])
inspired by RStudio's awesome Cheat Sheets (rstudio.com/resources/cheatsheets)
geocenter.github.io/StataTraining • updated January 2016 • follow us @StataRGIS and @flaneuseks
Disclaimer: we are not affiliated with Stata. But we like it.
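A minimal do-file sketch pulling several of the commands above together on the auto dataset the sheet uses throughout; the variable names lowPr, repairIdx, and meanPrice follow the sheet's own examples.

```stata
* Sketch: combining explore / create-variable commands from this sheet
sysuse auto, clear                              // load the system auto data
gen byte lowPr = price < 4000                   // binary flag: 1 if price < $4,000, else 0
bysort rep78: gen repairIdx = _n                // running index within each repair-record group
egen meanPrice = mean(price), by(foreign)       // group mean price by car type
tabstat price mpg, by(foreign) stat(mean sd n)  // compact summary table by car type
```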
CC BY 4.0

Data Transformation with Stata 14.1 Cheat Sheet
For more info see Stata's reference manual (stata.com)

Load the demo dataset:
  webuse set https://github.com/GeoCenter/StataTraining/raw/master/Day2/Data
  webuse "coffeeMaize.dta"

RESHAPE DATA
Tidy datasets have each observation in its own row and each variable in its own column. When datasets are tidy, they have a consistent, standard format that is easier to manipulate and analyze.

Melt data (wide -> long):
  reshape long coffee@ maize@, i(country) j(year)    convert a wide dataset to long
  reshape the variables starting with coffee and maize; i(country) is the unique id variable (key); j(year) creates a new variable that captures the info in the column names

  WIDE:        country | coffee2011 coffee2012 maize2011 maize2012   (one row per country: Malawi, Rwanda, Uganda)
  LONG (TIDY): country year | coffee maize                           (one row per country-year: Malawi 2011, Malawi 2012, ..., Uganda 2012)

Cast data (long -> wide):
  reshape wide coffee maize, i(country) j(year)    convert a long dataset to wide: creates new variables named with the year (the key) added to the column name — coffee2011, maize2012, ...
  xpose, clear varname    transpose rows and columns of the data, clearing the data in memory and saving the old column names as a new variable called "_varname"

SELECT PARTS OF DATA (SUBSETTING)
Select specific columns:
  drop make          remove the 'make' variable
  keep make price    opposite of drop; keep only the variables 'make' and 'price'
Filter specific rows:
  drop if mpg < 20   drop observations based on a condition
  drop in 1/4        ... or drop rows 1–4
  keep in 1/30       opposite of drop; keep only rows 1–30
  keep if inrange(price, 5000, 10000)    keep values of price between $5,000 – $10,000 (inclusive)
  keep if inlist(make, "Honda Accord", "Honda Civic", "Subaru")    keep the specified values of make
  sample 25          sample 25% of the observations in the dataset (use the set seed # command for reproducible sampling)

MANIPULATE STRINGS
Get string properties:
  display length("This string has 29 characters")    return the length of the string
  charlist make      display the set of unique characters within a string (user-defined package)
  display strpos("Stata", "a")    return the position in "Stata" where "a" is first found
Find matching strings:
  display strmatch("123.89", "1??.?9")    return true (1) or false (0) if the string matches the pattern
  display substr("Stata", 3, 5)           return the string located between characters 3–5
  list make if regexm(make, "[0-9]")      list observations where make matches the regular expression (here, records that contain a number)
  list if regexm(make, "(Cad.|Chev.|Datsun)")    return all observations where make contains "Cad.", "Chev." or "Datsun"
  list if inlist(word(make, 1), "Cad.", "Chev.", "Datsun")    return all observations where the first word of the make variable matches the given list
Transform strings:
  display regexr("My string", "My", "Your")    replace string1 ("My") with string2 ("Your")

REPLACE PARTS OF DATA
Change column names:
  rename (rep78 foreign) (repairRecord

COMBINE DATA
Adding (appending) new data:
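A minimal sketch of the reshape round trip on the coffeeMaize demo data the sheet loads; it assumes the wide file holds variables coffee2011, coffee2012, maize2011, maize2012 keyed by country, as in the sheet's diagram.

```stata
* Sketch: wide -> long -> wide round trip on the coffeeMaize data
webuse set https://github.com/GeoCenter/StataTraining/raw/master/Day2/Data
webuse "coffeeMaize.dta", clear
reshape long coffee maize, i(country) j(year)   // melt: one row per country-year
list in 1/4                                     // inspect the long (tidy) layout
reshape wide coffee maize, i(country) j(year)   // cast back: coffee2011, maize2012, ...
```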