Visualizing Multivariate Data

CCA 2017 Workshop on Process Data Analytics, S. Africa, December 2017

Visual Analytics

Visualizing multivariate data:
- High density time-series plots
- Scatterplot matrices
- Parallel coordinate plots
- Temporal and spectral correlation plots
- Box plots
- Wavelets
- Radar and/or polar plots
- Others…

Demonstrations of DVAtool and Matlab examples

Visualizing Multivariate Data

Many statistical analyses involve only two variables: a predictor variable and a response variable. Such data are easy to visualize using 2D scatter plots, bivariate histograms, boxplots, etc. It is also possible to visualize trivariate data with 3D scatter plots, or with 2D scatter plots in which a third variable is encoded by, for example, color. However, many datasets involve a larger number of variables, making direct visualization more difficult. This demo explores some of the ways to visualize high-dimensional data in MATLAB®, using the Statistics Toolbox™.

Contents
- High Density plots
- Scatterplot Matrices
- Parallel Coordinates Plots

In this demo, we'll use the carbig dataset, which contains various measured variables for about 400 automobiles from the 1970s and 1980s. We'll illustrate multivariate visualization using the values for fuel efficiency (in miles per gallon, MPG), acceleration (time from 0-60 MPH in sec), engine displacement (in cubic inches), weight, and horsepower. We'll use the number of cylinders to group observations.

>>load carbig
>>X = [MPG,Acceleration,Displacement,Weight,Horsepower];
>>varNames = {'MPG'; 'Acceleration'; 'Displacement'; 'Weight'; 'Horsepower'};

A simple graphical plot can be made as follows:

>>gplotmatrix(X);

But this plot is not as revealing or informative as the next plot, which uses the same ‘gplotmatrix’ function.

Scatterplot Matrices

Viewing slices through lower-dimensional subspaces is one way to partially work around the limitation of two or three dimensions. For example, we can use the gplotmatrix function to display an array of all the bivariate scatterplots between our five variables, along with a univariate histogram for each variable.

>>figure
>>gplotmatrix(X,[],Cylinders,['c' 'b' 'm' 'g' 'r'],[],[],false);
>>text([.08 .24 .43 .66 .83], repmat(-.1,1,5), varNames, 'FontSize',11);
>>text(repmat(-.12,1,5), [.86 .62 .41 .25 .02], varNames, 'FontSize',11, 'Rotation',90);

The points in each scatterplot are color-coded by the number of cylinders: blue for 4 cylinders, green for 6, and red for 8. There is also a handful of 5-cylinder cars, and rotary-engined cars are listed as having 3 cylinders. This array of plots makes it easy to pick out patterns in the relationships between pairs of variables. However, there may be important patterns in higher dimensions, and those are not easy to recognize in this plot.
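As a quick numerical complement to the scatterplot matrix, the pairwise linear correlations between the five variables can also be tabulated. This is a minimal sketch using base MATLAB's corrcoef; the 'Rows','pairwise' option is used because some of the carbig variables (for example MPG) contain missing values recorded as NaN.

>>R = corrcoef(X, 'Rows','pairwise');                              % 5-by-5 matrix of pairwise correlations
>>array2table(R, 'VariableNames',varNames, 'RowNames',varNames)    % label rows and columns for readability

These numbers quantify only the linear, pairwise part of what the scatterplot matrix shows; the grouping by Cylinders and any higher-dimensional structure are not captured.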
Parallel Coordinates Plots

The scatterplot matrix only displays bivariate relationships. However, there are other alternatives that display all the variables together, allowing you to investigate higher-dimensional relationships among variables. The most straightforward multivariate plot is the parallel coordinates plot. In this plot, the coordinate axes are all laid out horizontally, instead of using orthogonal axes as in the usual Cartesian graph. Each observation is represented in the plot as a series of connected line segments.

For example, we can make a plot of all the cars with 4, 6, or 8 cylinders, and color observations by group.

>>Cyl468 = ismember(Cylinders,[4 6 8]);
>>parallelcoords(X(Cyl468,:), 'group',Cylinders(Cyl468), ...
    'standardize','on', 'labels',varNames)

The horizontal direction in this plot represents the coordinate axes, and the vertical direction represents the data. Each observation consists of measurements on five variables, and each measurement is represented as the height at which the corresponding line crosses each coordinate axis. Because the five variables have widely different ranges, this plot was made with standardized values, where each variable has been standardized to have zero mean and unit variance. With the color coding, the graph shows, for example, that 8-cylinder cars typically have low values for MPG and acceleration, and high values for displacement, weight, and horsepower.

Even with color coding by group, a parallel coordinates plot with a large number of observations can be difficult to read. We can also make a parallel coordinates plot where only the median and quartiles (25% and 75% points) for each group are shown. This makes the typical differences and similarities among groups easier to distinguish. On the other hand, it may be the outliers for each group that are most interesting, and this plot does not show them at all.

>>parallelcoords(X(Cyl468,:), 'group',Cylinders(Cyl468), 'standardize','on', ...
    'labels',varNames, 'quantile',.25)

Here are examples of high density and correlation colour plots. Such plots allow one to visualize ‘multivariate’ relationships at work. They can also be explored ‘hands-on’ using software tools such as the DVAtool developed by the U of Alberta group.

[Figure: high density time-trend plot titled "Time Trends", showing 21 tags (tag names on the vertical axis) over 512 samples (horizontal axis).]

The above plot has a scroll feature so that the relationships (correlation, or cause-and-effect) can be explored. According to Shewhart, data should never be buried; it should be plotted and examined visually. The data above are from a refinery that had experienced plant-wide oscillations. The root cause of the oscillations was difficult to diagnose. A simplified schematic of the refinery appears below.

The colour-coded temporal correlation plot of the data appears below. It is important to remember that even if two variables are oscillating at the same frequency, their correlation can be zero if they are phase shifted by 90 degrees, that is, if they are orthogonal. Thus one has to be careful when interpreting correlation analysis in the time domain. Ideally such analysis should be carried out on lag-adjusted variables. The following plots illustrate this concept.
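To make the lag-adjustment idea concrete, here is a minimal synthetic sketch (not the refinery data): two signals oscillating at the same 64-sample period but phase shifted by 90 degrees have essentially zero correlation at lag zero, yet are perfectly correlated once one of them is shifted by a quarter period.

>>t = (0:511)';                                % 512 samples, as in the trend plot above
>>x = sin(2*pi*t/64);                          % oscillation with a 64-sample period
>>y = sin(2*pi*t/64 + pi/2);                   % same frequency, phase shifted by 90 degrees
>>c0 = corrcoef(x,y);  c0(1,2)                 % zero-lag correlation: essentially 0
>>lag = 16;                                    % a quarter of the 64-sample period = 90 degrees
>>cL = corrcoef(x(1+lag:end), y(1:end-lag));
>>cL(1,2)                                      % lag-adjusted correlation: approximately 1

In practice the appropriate lag is not known in advance and has to be estimated, for example by searching for the lag that maximizes the cross-correlation between each pair of variables.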
Information overload??

[Slide: a full page of raw correlation coefficients, printed as a dense table of numbers.] Imagine looking at thousands of numerical values of data! Looking for a needle in a haystack? Without a proper data analysis tool, such a search is hopeless.
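A colour-coded correlation plot of the kind described above can be generated in a few lines of base MATLAB. This is a minimal sketch; the matrix name data is a hypothetical placeholder for a samples-by-tags array such as the 512-sample refinery data set.

>>R = corrcoef(data, 'Rows','pairwise');       % tag-by-tag correlation matrix
>>imagesc(R, [-1 1]);  axis square;  colorbar  % colour-coded map of the correlations
>>title('Colour-coded correlation plot')
>>xlabel('Tag number');  ylabel('Tag number')

Each cell of the image encodes one of the coefficients as a colour, so strongly correlated tag pairs stand out immediately instead of being buried in a table of numbers.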