Al-Azhar University - Gaza
Deanship of Postgraduate Studies
Faculty of Economics and Administrative Sciences
Department of Applied Statistics

On Discordance Tests for the Wrapped Cauchy Distribution

(Title in Arabic: On Discordance Tests for the Wrapped Cauchy Distribution)

By Moneb Mostafa Kulab

Supervised by

Dr. Ali H. Abu Zaid, Assistant Professor of Statistics, Al-Azhar University - Gaza
Dr. Mo'omen M. R. El-hanjouri, Assistant Professor of Statistics, Al-Azhar University - Gaza

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of M.Sc. in Applied Statistics

December 2014


There is no merit except for the people of knowledge, for they are guides for whoever seeks guidance.

The worth of a man lies in what he does well, and the ignorant are enemies of the people of knowledge.

So rise with knowledge and seek no substitute for it; people are dead, while the people of knowledge are alive.

Ali ibn Abi Talib (may Allah be pleased with him)

DECLARATION

I certify that this thesis is submitted for the Master's degree as the result of my own research, except where otherwise acknowledged, and that this thesis (or any part of it) has not been submitted for a higher degree to any other university or institution.

Signed ......

Moneb Mostafa Kulab

Date: ------

ABSTRACT

Circular data arise quite frequently in many natural and physical sciences. Standard statistical techniques cannot be used to analyze circular data due to the circular geometry of the sample space. Circular data, as any other type of data, are subject to contamination with some unexpected observations, which are known as outliers.

This study focuses on detecting outliers in circular data which follow the wrapped Cauchy distribution, which results from wrapping the Cauchy distribution onto the circle. Four tests of discordancy for circular data are reviewed and extended to the wrapped Cauchy distribution.

The cut-off points of the four tests are obtained via a simulation study at three percentile levels. The power of performance of the discordancy tests for the wrapped Cauchy distribution is examined based on three performance measures.

The results show that the power of performance is an increasing function of the level of contamination and of the concentration parameter. An inverse relationship is noticed between the power of performance and the sample size for the considered statistics, except for the C statistic. In general, we also found that the C and A statistics perform comparatively better than the other two statistics.

For illustration purposes, we consider two real circular data sets, namely, the ants’ direction data set and the wind direction data set.


ABSTRACT (IN ARABIC, translated)

Circular data appear frequently in many natural and physical sciences. Conventional statistical methods cannot be used to analyze circular data because of their circular sample space. Like any other type of data, circular data are subject to contamination with some unexpected readings, which are known as outliers.

This study focuses on detecting outliers in circular data that follow the wrapped Cauchy distribution, which results from wrapping the Cauchy probability distribution. Four tests of discordancy for circular data are reviewed and then extended to the wrapped Cauchy distribution.

Using a simulation study, the cut-off values of the four tests were obtained at three percentile levels, and their power of performance was examined using three measures of the power of discordancy tests.

The results showed that the power of performance is an increasing function of the level of contamination of the data as well as of its concentration. An inverse relationship was also observed between the power of performance and the sample size for the four discordancy statistics, except for the C statistic. In general, it was also found that the C and A tests performed better than the other two tests.

For illustration purposes, the tests were applied to two real circular data sets, namely the ants' direction data set and the wind direction data set.

DEDICATION

To my parents

To my brothers and sisters

To my friends

I dedicate this work


ACKNOWLEDGMENTS

Thanks to Allah the compassionate the merciful for giving me patience and strength to accomplish this research.

I would like to express my sincere gratitude to my advisory committee, Assistant Professor Dr. Ali Abu Zaid and Assistant Professor Dr. Mo'omen El-hanjouri, for their guidance, constructive advice, the many hours they spent discussing and advising during the process, and for their support not only as mentors but also as good friends.

My gratitude also goes to my graduate committee members: Associate Professor Dr. Mahmoud Okasha, Associate Professor Dr. Abdalla El-Habeel, Assistant Professor Dr. Mo'omen El-hanjouri and Assistant Professor Dr. Shadi Al-Tilbany. I would not have been able to achieve my learning in the same manner without their immense knowledge.

My deeply felt thanks go to my parents, brothers and sisters for their encouragement during the study, my full respect, love and appreciation for all of you.


TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABBREVIATIONS AND SYMBOLS

CHAPTER ONE: INTRODUCTION
1.1 Background of the study
1.2 Summary Statistics of Circular Data
1.2.1 Measures of location
1.2.2 Measures of concentration and dispersion
1.3 Circular Probability Distributions
1.4 The Problem of Outliers
1.5 Problem Statement
1.6 Objectives
1.7 Methodology
1.8 Thesis Outline

CHAPTER TWO: LITERATURE REVIEW
2.1 Introduction
2.2 Outliers in Linear Data
2.2.1 Outliers in Univariate Linear Data
2.2.2 Outliers in Multivariate Linear Data
2.3 Outliers in Circular Data
2.3.1 Tests of Discordancy in Univariate Circular Data
2.3.2 Outliers in Multivariate Circular Data
2.4 Wrapped Cauchy Distribution
2.4.1 Parameter Estimates
2.4.2 Characteristics of the Wrapped Cauchy Distribution
2.4.3 Applications of the Wrapped Cauchy Distribution
2.5 Summary

CHAPTER THREE: NUMERICAL STUDY
3.1 Introduction
3.2 The cut-off points for discordancy tests
3.3 Power of performance
3.4 Summary

CHAPTER FOUR: APPLICATIONS TO REAL DATA SETS
4.1 Introduction
4.2 Ants' Direction Data
4.2.1 Data description
4.2.2 Identification of outliers
4.3 Wind Data
4.3.1 Data description
4.3.2 Detection of outliers
4.4 Discussion

CHAPTER FIVE: CONCLUSIONS
5.1 Summary
5.2 Main conclusions
5.3 Further research

REFERENCES

APPENDICES
APPENDIX A.1: The cut-off points for the tests of discordancy
APPENDIX A.2: R subroutine for the tests of discordancy in circular data
APPENDIX A.3: R subroutine for obtaining the cut-off points for the tests of discordancy
APPENDIX A.4: R subroutine for the power of performance in circular data
APPENDIX A.5: Power of performance of discordancy tests

LIST OF TABLES

Table 4.1: Descriptive statistics of the ants' direction data

Table 4.2: Results of discordancy tests on the ants' direction data

Table 4.3: Descriptive statistics of the circular error of the wind data

Table 4.4: Results of discordancy tests on the wind data

LIST OF FIGURES

Figure 1.1: Arithmetic and circular mean

Figure 2.1: A relplot for the star data

Figure 2.2: Graphical presentation of data in (2.1)

Figure 2.3: Circular boxplot of the frogs' directions

Figure 2.4: Circular plot for the WC distribution with different concentration parameters ρ

Figure 3.1: The cut-off points for the C statistic for different values of the concentration parameter

Figure 3.2: The cut-off points for the D statistic for different values of the sample size

Figure 3.3: The cut-off points for the M statistic for different values of the sample size

Figure 3.4: The cut-off points for the A statistic for different cases

Figure 3.5: Power of performance for all statistics when n = 50

Figure 3.6: Power of performance for all statistics when ρ = 0.9

Figure 3.7: Relative performance of discordancy tests

Figure 3.8: The difference between P1 and P3 in some cases

Figure 4.1: Circular plot of the ants' direction data

Figure 4.2: Circular distance between the observed and predicted values

Figure 4.3: Circular plot of the circular error of the wind data

ABBREVIATIONS AND SYMBOLS

Abbreviation: Full Word

$\bar{R}$: Mean resultant length.
$\bar{\theta}$: Sample mean direction.
$\tilde{\theta}$: Sample median direction.
$V$: Sample circular variance.
$v$: Sample circular standard deviation.
$\rho$: Concentration parameter.
$WC(\mu,\rho)$: The wrapped Cauchy distribution with mean direction $\mu$ and concentration parameter $\rho$.
$\lambda$: Contamination level.
$P1$: The power function $P1 = 1 - \beta$, where $\beta$ is the probability of Type-II error.
$P3$: The probability that the contaminant point is an extreme point and is identified as discordant.
$P5$: The probability that the contaminant point is identified as discordant given that it is an extreme point.
$d(\phi)$: The summation of circular distances from an observation to all observations.
$VM(\mu,\kappa)$: The von Mises distribution with mean $\mu$ and concentration parameter $\kappa$.
$D_r$: The summation of all circular distances of the point of interest $\theta_r$ to all other points $\theta_i$.
CIQR: The circular interquartile range.
A: A test of discordancy.
C: C test of discordancy.
D: D test of discordancy.
M: M test of discordancy.
MVE: The minimum volume ellipsoid method.
R: Resultant length.

CHAPTER ONE

INTRODUCTION

1.1 Background of the study

Circular data are a large class of directional data. The fundamental property of circular data is that the beginning and the end of the scale coincide; in other words, 0° ≡ 360°.

Circular data refer to a set of observations measured by angles and distributed within $(0, 2\pi]$ radians or $(0°, 360°]$, and they can be represented on the circumference of a unit circle.

The analysis of circular data needs special statistical measures rather than the conventional linear techniques, since directly applying the common linear methods leads to paradoxes. For illustration, consider a simple problem: suppose that two birds flew at 20° and 340°, as illustrated in Figure 1.1. The arithmetic mean is 180°, which is illogical, as it points in the direction opposite to the two directions; the mean direction of the two angles should instead be 0°. Thus, special statistical methods are needed to analyze circular data, taking into consideration the structure of the circular sample space.

 20

  180 0

340

Figure 1.1: Arithmetic and circular mean


Circular data can be found whenever periodic phenomena occur, and they are a source of interest to scientists in many fields, including:

• Biology: discussions of various investigations in the field of bird navigation are given by Schmidt-Koenig (1965) and Batschelet (1981), and the spawning times of a particular fish are studied by Lund (1999).

• Meteorology: wind and wave directions provide a natural source of circular data (Johnson and Wehrly, 1977; Hussin et al., 2004; Gatto and Jammalamadaka, 2007). Other circular data arising in meteorology include the times of day at which thunderstorms occur and the frequencies of heavy rain in a year (Mardia and Jupp, 2000).

• Physics: a set of circular data which led to the introduction of one of the basic distributions in circular statistics consists of the fractional parts of atomic weights (von Mises, 1918). The distribution of the resultant length of a random sample of unit vectors arises in representations of sound waves (Rayleigh, 1919), and the direction of the source of signals arises in the case of airplane crashes (Lenth, 1981).

• Psychology: directional data appear in the perception of direction under several conditions, such as simulated zero gravity (Ross et al., 1969); circular data also occur in studies of the mental maps which people use to represent their surroundings (Gordon et al., 1989).

• Image analysis: circular data occur in machine vision, in transformed versions of cross-ratios of sets of four collinear points (Mardia et al., 1996), and axial data occur in the orientation of textures (Blake and Marinos, 1990).

• Medicine: deaths due to a disease at various times of the year provide circular data, as does the angle of knee flexion as a measure of recovery of orthopedic patients (Jammalamadaka et al., 1986).

• Astronomy: several hypotheses have been considered about the distribution of various astronomical objects on the celestial sphere. For example, Polya (1919) enquired whether stars are distributed uniformly over the celestial sphere, and the uniformity of visual binary stars has been considered by Jupp (1995).

• Social Sciences: the modeling of the casualties in the second Iraq war and of suicide cases in Switzerland (Gill and Hangartner, 2010), and the analysis of Mother's Day celebration (Abuzaid, 2012).

• Earth Sciences: since the surface of the earth is approximately a sphere, spherical data arise readily in the earth sciences, such as the point on the earth's surface vertically above the origin of an earthquake; a wide-ranging account of directional data in earth science was given by Watson (1970). The direction of earthquake displacement, in terms of the direction of steepest descent, was considered by Rivest (1997).

Fisher (1993) mentioned that the early beginnings of circular data go back to 1767, when the astronomer Reverend John Mitchell FRS analyzed the angular separation between stars. John Playfair in 1802 recommended the use of the resultant vector method in averaging directions.

In the 1950s there was a quantum leap in the evolution of statistical methods for analyzing circular data, when Watson and Williams (1956) published a paper introducing methods for statistical inference about the mean direction and dispersion of samples from a von Mises distribution, and methods for comparing two or more samples. Since then, many books and articles focusing on the analysis of circular data have been published. Mardia (1972) wrote a comprehensive book on circular data, followed by Batschelet (1981), who wrote a book about circular statistics in biology. In 1989, Jupp and Mardia published a statistical review paper concerning directional data that summarized the developments of circular data analysis over the years.

At present, the attention of statisticians and researchers to the analysis of circular data has increased significantly due to the availability of a solid theoretical foundation and the accessibility of this kind of data. Recently, Kato and Jones (2013) introduced a four-parameter extended family of distributions related to the wrapped Cauchy distribution on the circle. Chang-Chien et al. (2012) wrote on mean shift-based clustering for circular data. Abuzaid et al. (2009) proposed the A statistic based on the summation of the circular distances from the point of interest to all other points, while Abuzaid et al. (2008) used numerical and graphical tools to detect outliers in a circular regression model. Siew et al. (2008) and Gatto and Jammalamadaka (2007) proposed new circular distributions. Kato et al. (2008) wrote papers on circular regression models.

1.2 Summary Statistics of Circular Data

In any statistical analysis of circular data, we need some measures of location and dispersion, for example the mean direction, the variance, etc. Suppose that we are given unit vectors $x_1,\ldots,x_n$ with corresponding angles $\theta_i$, $i = 1,\ldots,n$, that are observations in a random circular sample of size $n$ from a circular population. We describe the descriptive measures as follows:

1.2.1 Measures of location

a- The mean direction

The mean direction, or the preferred direction, is obtained by treating the data as unit vectors and using the direction of their resultant vector, whose length is $R = \sqrt{C^2 + S^2}$, where
$$C = \sum_{i=1}^{n}\cos\theta_i \quad\text{and}\quad S = \sum_{i=1}^{n}\sin\theta_i.$$
Therefore, the mean direction $\bar{\theta}$ can be obtained by solving the equations $\cos\bar{\theta} = C/R$ and $\sin\bar{\theta} = S/R$, as follows:
$$\bar{\theta} = \begin{cases} \tan^{-1}(S/C), & S > 0,\ C > 0,\\ \pi/2, & S > 0,\ C = 0,\\ \tan^{-1}(S/C) + \pi, & C < 0,\\ \tan^{-1}(S/C) + 2\pi, & S < 0,\ C > 0,\\ \text{undefined}, & S = 0,\ C = 0. \end{cases}$$
It is notable that the mean direction of circular data satisfies the property $\sum_{i=1}^{n}\sin(\theta_i - \bar{\theta}) = 0$, which is analogous to the linear case.

b- The median direction

Mardia (1972) defined the sample median direction $\tilde{\theta}$ of angles $\theta_1,\ldots,\theta_n$ as any angle $\phi$ on the circumference of the circle that satisfies the following two properties:

(a) The diameter through $\phi$ divides the circle into two semi-circles, each with an equal number of observed data points.

(b) The majority of the observed data points are closer to $\phi$ than to the anti-median $\phi \pm \pi$.

The circular median of a unimodal distribution is unique. Fisher and Powell (1989) defined the circular median more formally as the angle about which the sum of absolute angular deviations is minimized.

Fisher (1993) defined the median direction of a circular variable as an axis (the median axis) that divides the circular data into two groups with the same number of observations.


For an odd sample size, the median direction passes through a data point, while for an even sample size, we take the midpoint between the two middle points. Furthermore, Fisher (1993) defined the median direction as the observation $\phi$ which minimizes the summation of circular distances to all observations,
$$d(\phi) = \frac{1}{n}\sum_{i=1}^{n}\Big(\pi - \big|\pi - |\theta_i - \phi|\big|\Big), \quad i = 1,\ldots,n.$$

1.2.2 Measures of concentration and dispersion

a- The mean resultant length

The mean resultant length $\bar{R}$ is the average length of the random unit vectors on the circle. It is used to measure the concentration of unimodal circular distributions and is defined as $\bar{R} = R/n$, where $n$ is the sample size.

The mean resultant length lies in the range $[0,1]$ and the resultant length $R$ lies in the range $[0,n]$. When $\bar{R} \approx 1$, all directions in the data set are almost similar; that is, the observations have a small dispersion and are more concentrated towards the mean direction. However, $\bar{R} \approx 0$ does not imply uniform dispersion around the circle: for example, any data set of the form $\theta_1,\ldots,\theta_n,\ \theta_1 + \pi,\ldots,\theta_n + \pi$ has $\bar{R} = 0$.

b- The sample circular variance

The sample circular variance is defined by the quantity $V = 1 - \bar{R}$, where $0 \le V \le 1$. Smaller values of the circular variance result from a more concentrated sample: for the smallest value of the circular variance, $V = 0$ (i.e. $\bar{R} = 1$), all the observations in a given sample occur at precisely the same location, while for data distributed uniformly around the circle the variation attains its natural upper limit, $V = 1$ (i.e. $\bar{R} = 0$).

c- The sample circular standard deviation

The sample circular standard deviation is defined by $v = \sqrt{-2\log(1-V)}$, with $0 \le v \le \infty$, where $0 \le V \le 1$ is the sample circular variance.

The reason for defining the circular standard deviation in this way, rather than as the square root of the sample circular variance by analogy with the linear standard deviation, is to obtain reasonable approximations for proportions of the von Mises distribution, provided that the distribution is not too dispersed (see Fisher, 1993, p. 54).

Fisher (1993) introduced a good approximation to the sample circular standard deviation, $v \approx (2V)^{1/2}$, for small circular variance $V$.
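The dispersion measures above can be computed together in a few lines of base R (a sketch with our own function name):

```r
# Mean resultant length, circular variance and circular standard deviation
# for a vector of angles in radians.
circ.summary <- function(theta) {
  n    <- length(theta)
  R    <- sqrt(sum(cos(theta))^2 + sum(sin(theta))^2)  # resultant length
  Rbar <- R / n                                        # mean resultant length
  V    <- 1 - Rbar                                     # circular variance
  v    <- sqrt(-2 * log(1 - V))                        # circular standard deviation
  c(Rbar = Rbar, V = V, v = v, v.approx = sqrt(2 * V)) # last entry: Fisher's approximation
}

circ.summary(rnorm(100, mean = 0, sd = 0.3) %% (2 * pi))
```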

1.3 Circular Probability Distributions

A circular probability distribution is a probability distribution whose total probability is concentrated on the circumference of a unit circle. There are many circular probability distributions; throughout this study we consider only two of them, namely the von Mises and the wrapped Cauchy distributions.

a- The von Mises distribution

The von Mises distribution is the most common symmetric unimodal circular probability distribution, and it is also known as the circular normal distribution. Its pdf is given by
$$f(\theta;\mu,\kappa) = \frac{1}{2\pi I_0(\kappa)}\exp\left[\kappa\cos(\theta-\mu)\right], \quad 0 \le \theta, \mu < 2\pi \text{ and } \kappa \ge 0,$$


where $\kappa$ is the concentration parameter and $I_0(\kappa) = \frac{1}{2\pi}\int_0^{2\pi}\exp\left[\kappa\cos(\theta-\mu)\right]d\theta$ is the modified Bessel function of the first kind and order zero.

Best and Fisher (1981) gave the maximum likelihood estimate of the concentration parameter $\kappa$ as follows:
$$\hat{\kappa} = \begin{cases} 2\bar{R} + \bar{R}^3 + \frac{5}{6}\bar{R}^5, & \bar{R} < 0.53,\\ -0.4 + 1.39\bar{R} + \dfrac{0.43}{1-\bar{R}}, & 0.53 \le \bar{R} < 0.85,\\ \left(\bar{R}^3 - 4\bar{R}^2 + 3\bar{R}\right)^{-1}, & \bar{R} \ge 0.85, \end{cases}$$
where $\bar{R}$ is the mean resultant length.
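For illustration, this piecewise approximation can be coded directly (our own base-R sketch):

```r
# Best and Fisher (1981) approximation to the ML estimate of the von Mises
# concentration parameter kappa, as a function of the mean resultant length.
kappa.hat <- function(Rbar) {
  if (Rbar < 0.53) {
    2 * Rbar + Rbar^3 + 5 * Rbar^5 / 6
  } else if (Rbar < 0.85) {
    -0.4 + 1.39 * Rbar + 0.43 / (1 - Rbar)
  } else {
    1 / (Rbar^3 - 4 * Rbar^2 + 3 * Rbar)
  }
}

kappa.hat(0.7)   # example value
```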

b- The wrapped Cauchy distribution

The wrapped Cauchy distribution is obtained by wrapping the Cauchy distribution on the real line, with density
$$f(x;\mu,\sigma) = \frac{1}{\pi}\,\frac{\sigma}{\sigma^2 + (x-\mu)^2}, \quad -\infty < x, \mu < \infty,\ \sigma > 0,$$
around the circle. Thus, the pdf of the wrapped Cauchy distribution is given by
$$g(\theta;\mu,\rho) = \frac{1}{2\pi}\,\frac{1-\rho^2}{1+\rho^2 - 2\rho\cos(\theta - \mu)}, \quad 0 \le \theta, \mu < 2\pi,$$
where $\rho = e^{-\sigma}$ is the concentration parameter.

The wrapped Cauchy distribution is unimodal and symmetric, and it enjoys the additive property and the central limit theorem. More details of the wrapped Cauchy distribution are presented in Section 2.4.

1.4 The Problem of Outliers

An outlier in the context of circular data is defined as an observation (or a set of observations) which is inconsistent with the rest of the sample. It is expected to lie far from the mean direction of the circular sample. Circular data, as any other type of data, are subject to contamination with outliers.

The problem of outliers in circular data has not received enough attention, and there are only a few tests of discordancy for circular samples. Mardia (1975) suggested the first one, and Collett (1980) proposed three more. An alternative test based on a Bayesian approach was suggested by Bagchi and Guttman (1990). Recently, Abuzaid et al. (2009) proposed a new test which has been shown to perform better than the other tests for data that follow the von Mises distribution, except for small sample sizes. Rambli et al. (2012) applied four discordancy tests based on the M, C, D and A statistics to detect outliers in circular data which follow the wrapped normal distribution.

1.5 Problem Statement

Similar to other data sets, circular data are subject to contamination with outlying observations, and the existence of outliers in any data set affects the estimation of parameters.

A few tests of discordancy have been proposed for the von Mises distribution and extended to the wrapped normal distribution, but no published work was found on discordancy tests for the wrapped Cauchy distribution.

In this research we extend four tests of discordancy to the wrapped Cauchy distribution. The cut-off points and the power of performance will be investigated via an extensive simulation study, and the methods will be illustrated on two real data sets.


1.6 Objectives

Based on the statement of problem above, the researcher has outlined the following objectives for this study:

1- to highlight the effect of outliers in circular samples.

2- to extend the tests of discordancy to the wrapped Cauchy distribution.

3- to obtain the cut-off points for the discordancy tests in the wrapped Cauchy distribution.

4- to investigate the performance of the discordancy tests in the wrapped Cauchy distribution.

5- to apply the tests to the ants' direction data set and the wind direction data set to detect possible outliers.

1.7 Methodology

The researcher extends four tests of discordancy to the wrapped Cauchy distribution by obtaining the cut-off points and investigating their performance through extensive simulation studies, as briefly presented below:

1- Random circular samples are generated from the wrapped Cauchy distribution with different sizes, $5 \le n \le 150$, associated with different values of the concentration parameter, $0.1 \le \rho < 1$ (a small sketch of this step is given after this list).

2- The C, D, M and A statistics are obtained for each generated sample, in order to obtain the cut-off points at three levels of significance ($\alpha$ = 0.1, 0.05 and 0.01). The process is repeated until the convergence criterion is satisfied.

3- The power of the tests is examined via various measures of discordancy test performance.

4- The researcher has written the required subroutines in the R language.

5- The tests are applied to two real data sets for illustration purposes.
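A minimal sketch of step 1 (our own, assuming only base R): since $\rho = e^{-\sigma}$, a $WC(\mu,\rho)$ sample can be generated by drawing from a linear Cauchy distribution with location $\mu$ and scale $\sigma = -\log\rho$ and wrapping the result onto $[0, 2\pi)$.

```r
# Generate a random sample from the wrapped Cauchy distribution WC(mu, rho)
# by wrapping a linear Cauchy sample with scale sigma = -log(rho).
rwc <- function(n, mu = 0, rho = 0.9) {
  rcauchy(n, location = mu, scale = -log(rho)) %% (2 * pi)
}

set.seed(1)
theta <- rwc(50, mu = 0, rho = 0.9)   # one simulated sample of size 50
```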


1.8 Thesis Outline

This research attempts to handle the problem of outliers in circular data that follow the wrapped Cauchy distribution. The research is outlined as follows:

Chapter one: introduces the definition and evolution of circular data over the years. It describes the descriptive measures for circular data, and highlights the problem of outliers in circular data followed by a brief presentation of two circular probability distributions. The objectives and methodology of this research are listed.

Chapter two: provides a literature review on outliers in linear and circular data, including the popular detection methods for linear and circular samples. It presents the wrapped Cauchy distribution and its main characteristics, as well as its parameter estimates and the effect of the parameters on the shape of the data.

Chapter three: extends four discordancy tests to the wrapped Cauchy distribution; the cut-off points for the tests are obtained and described, and the power of performance is investigated via simulation studies.

Chapter four: presents and analyzes two real data sets for illustration purposes.

Chapter five: lists the conclusions derived by the researcher based on the results of this study, and presents possible recommendations for future research.

A comprehensive list of references and the R subroutines written by the researcher, as well as the results of the simulation study, including the cut-off points and the power of performance for the four considered tests, are attached at the end of this thesis.


CHAPTER TWO

LITERATURE REVIEW

2.1 Introduction

During the process of data analysis, it is common to find some values that are far away from the main group of the data (much smaller or larger). Such values have a significant influence on the measures of central tendency and variation as well as on parameter estimates. Those unexpected values are known as outliers.

The existence of outliers is considered one of the most common problems in any statistical analysis, and their detection and treatment have received much attention. Bernoulli (1777) was one of the first researchers to discuss outliers, questioning the assumption of identically distributed errors. Peirce (1852) was the pioneer in developing an objective statistical method to deal with outliers. Wright (1884) suggested that any value greater than 3.37 times the standard deviation should be considered an outlier.

Due to the multitude of literature on the problem of outliers, there are many different definitions of outliers. Hawkins (1980) described an outlier as an observation that "deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism". Jarrell (1994) defined outliers as observations that have extreme values. Moore and McCabe (1999) defined an outlier as an observation that lies outside the overall pattern of a distribution. He et al. (2002) defined a semantic outlier as a data point which behaves differently from other data points in the same class.

Therefore, we may conclude that most authors agree that outliers are values which appear to deviate markedly from the remaining values.

The rest of this chapter is organized as follows: Section 2.2 reviews the causes, definitions and some of the popular tests of outliers in linear data; Section 2.3 explains the differences between outliers in linear and circular data, and presents the available tests of discordancy for univariate circular samples, with special attention given to the literature on the problem of outliers in different types of circular data; Section 2.4 introduces the probability density of the wrapped Cauchy distribution and its parameter estimates, and summarizes its main characteristics.

2.2 Outliers in Linear Data

The need to screen data for outliers is important in these days of ubiquitous computing, since any statistical test derived from the sample mean and variance can be distorted by the presence of outliers. Dan (2000) summarized the problematic effects of outliers as follows:

i. Bias or distortion of estimates.

ii. Inflated sums of squares (which make it unlikely to partition the sources of variation in the data into meaningful components).

iii. Distortion of p-values (statistical significance, or lack thereof, can be due to the presence of a few, or even one, unusual data values).

iv. Faulty conclusions (it is quite possible to draw false conclusions if you have not looked for indications that there was anything unusual in the data).


2.2.1 Outliers in Univariate Linear Data

There are several tests for detecting outliers in univariate data (see Barnett and Lewis, 1984); some of them are given below.

a- Dixon Test

The Dixon test (Q-test) is based on a value being too large (or too small) compared with its nearest neighbor. It was developed by Dixon (1950, 1951). With the $n$ values comprising the set of observations arranged in ascending order, $x_1 \le x_2 \le \cdots \le x_n$, we can define
$$Q = \frac{\text{Gap}}{\text{Range}},$$
where Gap is the absolute difference between the suspect value and the closest number to it, and Range is the absolute difference between the maximum and minimum values.

The $Q$ statistics for the highest value ($Q_n$) and the lowest value ($Q_1$) are given by
$$Q_n = \frac{x_{(n)} - x_{(n-1)}}{x_{(n)} - x_{(1)}} \quad\text{and}\quad Q_1 = \frac{x_{(2)} - x_{(1)}}{x_{(n)} - x_{(1)}}.$$
The obtained $Q$ value is compared with the critical values tabulated in Murdoch and Barnes (1998, p. 27).

b- Boxplot and Whiskers

The boxplot is considered an excellent tool in exploratory data analysis. It was first introduced by Tukey (1977) and further investigated by Chambers et al. (1983). The boxplot formally consists of a five-number summary: the smallest observation, the lower quartile $Q_1$, the median $Q_2$, the upper quartile $Q_3$, and the largest observation. The main application of the boxplot is to detect outliers in a linear data set based on the $1.5 \times IQ$ boxplot criterion, where $IQ$ is the interquartile range (the difference between the upper and lower quartiles), $IQ = Q_3 - Q_1$.

In other words, any observation below $L_F = Q_1 - 1.5\,IQ$ or above $U_F = Q_3 + 1.5\,IQ$ is described as an outlier, where $L_F$ and $U_F$ are called the lower and upper fences, respectively. Many developments of the boxplot criterion have been proposed (see Ingelfinger et al., 1983 and Sim et al., 2005).

c- The 'three-sigma' Rule

From statistical experience, statisticians have a rule stating that, for many reasonably symmetric unimodal distributions, almost all of the population lies within three standard deviations of the mean. For the normal distribution, about 99.7% of the population lies within three standard deviations of the mean. In general, under normality assumptions, an observation $x_i$ can be considered an outlier if its distance from the sample mean $\bar{x}$ is greater than $3s$, where $s = \sqrt{\sum_{i=1}^{n}\frac{(x_i - \bar{x})^2}{n-1}}$ is the sample standard deviation and $n$ is the sample size (see Tebbs, 2006).

d- Least Absolute Deviation

The least absolute deviation (LAD) method for determining the number of upper or lower outliers in a normal sample minimizes the sample mean absolute deviation. It was proposed by Wu and Lee (2006) and is a mathematical optimization technique that attempts to find a function which closely approximates a set of data. The method minimizes the sum of absolute errors (SAE), that is, the sum of the absolute values of the vertical "residuals" between points generated by the function and the corresponding points in the data.


2.2.2 Outliers in Multivariate Linear Data

Many methods have been proposed to detect outliers in univariate data. For multivariate data, special tests are needed to detect such points, rather than detecting them by treating each marginal variable separately. Below we summarize two methods for detecting outliers in multivariate data.

a- A Relplot

Goldberg and Iglewicz (1992) derived the relplot from a bivariate generalization of the boxplot. The relplot consists of two ellipses, and the points outside the larger ellipse are considered to be outliers.

As an example, Rousseeuw and Leroy (1987) reported the logarithm of the effective temperature $x$ at the surface of 47 stars versus the logarithm of their light intensity $y$. Figure 2.1 illustrates the relplot for the star data: the smaller ellipse contains half of the data, and the points outside the larger ellipse are considered outliers.

Figure 2.1: A relplot for the star data.


b- The Minimum Volume Ellipsoid Method (MVE)

One of the earliest methods is based on the minimum volume ellipsoid (MVE) estimators of location and scale (see Rousseeuw and van Zomeren, 1990); Lopuhaa (1999) reported relevant theoretical results. If we have multivariate data of dimension $p$, let the column vector $C$, of length $p$, be the MVE estimate of location, and let the $p \times p$ matrix $M$ be the corresponding measure of scatter. The distance of the point $x_i = (x_{i1},\ldots,x_{ip})$ from $C$ is given by
$$dis_i = \sqrt{(x_i - C)'\, M^{-1} (x_i - C)}, \quad i = 1,\ldots,n.$$
If $dis_i$ exceeds the square root of the 0.975 quantile of a chi-square distribution with $p$ degrees of freedom, i.e. $dis_i > \sqrt{\chi^2_{0.975,\,p}}$, then $x_i$ is considered an outlier. Rousseeuw and van Zomeren (1990) recommended this method when there are at least five observations per dimension, meaning that $n/p \ge 5$.

2.3 Outliers in Circular Data

Circular data, as any other type of data, are subject to contamination with some unexpected values. Outliers in circular data can be defined as a set of observations which is inconsistent with the rest of the sample; such observations are expected to lie far from the preferred direction of the circular sample.

In order to explain the differences between outliers in linear and circular cases, let us consider the following data set (unit in degree):

10, 310, 320, 330, 340, 350, 360. (2.1)


Figure 2.2 presents the data in two different forms: (a) on a line and (b) on the circumference of a circle. Based on Figure 2.2 we can conclude that the observation with value 10° is an outlier if we treat the data as linear data; but if we treat them as circular data, then the observation with value 10° looks close to and consistent with the rest of the observations.

This example shows the necessity of special tests and methods to detect outliers in circular samples, as presented in Subsection 2.3.1.

Figure 2.2: Graphical presentation of the data in (2.1): (a) on a line; (b) on the circumference of a circle

It is obvious that the mean direction will be affected by the existence of outliers. Furthermore, Collett (1980) explained the effect of excluding any observation $\theta_r$ from a circular sample on the resultant length $R = \sqrt{S^2 + C^2}$.

2.3.1 Tests of Discordancy in Univariate Circular Data

There are few numerical and graphical tests of discordancy for circular samples. Mardia (1975) proposed the first test of discordancy for circular data; Collett (1980) suggested another three numerical tests of discordancy and compared their performance with Mardia's test. Alternatively, Bagchi and Guttman (1990) suggested another test based on a Bayesian approach. Recently, Abuzaid et al. (2009) proposed a test of discordancy based on the circular distances between the sample observations.

A brief review of five tests of discordancy (called the M, C, D, L and A tests), as well as the circular boxplot, for univariate circular samples is given in this subsection.

The null hypothesis is that there is no outlier in the circular sample, where the sample is drawn randomly from the von Mises distribution $VM(\mu,\kappa)$ with mean direction $\mu$ and concentration parameter $\kappa$.

a- M statistic

The first discordancy test in circular data was proposed by Mardia (1975) and is given by
$$M' = \min_i\left[\frac{n - 1 - R_{(i)}}{n - R}\right],$$
where $R$ is the resultant length and $R_{(i)}$ is the resultant length obtained by omitting the $i$th observation. Collett (1980) reformulated the statistic as
$$M = 1 - M' = \max_i\left[\frac{R_{(i)} - R + 1}{n - R}\right] = \frac{R_r - R + 1}{n - R}, \qquad (2.2)$$
where $R_r = \max_i R_{(i)}$. He described the asymptotic distribution of the $M$ statistic for large values of the concentration parameter $\kappa$ as follows: as the value of $\kappa$ increases, the von Mises distribution can be approximated by a normal distribution. The $M$ statistic can then be approximated by $\dfrac{n\,(b^*)^2}{n-1}$, where
$$b^* = \max_i\left[\frac{x_i - \bar{x}}{\sqrt{\sum_i (x_i - \bar{x})^2}}\right], \quad i = 1,\ldots,n,$$
is the test statistic used to identify discordancy in normal data. Percentage points for $b^*$ are given in Pearson and Hartley (1966).


b- C statistic

The C test statistic was proposed by Collett (1980) and is given by
$$C = \max_i\left[\frac{\bar{R}_{(i)} - \bar{R}}{\bar{R}}\right], \qquad (2.3)$$
where $\bar{R} = R/n$ is the mean resultant length of the circular data set, and $\bar{R}_{(i)} = R_{(i)}/(n-1)$ is the mean resultant length obtained by omitting the $i$th observation.

The value of C will then be compared with cut-off points for the relevant sample size and estimated concentration parameter. The null hypothesis will be rejected if C is greater than the cut-off point, and then we can consider the ith observation as an outlier.

c- D statistic

The D statistic is derived from the relative arc lengths between the ordered observations of a circular sample $\theta_{(1)},\ldots,\theta_{(n)}$. Let $T_i$ be the arc length between consecutive observations, given by $T_i = \theta_{(i+1)} - \theta_{(i)}$, $i = 1,\ldots,n-1$, and $T_n = 2\pi - \theta_{(n)} + \theta_{(1)}$.

Define $D_i = \dfrac{T_i}{T_{i-1}}$, $i = 1,\ldots,n$, with $T_0 = T_n$, and let $D_r = \dfrac{T_r}{T_{r-1}}$ correspond to the greatest arc containing a single observation $\theta_r$. The D statistic is two-tailed; therefore, Collett (1980) suggested considering the minimum of $D_r$ and its inverse $D_r^{-1}$,
$$D = \min\left(D_r,\ D_r^{-1}\right), \qquad (2.4)$$
where $0 \le D \le 1$. The observation $\theta_r$ is considered an outlier if the value of $D$ is larger than the associated cut-off point given in Collett (1980).


d- L statistic

The L test is based on the maximum likelihood ratio statistic for the alternative hypothesis that there is an outlier. The L statistic is given by
$$L = \hat{\kappa}_{(r)}\left(R_{(r)} + 1\right) - \hat{\kappa}\,R - n\ln\left[\frac{I_0(\hat{\kappa}_{(r)})}{I_0(\hat{\kappa})}\right],$$
where $R_{(r)} = \sqrt{C_{(r)}^2 + S_{(r)}^2}$; $C_{(r)}$ and $S_{(r)}$ are the values of $C$ and $S$, respectively, based on the $n-1$ observations excluding $\theta_r$; $\hat{\kappa}$ is the maximum likelihood estimate of $\kappa$ based on all $n$ observations; and $\hat{\kappa}_{(r)}$ is the maximum likelihood estimate of $\kappa$ based on the $n-1$ observations excluding $\theta_r$.

e- A statistic

Recently, Abuzaid et al. (2009) proposed a new test based on the summation of all circular distances $D_r$ of the point of interest $\theta_r$ to all other points $\theta_i$, $i = 1,\ldots,n$, where
$$D_r = \sum_{i=1}^{n}\left[1 - \cos(\theta_i - \theta_r)\right], \quad i, r = 1,\ldots,n.$$
If the observation $\theta_r$ is an outlier, then the value of $D_r$ will increase significantly. Thus, the average circular distance $\dfrac{D_r}{n-1}$ can be used to identify possible outliers in the circular sample, and the proposed statistic is given by
$$A = \max_r\left[\frac{D_r}{2(n-1)}\right], \quad r = 1,\ldots,n. \qquad (2.5)$$
The average circular distance is divided by 2 in order to standardize the values of the statistic $A$, so that $A \in [0,1]$. The proposed statistic is based on the relative decrease in the summation of circular distances obtained by omitting the point of interest $\theta_r$.
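For concreteness, a small base-R sketch (ours, not the thesis' Appendix A.2 subroutine) of the M, C and A statistics of Equations (2.2), (2.3) and (2.5) is given below; the arc-ratio D statistic of Equation (2.4) is omitted from the sketch.

```r
# Discordancy statistics for a circular sample given in radians.
resultant <- function(theta) sqrt(sum(cos(theta))^2 + sum(sin(theta))^2)

disc.stats <- function(theta) {
  n   <- length(theta)
  R   <- resultant(theta)
  R.i <- sapply(seq_len(n), function(i) resultant(theta[-i]))   # R_(i), leave one out

  M <- max((R.i - R + 1) / (n - R))             # Collett's form of Mardia's statistic
  C <- max((R.i / (n - 1) - R / n) / (R / n))   # relative change in mean resultant length

  D.sum <- sapply(seq_len(n),
                  function(r) sum(1 - cos(theta - theta[r])))   # circular distances D_r
  A <- max(D.sum / (2 * (n - 1)))

  c(M = M, C = C, A = A)
}
```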


Abuzaid et al. (2012) discussed the approximate distribution of the A test of discordancy. They proved that, for any sample from the von Mises distribution with mean direction $\mu$ and large concentration parameter $\kappa$, and for any $i$ and $r$, $i,r = 1,\ldots,n$, $i \ne r$,
$$\kappa\, d_{ir} = \kappa\left[1 - \cos(\theta_i - \theta_r)\right] \xrightarrow{d} \chi^2_1$$
and
$$B_{ir} = 2\kappa\sin^2\left(\frac{\psi_{ir}}{2}\right) \xrightarrow{d} \chi^2_1,$$
where $\psi_{ir} \in [0,\pi]$ is the circular distance between $\theta_i$ and $\theta_r$, given by $\psi_{ir} = \pi - \big|\pi - |\theta_i - \theta_r|\big|$, and $\chi^2_1$ is the chi-square distribution with one degree of freedom.

Unfortunately, the statistic $\sum_{i=1}^{n} d_{ir}$ does not follow a chi-square distribution with $(n-1)$ degrees of freedom, due to the absence of independence. Therefore, based on the counts of $\hat{\kappa}\, d_{ir}$ which exceed the value of $\chi^2_{1,\alpha}$ at the $\alpha$ level of significance, for each observation $\theta_r$, $r = 1,\ldots,n$, against the other observations $\theta_i$, $i = 1,\ldots,n$, $i \ne r$, they estimated the cut-off points via a Monte Carlo simulation study.

The performance of the discordancy tests for circular data generated from the von Mises distribution shows that the A statistic performs slightly better than the C and D statistics, and much better than the M statistic.

The tests of discordancy above were applied to samples generated from the von Mises distribution. Recently, Rambli et al. (2012) extended the tests to the wrapped normal distribution $WN(\mu,\rho)$, with pdf given by
$$f(\theta) = \frac{1}{\sigma\sqrt{2\pi}}\sum_{k=-\infty}^{\infty}\exp\left[-\frac{(\theta - \mu + 2\pi k)^2}{2\sigma^2}\right],$$
where $\sigma^2$ is the variance of the underlying normal distribution and $\mu$ is the mean direction. Alternatively, the wrapped normal density can be reformulated as
$$f(\theta) = \frac{1}{2\pi}\left[1 + 2\sum_{k=1}^{\infty}\rho^{k^2}\cos k(\theta - \mu)\right], \quad 0 \le \theta, \mu < 2\pi \text{ and } 0 \le \rho \le 1,$$
where $\rho$ is the concentration parameter. Rambli et al. (2012) concluded that the test based on the A statistic outperforms the other tests.

f- Circular Boxplot

Abuzaid et al. (2012) proposed a boxplot version for circular data sets, called the circular boxplot. It is simpler and more appealing than the other outlier detection techniques described above. The circular boxplot formally consists of a three-number summary: the first quartile direction $Q_1$, the median direction $Q_2$ and the third quartile direction $Q_3$. The main application of the circular boxplot is to detect outliers in a circular data set based on the $w \times CIQR$ boxplot criterion, where $CIQR$ is the circular interquartile range (the arc length between $Q_1$ and $Q_3$) and $w$ is the resistant constant. Any observation below $L_{CF} = Q_1 - w\,CIQR$ or above $U_{CF} = Q_3 + w\,CIQR$ is described as an outlier, where $L_{CF}$ and $U_{CF}$ are called the lower and upper circular fences. The circular boxplot performs better when both the concentration parameter and the sample size are large.

As an example, Figure 2.3 shows the circular boxplot with $w = 1.5$ for 14 observations recorded when Ferguson et al. (1967) conducted an experiment to investigate the homing ability of a species of frogs. The plot detected one observation as an outlier.

Figure 2.3: Circular boxplot of the frogs' directions, $w = 1.5$ (see Abuzaid et al., 2012)

The pioneering study on outliers in circular time series models was recently published by Abuzaid et al. (2014), where a combination of numerical and graphical procedures is proposed and implemented on environmental data.

2.3.2 Outliers in Multivariate Circular Data

Many contributions to the detection of outliers in different types of circular data have been published. Abuzaid et al. (2008) explored the implementation of the above numerical tests of discordancy in circular regression models to detect a single outlier based on the circular distance residuals. Furthermore, new tests have been developed based on the mean circular distance and the COVRATIO statistic to detect possible outliers in circular regression models (see Abuzaid et al., 2011; Abuzaid et al., 2013).

Outliers in functional circular relationship models have also been studied and detected (see Hussin et al., 2010; Hussin and Abuzaid, 2012; Abuzaid, 2013).


2.4 Wrapped Cauchy Distribution, $WC(\mu,\rho)$

A circular random variable $\theta$ can be obtained from any random variable $X$ on the real line, with probability density function $g(x)$ and distribution function $G(x)$, by defining
$$\theta = X \bmod 2\pi.$$
That means wrapping the original distribution on the real line around the circle to get the wrapped distribution. The Cauchy distribution on the real line has the density function
$$g(x;\mu,\sigma) = \frac{1}{\pi}\,\frac{\sigma}{\sigma^2 + (x - \mu)^2}, \quad -\infty < x < \infty,\ 0 \le \mu < 2\pi,\ \sigma > 0,$$
where $\mu$ is the location parameter and $\sigma$ is the scale parameter.

If we wrap $g(x;\mu,\sigma)$ around the circle, we obtain the wrapped Cauchy distribution with probability density function given by (see Fisher, 1993)
$$f(\theta;\mu,\rho) = \frac{1}{2\pi}\,\frac{1-\rho^2}{1+\rho^2 - 2\rho\cos(\theta-\mu)}, \quad 0 \le \theta, \mu < 2\pi,\ 0 \le \rho \le 1, \qquad (2.6)$$
where $\mu$ is the mean direction and $\rho = e^{-\sigma}$ is the concentration parameter, which is also the mean resultant length. The distribution function of the wrapped Cauchy is then given by (see Fisher, 1993)
$$F(\theta) = \frac{1}{2\pi}\cos^{-1}\left[\frac{(1+\rho^2)\cos(\theta-\mu) - 2\rho}{1+\rho^2 - 2\rho\cos(\theta-\mu)}\right], \quad 0 \le \theta, \mu < 2\pi,\ 0 \le \rho \le 1.$$

Levy (1939) introduced the wrapped Cauchy distribution, and it was further studied by Wintner (1947). Later on, McCullagh (1996) observed that the wrapped Cauchy distribution can be obtained by mapping the Cauchy distribution onto the circle by the transformation $\theta = 2\tan^{-1}(x)$.

2.4.1 Parameter Estimates

The maximum likelihood estimates of the wrapped Cauchy distribution parameters $\mu$ and $\rho$ can be obtained by the iterative reweighting algorithm given by Kent and Tyler (1988). We first reparametrize the WC density by putting
$$\phi_1 = \frac{2\rho\cos\mu}{1+\rho^2} \quad\text{and}\quad \phi_2 = \frac{2\rho\sin\mu}{1+\rho^2},$$
obtaining
$$f(\theta;\phi_1,\phi_2) = \frac{1}{2\pi\, c\left[1 - \phi_1\cos\theta - \phi_2\sin\theta\right]},$$
where $c = c(\phi_1,\phi_2) = \left(1 - \phi_1^2 - \phi_2^2\right)^{-1/2}$. To obtain the likelihood equations it is simpler to introduce another parameterization, $\eta_1 = c\,\phi_1$ and $\eta_2 = c\,\phi_2$. Then the wrapped Cauchy pdf is given by
$$f(\theta;\eta_1,\eta_2) = \frac{1}{2\pi\left[c - \eta_1\cos\theta - \eta_2\sin\theta\right]},$$
the likelihood function is given by
$$l(\theta;\eta_1,\eta_2) = \prod_{i=1}^{n}(2\pi)^{-1}\left[c - \eta_1\cos\theta_i - \eta_2\sin\theta_i\right]^{-1},$$
and the log-likelihood function is
$$\ell(\theta;\eta_1,\eta_2) = -n\ln(2\pi) - \sum_{i=1}^{n}\ln\left[c - \eta_1\cos\theta_i - \eta_2\sin\theta_i\right].$$
Differentiating the log-likelihood function with respect to $\eta_1$ and $\eta_2$, and noting that $c = \left(1 + \eta_1^2 + \eta_2^2\right)^{1/2}$, leads to the likelihood equations
$$\sum_{i=1}^{n} w_i\left(\cos\theta_i - \frac{\eta_1}{c}\right) = 0 \quad\text{and}\quad \sum_{i=1}^{n} w_i\left(\sin\theta_i - \frac{\eta_2}{c}\right) = 0,$$
where $w_i = \left[1 - \phi_1\cos\theta_i - \phi_2\sin\theta_i\right]^{-1}$ for $i = 1,2,\ldots,n$. These equations can be written so as to express $\phi_1 = \eta_1/c$ and $\phi_2 = \eta_2/c$ as adaptively weighted averages of $\cos\theta_i$ and $\sin\theta_i$, respectively; namely,
$$\phi_1 = \frac{\sum_{i=1}^{n} w_i\cos\theta_i}{\sum_{i=1}^{n} w_i}, \qquad \phi_2 = \frac{\sum_{i=1}^{n} w_i\sin\theta_i}{\sum_{i=1}^{n} w_i}.$$
This representation suggests the following iterative reweighting algorithm for computing the maximum likelihood estimates $\hat{\phi}_1$ and $\hat{\phi}_2$:

1- Start with arbitrary initial values $\phi_{1,0}$ and $\phi_{2,0}$ with $\phi_{1,0}^2 + \phi_{2,0}^2 < 1$.

2- Given $\phi_{1,v}$ and $\phi_{2,v}$ at iteration $v$, update the values as
$$\phi_{1,v+1} = \frac{\sum_{i=1}^{n} w_{i,v}\cos\theta_i}{\sum_{i=1}^{n} w_{i,v}}, \qquad \phi_{2,v+1} = \frac{\sum_{i=1}^{n} w_{i,v}\sin\theta_i}{\sum_{i=1}^{n} w_{i,v}},$$
where $w_{i,v} = \left[1 - \phi_{1,v}\cos\theta_i - \phi_{2,v}\sin\theta_i\right]^{-1}$ for $i = 1,2,\ldots,n$.

3- Repeat step (2) until the algorithm converges, giving $\hat{\phi}_1$ and $\hat{\phi}_2$.

4- Calculate $\hat{\mu}$ and $\hat{\rho}$ by
$$\hat{\mu} = \arctan\left(\frac{\hat{\phi}_2}{\hat{\phi}_1}\right) \quad\text{and}\quad \hat{\rho} = \frac{1 - \sqrt{1 - \hat{\phi}_1^2 - \hat{\phi}_2^2}}{\sqrt{\hat{\phi}_1^2 + \hat{\phi}_2^2}}.$$

2.4.2 Characteristics of the Wrapped Cauchy Distribution

For the wrapped Cauchy distribution, Fisher (1993) quantified the dispersion by the circular dispersion $\delta$, which is given in terms of the concentration parameter $\rho$ by
$$\delta = \frac{1 - \rho^2}{2\rho^2}.$$


Figure 2.4: Circular plots for the WC distribution with different concentration parameters $\rho$ = 0, 0.4, 0.6, 0.8, 0.9, 0.95 and 1.

Figure 2.4 represents 100 random observations generated from $WC(\mu,\rho)$ with $\mu = 0$ and $\rho$ = 0, 0.4, 0.6, 0.8, 0.9, 0.95 and 1. The plots in Figure 2.4 show that as $\rho \to 0$, the distribution converges to the circular uniform distribution $U_c$ with probability density function $f(\theta) = \frac{1}{2\pi}$, $0 \le \theta < 2\pi$, where the observations tend to cover the circumference of the circle uniformly; as the concentration parameter increases, the observations tend to concentrate towards the mean direction $\mu$. For $\rho = 1$, all observations stand on one bar in the direction of the circular mean $\mu$.

One of the main features of the WC distribution is its heavy tail: even for large concentrations (e.g. $\rho = 0.8$), the circumference is still covered by a few observations.

The WC distribution is unimodal and symmetric about $\mu$. Mardia and Jupp (2000) illustrated that the WC distribution enjoys the additive property and the central limit theorem; in other words, the convolution of the wrapped Cauchy distributions $WC(\mu_1,\rho_1)$ and $WC(\mu_2,\rho_2)$ is the wrapped Cauchy distribution $WC(\mu_1+\mu_2,\ \rho_1\rho_2)$.

2.4.3 Applications of the Wrapped Cauchy Distribution:

The wrapped Cauchy distribution is often found in the field of spectroscopy, where it is used to analyze diffraction patterns. Vo and Oraintara (2010) proposed a statistical model that can be beneficial to the image processing community: they developed a new approach to studying the phase difference of two neighboring complex wavelet coefficients, called the relative phase, and demonstrated that the wrapped Cauchy fits well with real data obtained from various real images, including texture images as well as standard images.

Yackulic et al. (2011) developed a compound wrapped Cauchy distribution to characterize latent-state modes, which helped them achieve their goal of evaluating the efficiency of different model structures in reproducing movement data collected at different temporal resolutions.


The wrapped Cauchy distribution is also found in the field of animal ecology. Bartumeus et al. (2005) assumed random-walk models to understand how animals face environmental uncertainty: they analyzed the statistical differences between two random-walk models commonly used to fit animal movement data, and they used a wrapped Cauchy distribution for the turning angles.

Gurarie (2008) used the wrapped Cauchy distribution to construct a simulation study in his thesis concerning the analysis of animal movements.

Recently, the wrapped Cauchy distribution appeared in the field of cosmology when

Szepietowski, et al. (2013) discussed the relation between the phase of the true convergence and count convergence and showed that the phase difference between these fields follows a wrapped Cauchy distribution.

2.5 Summary

The problem of outliers in circular data from distributions other than the von Mises has not received enough attention, and we can conclude that there is no published literature that discusses this problem for data from the wrapped Cauchy distribution.

The symmetry and other properties of the wrapped Cauchy distribution make it an alternative choice for modeling circular data, which may be subject to the existence of outliers.

Therefore, the following chapter extends four discordancy tests to the wrapped Cauchy distribution in order to obtain their cut-off points and examine their performance.


CHAPTER THREE

NUMERICAL STUDY

3.1 Introduction

In this chapter, we extend four tests of discordancy to the wrapped Cauchy distribution. Section 3.2 describes the numerical procedures used to obtain the cut-off points: the 90th, 95th and 99th percentiles of the null distributions of the test statistics are obtained from a simulation study, by generating random circular samples from $WC(0,\rho)$ with different sizes and different values of the concentration parameter. Section 3.3 examines the power of performance of each statistic, namely the C, D, M and A statistics. Section 3.4 concludes with the main findings of the numerical study.

The researcher wrote all the required subroutines in R (see Crawley, 2012), as given in Appendices A.2, A.3 and A.4.

3.2 The cut-off points for discordancy tests

In this section, we compute the cut-off points for the four test statistics, namely C, D, M and A, by designing a simulation study to obtain the percentage points of the null (outlier-free) distributions of the statistics in random circular samples generated from the wrapped Cauchy distribution with mean direction zero and concentration parameter $\rho$, $WC(0,\rho)$.

We consider 12 values of the concentration parameter in the range 0.1 to 0.999 and 20 different sample sizes from 5 to 150, as listed in Appendix A.1. For each generated data set, the values of the four considered statistics C, D, M and A are calculated using formulas (2.2)-(2.5), respectively.

For each combination of the sample size $n$ and concentration parameter $\rho$, the process is repeated 3000 times to ensure the convergence of the desired percentiles (cut-off points). The obtained statistics are sorted in ascending order, and the 10%, 5% and 1% upper percentiles when no outlier is present in the sample are obtained and listed in Appendix A.1.
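A condensed sketch of this procedure for the A statistic (our own illustration; the thesis' full subroutine is given in Appendix A.3):

```r
# Monte Carlo cut-off point for the A statistic at the 5% level under WC(0, rho);
# the other statistics are handled in the same way.
A.stat <- function(theta) {
  n <- length(theta)
  max(sapply(seq_len(n), function(r) sum(1 - cos(theta - theta[r]))) / (2 * (n - 1)))
}
rwc <- function(n, mu = 0, rho = 0.9) rcauchy(n, mu, -log(rho)) %% (2 * pi)

cutoff.A <- function(n, rho, reps = 3000, level = 0.95) {
  quantile(replicate(reps, A.stat(rwc(n, 0, rho))), probs = level)
}

set.seed(4)
cutoff.A(n = 30, rho = 0.9)    # 95% upper percentile of the null distribution
```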

We summarize the main features of the cut-off points in the following figures by adopting the 5% percentile level; the results can be generalized to the other percentile levels, 1% and 10%, since there is an inverse relationship between the cut-off points and the percentile level; in other words, the cut-off points increase as the probability of Type I error decreases. The cut-off points of the C, D, M and A statistics show that:

Figure 3.1: The cut-off points for the C statistic for different values of the concentration parameter ($\rho$ = 0.2, 0.7, 0.95)


i. With respect to the C statistic, Figure 3.1 illustrates that, for a given sample size $n$, the cut-off point is a decreasing function of the concentration parameter $\rho$. Furthermore, Figure 3.1 shows that, for any concentration parameter $\rho$, there is an inverse relationship between the cut-off points and the sample size. For small sample sizes ($n \le 10$) or for low values of the concentration parameter ($\rho \le 0.2$), the cut-off points exceed the value of 1, because the effect of a single observation on the estimated mean resultant length $\bar{R}_{(i)}$ is amplified.

Figure 3.2: The cut-off points for the D statistic for different values of the sample size ($n$ = 20, 50, 70, 150)

ii. The cut-off points of the D statistic fluctuate only slightly for $\rho \le 0.7$, as given in Appendix A.1.2. Thus, we report the mean of the cut-off points for $\rho \le 0.7$ together with the standard deviation for each case considered, where the values of the standard deviations are found to be less than 0.054, as listed in Appendix A.1.2. Figure 3.2 presents the behavior of the D statistic for different values of the sample size and shows that the cut-off points of the D statistic are inversely related to both the sample size $n$ and the concentration parameter $\rho$ for $\rho > 0.7$, while they are almost independent of the sample size $n$ and the concentration parameter for $\rho \le 0.7$.

Figure 3.3: The cut-off points for the M statistic for different values of the sample size ($n$ = 10, 30, 70, 150)

iii. Figure 3.3 shows that, for the M statistic and any sample size, increasing the concentration parameter $\rho$ increases the cut-off points, while for any concentration parameter $\rho$, increasing the sample size $n$ decreases the cut-off points.

iv. Unlike the previous three statistics, Figure 3.4(a) shows that the cut-off points of the A statistic keep increasing as the concentration parameter increases up to $\rho = 0.95$, and then rapidly approach zero for $\rho > 0.95$. This behavior may be related to the unimodality of the wrapped Cauchy distribution, where the sum of circular distances from any observation to the other observations is almost zero when the concentration parameter is very large. Furthermore, Figure 3.4(b) illustrates the behavior of the cut-off points of the A statistic with respect to the sample size, where the effect of increasing the sample size depends on the concentration parameter as follows: (1) for small concentration parameters ($\rho < 0.6$) the cut-off points decrease gradually; (2) for $\rho \in [0.6, 0.7]$ the cut-off points are almost constant; (3) for high concentration parameters ($\rho > 0.7$) the cut-off points increase gradually.

Figure 3.4: The cut-off points for the A statistic for different cases: (a) against $\rho$ for $n$ = 10, 30, 50; (b) against $n$ for $\rho$ = 0.2, 0.7, 0.95

The following section investigates the performance of the four statistics in identifying possible outliers.

3.3 Power of performance

The power of performance of discordancy tests can be assessed via several measures.

David (1970, p.185) and Barnett and Lewis (1984, pp. 64–68) stated that a good test of discordancy should have: (1) a high power function; (2) a high probability of identifying a contaminating value as an outlier when it is in fact an extreme value, where an extreme value is defined as a point with the maximum circular deviation; and (3) a low probability of wrongly identifying a good observation as discordant.


Let P1 1  be the power function where  is the probability of Type-II error; P3 denotes the probability that the contaminant point is an extreme point and is identified as discordant; while P5 indicates the probability that the contaminant point is identified as discordant given that it is an extreme point.

For a good discordance test, one expect the test to satisfy the following (i) high P1, (ii) high P5, and (iii) low P1 P3.

To study the performance of the four numerical tests, we use 3000 samples based on different sizes n = 10, 30, 50, 70, 100 and 150, and concentration parameters ρ = 0.2, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95, 0.975 and 0.999.

The samples are generated in such a way that (n - 1) of the observations come from WC(0, ρ) and the remaining observation comes from WC(λπ, ρ), where λ is the degree of contamination and 0 ≤ λ ≤ 1.

When n = 10, the contaminated point is placed at the fifth ordered position in the sample, whereas for the others, the contaminated point is set at the fifteenth ordered position in the sample.
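A minimal sketch of this generation step, following the simulation routine in Appendix A.4 (the values of n, rho, lambda and the contaminant position d below are purely illustrative):

library(circular)
# One contaminated replication: (n-1) observations from WC(0, rho)
# and a single contaminant drawn from WC(lambda*pi, rho) placed at position d.
n <- 30; rho <- 0.9; lambda <- 0.8; d <- 15
theta <- rwrappedcauchy(n, mu = circular(0), rho = rho,
                        control.circular = list(units = "radians"))
theta[d] <- rwrappedcauchy(1, mu = circular(lambda * pi), rho = rho,
                           control.circular = list(units = "radians"))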

The C, D, M and A statistics are then calculated for each random sample using Equations (2.2)-(2.5), respectively. Furthermore, the performance measures P1, P3 and P5 are obtained as described above.
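For a single statistic, the three measures can then be estimated as simple proportions over the simulated samples. The following minimal sketch mirrors the logic of the routine in Appendix A.4; the objects res, cut, d and extreme are assumptions for illustration only (res is a simu-by-2 matrix holding the statistic value and the flagged position for each replication, cut is the tabulated cut-off point, d is the contaminated position, and extreme indicates whether the contaminant had the maximum circular deviation from the mean direction in that replication):

detected <- res[, 1] >= cut & res[, 2] == d   # contaminant flagged as discordant
P1 <- mean(detected)                          # power function, 1 - P(Type II error)
P3 <- mean(detected & extreme)                # flagged and also the extreme point
P5 <- sum(detected & extreme) / sum(extreme)  # flagged, given it is the extreme point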

The comprehensive values of the performance measures are listed in Appendix A.5; part of the results of the performance study is displayed in Figures 3.5-3.7.

Figure 3.5 displays the performance measure P3 against the degree of contamination λ, using the 5% percentile, for all considered statistics C, D, M and A when n = 50. It shows that the performance of all statistics increases as the concentration parameter ρ or the contamination level λ increases. In general, the C statistic outperforms the other statistics.

Figure 3.5: Power of performance (P3) for all statistics when n = 50: (a) C statistic, (b) D statistic, (c) M statistic, (d) A statistic

Figure 3.6 presents the performance measure P3 against the degree of contamination, using the 5% percentile, for all considered statistics and for ρ = 0.9. The plots in Figure 3.6 reflect an inverse relationship between the power of performance and the sample size for the D, M and A statistics, while for the C statistic there is a direct relationship between the power of performance and the sample size n.

Figure 3.6: Power of performance (P3) for all statistics when ρ = 0.9: (a) C statistic, (b) D statistic, (c) M statistic, (d) A statistic

For the purpose of comparing the performances of the four considered statistics, Figure 3.7 presents four selected graphs of the performance measure P1 against λ for the C, D, M and A statistics. The following results are observed:

i. Figures 3.7(a) and (b) show that for a moderate concentration parameter ρ = 0.6 and a small sample size n = 10 or a large sample size n = 150, the values of P1 are low (less than 0.1) and almost similar for all statistics C, M, D and A at any contamination level λ. The weak performance measure P1 for small concentration parameters (ρ ≤ 0.6) is attributed to the heavy tails of the wrapped Cauchy distribution; similar trends are observed for P3 and P5.

ii. Figures 3.7(c) and (d) show that for a large concentration parameter ρ = 0.95 and a small sample size n = 10 or a moderate sample size n = 50, the values of P1 for the C and A statistics are better than those of the other statistics for large contamination levels (λ > 0.6), while the M statistic is better for small contamination levels (λ ≤ 0.6). Furthermore, we observe that the C and A statistics are almost similar for the small sample size (n = 10) as shown in Figure 3.7(c), but for the moderate sample size (n = 50) the C statistic performs better than the A statistic, as shown in Figure 3.7(d).

Figure 3.7: Relative performance (P1) of the discordancy tests: (a) n = 10, ρ = 0.6; (b) n = 150, ρ = 0.6; (c) n = 10, ρ = 0.95; (d) n = 50, ρ = 0.95

iii. The difference between P1 and P3 is generally very close to 0 in all cases, except for a few cases such as those illustrated in Figures 3.8(e) and (f).

Figure 3.8: The difference between P1 and P3 in some cases: (e) n = 30, ρ = 0.6; (f) n = 150, ρ = 0.975

3.4 Summary

Based on the results obtained from this simulation study, we may conclude that the behaviour of the cut-off points of the considered statistics depends highly on their formulations. Furthermore, for small sample sizes (n ≤ 10) or very low concentration parameters (ρ ≤ 0.2) the calculated cut-off points are meaningless.

The power of performance is an increasing function of the level of contamination and of the concentration parameter. It is a decreasing function of the sample size for the D, M and A statistics, whereas for the C statistic the power of performance increases with the sample size.

Based on the measures of performance, both the C and A statistics perform comparatively better than the other two statistics.

These findings agree with the conclusions of Abuzaid et al. (2009) for the von Mises distribution and Rambli et al. (2012) for the wrapped normal distribution. Thus, it is recommended to apply different tests of discordancy before rejecting any suspected outlier.

CHAPTER FOUR

APPLICATION TO REAL DATA SETS

4.1 Introduction

This chapter introduces and analyzes two real data sets to illustrate the steps of applying the four considered discordancy tests. The first is a univariate circular sample that several authors have tried to fit to a circular distribution. The second is a bivariate data set that was modeled by the circular-circular regression model of Kato et al. (2008); the resulting circular errors are analyzed and tested via the tests of discordancy.

4.2 Ants' Direction Data

The first data set considered in this chapter is part of the data collected in an experiment originally conducted by Jander (1957), in which the directions chosen by ants toward a black target when released in a round arena were recorded. Fisher (1993) randomly selected 100 of these values.

These data have attracted the attention of several authors seeking the best fitting model. Fisher (1993) tested whether the data can be modeled as a random sample from a von Mises distribution and concluded that the von Mises distribution is not a suitable model. Based on large-sample theory and hypothesis testing, Sengupta and Pal (2001) found that the wrapped normal distribution does not fit the data very well. Recently, Ravindran and Ghosh (2012) concluded that the wrapped Cauchy distribution is the best distribution for these data, based on a Bayesian analysis of wrapped distributions.

4.2.1 Data description

The ants' direction (in degrees) data consist of 100 observations as given below:

330 290 60 200 200 180 280 220 190 180 180 160 280 180 170 190 180 140 150 150 160 200 190 250 180 30 200 180 200 350 200 180 120 200 210 130 30 210 200 230 180 160 210 190 180 230 50 150 210 180 190 210 220 200 60 260 110 180 220 170 10 220 180 210 170 90 160 180 170 200 160 180 120 150 300 190 220 160 70 190 110 270 180 200 180 140 360 150 160 170 140 40 300 80 210 200 170 200 210 190

Figure 4.1 presents the data on the circle, where the distribution of the observations suggests uni-modality.

Figure 4.1: Circular plot of the ants' direction data

Table 4.1 gives some descriptive statistics of the ants' direction data. The estimates of the location parameters, namely the circular mean and median, are 183° and 180°, respectively; these are close to each other, which reflects the symmetry of the data distribution. The four measures of dispersion indicate that the data are moderately concentrated, with estimates of the mean resultant length and the concentration parameter of 0.61 and 0.65, respectively.

Table 4.1: Descriptive statistics of the ants' direction data

Descriptive statistic            Value
Mean direction                   183°
Median                           180°
Mean resultant length, R         0.61
Concentration parameter, ρ       0.65
Variance, V                      0.39
Standard deviation, v            56.96
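These summaries can be reproduced with the circular package. A minimal sketch, assuming ants_deg is a numeric vector holding the 100 directions in degrees; note that the concentration parameter of the wrapped Cauchy distribution is estimated separately, for example by maximum likelihood as in Kent and Tyler (1988):

library(circular)
theta  <- circular(ants_deg, units = "degrees")
mu.hat <- mean(theta)              # circular mean direction (about 183 degrees)
R.bar  <- rho.circular(theta)      # mean resultant length (about 0.61)
V      <- 1 - R.bar                # circular variance
v      <- sqrt(-2 * log(R.bar))    # circular standard deviation, in radians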

4.2.2 Identification of outliers

In order to ensure the accuracy of the parameter estimates, in this subsection we check for any possible outliers. The four tests of discordancy, namely the C, D, M and A statistics, are conducted. Based on the estimated concentration parameter ρ = 0.65 and the sample size, each statistic is used separately to test the following hypotheses:

Null hypothesis, H0: the observation is not an outlier.

Alternative hypothesis, H1: the observation is an outlier.

If the actual value of the statistic is larger than the cut-off point, then we reject the null hypothesis and the corresponding observation is identified as an outlier.
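This step can be sketched with the routines listed in Appendix A.2 (a minimal sketch; ants_deg is assumed to hold the 100 directions in degrees, and the cut-off points themselves are read from Appendix A.1 for n = 100, ρ = 0.65 and the 5% level):

theta <- circular(ants_deg * pi / 180)   # convert to radians
ctest(theta)   # returns c(largest C statistic, position of the flagged observation)
dtest(theta)   # same structure for the D, M and A statistics
mtest(theta)
atest(theta)
# If, for example, ctest(theta)[1] exceeds the tabulated cut-off point,
# the flagged observation is declared an outlier; otherwise H0 is retained.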

Table 4.2 gives the actual value of each test statistic and the corresponding cut-off point for n = 100, ρ = 0.65 and α = 0.05, together with the decision. The results in Table 4.2 show that the C, M and A tests identified the observation with value 360° as a candidate outlier, while the D test identified the observation with value 330°. The contrast between the result of the D test and those of the other tests is due to their different natures: the observation 330° lies on the longest arc, between 300° and 350°, while the observation 360° is the most extreme value from the preferred direction.

None of the test values exceeded the associated cut-off points; thus we may conclude that the ants' direction data are free of outliers.

Table 4.2: Results of discordancy tests on ants' direction data

Test    Observation    Actual value    Cut-off point    Decision
C       360°           0.026           0.028            Not an outlier
D       330°           0.667           0.92             Not an outlier
M       360°           0.051           0.073            Not an outlier
A       360°           0.812           0.868            Not an outlier

4.3 Wind Data

The wind data consist of the wind directions at 6 a.m. and at 12 noon, measured on each of 21 consecutive days at a weather station in Milwaukee, and were first analyzed by Johnson and Wehrly (1977, Table 2). The data (in degrees) are given below:

Day 1 2 3 4 5 6 7 6 a.m. 356 97.2 211 232 343 292 157 12 noon 119 162 221 259 270 28.8 97.2

Day 8 9 10 11 12 13 14 6 a.m. 302 335 302 324 84.6 324 340 12 noon 292 39.6 313 94.2 45 47 108

Day 15 16 17 18 19 20 21 6 a.m. 157 238 254 146 232 122 329 12 noon 221 270 119 248 270 45 23.4

Kato et al. (2008) proposed a circular-circular regression model whose regression curve is expressed in the form of a Möbius circle transformation. For a circular random covariate X and a circular response variable Y, represented as points x and y on the unit circle in the complex plane, they proposed the regression curve

y = β0 (x + β1) / (1 + β̄1 x) ε,   (4.1)

where β0 and β1 are complex parameters with β0 ∈ Ω (the unit circle) and β1 ∈ ℂ, β̄1 denotes the complex conjugate of β1, and ε follows a wrapped Cauchy distribution.

As an example, Kato et al. (2008) used their model to regress the direction at 12 noon on that at 6 a.m. The maximum likelihood estimates of the parameters are β0 = e^(1.27i) and β1 = 0.53 e^(2.59i). Kato et al. (2008) presented the circular distance for each predicted value as given in Figure 4.2. Five suspected outliers are numbered in the figure without using any formal test, and they concluded that "Apart from five outliers, model (4.1) seems to provide a satisfactory fit to the data".


Figure 4.2: Circular distance between the observed and predicted values.
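Under model (4.1) with these estimates, fitted directions and circular errors can be sketched as follows. This is a minimal sketch under the assumption that the circular error is taken as the observed direction minus the fitted one, reduced modulo 2π, which need not coincide exactly with the values tabulated in the next subsection; x.deg and y.deg are assumed to hold the 6 a.m. and 12 noon directions in degrees:

x <- exp(1i * x.deg * pi / 180)               # covariate on the unit circle
y <- exp(1i * y.deg * pi / 180)               # response on the unit circle
b0 <- exp(1.27i)                              # ML estimates reported by Kato et al. (2008)
b1 <- 0.53 * exp(2.59i)
y.hat <- b0 * (x + b1) / (1 + Conj(b1) * x)   # fitted points under the Mobius curve (4.1)
err <- (Arg(y) - Arg(y.hat)) %% (2 * pi)      # circular error in [0, 2*pi)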

In the following subsection, the circular error is obtained and described in order to identify any possible outliers.

4.3.1 Data description

The circular errors obtained from the circular regression model consist of 21 observations, measured in radians and given below:

Day 1 2 3 4 5 6 7 Error 0.131 0.031 6.205 0.056 3.442 0.031 4.412

Day 8 9 10 11 12 13 14 Error 6.342 6.162 5.99 0.545 3.912 0.199 0.982

Day 15 16 17 18 19 20 21 Error 0.884 0.081 3.742 0.545 0.248 4.292 6.236


Figure 4.3: Circular plot of circular error of the wind data

Figure 4.3 shows the circular plot of the circular errors of the wind data, and some summary statistics are listed in Table 4.3. The circular mean of the circular errors is very close to zero, and the estimates of the mean resultant length and concentration parameter are 0.552 and 0.773, respectively.

Table 4.3: Descriptive statistics of circular error of the wind data

Descriptive statistic            Value
Mean direction                   -0.04
Median                           0.031
Mean resultant length, R         0.552
Concentration parameter, ρ       0.773
Variance, V                      0.448
Standard deviation, v            1.09


4.3.2 Detection of outliers

Kato et al. (2008) considered observations number 5, 7, 12, 17 and 20 as outliers based on the plot of circular distances in Figure 4.2, the circular distance being the minimum arc length between the observed and predicted values.

In this subsection, we implement the four discordancy tests C, D, M and A to test whether the five suspected observations are outliers or not.

Table 4.4 presents the actual values of the discordancy test statistics, their corresponding cut-off points and the decisions, for n = 21, ρ = 0.77 and α = 0.05.

The results show that, in the first iteration, the C statistic was able to detect the fifth observation, with value 3.44, as an outlier, while the other tests failed to identify any point as an outlier.

In order to detect any further outliers, the fifth observation is excluded and the descriptive statistics are re-estimated: the circular mean of the circular errors becomes -0.015, and the estimates of the mean resultant length and concentration parameter become 0.62 and 0.8, respectively. The four tests of discordancy are then applied again, as given in the second iteration in Table 4.4, for n = 20 and ρ = 0.8 at the 0.05 level of significance. All four tests agreed in identifying observation number 17 of the full data as a suspected outlying observation, but none of them identified it as an outlier, since the test values are less than the corresponding cut-off points.
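A minimal sketch of this iterative step, assuming err holds the 21 circular errors in radians and cut.C is the tabulated cut-off point of the C statistic for the current sample size and concentration (Appendix A.1.1):

theta <- circular(err)
res <- ctest(theta)              # first iteration: c(statistic, flagged position)
if (res[1] >= cut.C) {
  theta2 <- theta[-res[2]]       # exclude the detected observation (day 5)
  ctest(theta2); dtest(theta2); mtest(theta2); atest(theta2)   # second iteration
}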


Table 4.4: Results of discordancy tests on wind data

Iteration    Test    Observation    Actual value    Cut-off point    Decision
I            C       5              0.14            0.13             An outlier
             D       5              0.12            0.90             Not an outlier
             M       5              0.21            0.57             Not an outlier
             A       5              0.80            0.93             Not an outlier
II           C       17             0.12            0.13             Not an outlier
             D       17             0.06            0.89             Not an outlier
             M       17             0.24            0.65             Not an outlier
             A       17             0.80            0.94             Not an outlier

4.4 Discussion

Two real data sets were described and analyzed in this chapter to illustrate the steps of applying the discordancy tests in real applications.

In the ants' direction data, none of the discordancy tests identified any observation as an outlier. On the other hand, in the wind direction data, only the C statistic identified one observation as an outlier in the first iteration, while the other tests failed to do so. In the second iteration the fifth observation was excluded and all the tests were applied again, but none of them identified any further observation as an outlier.

Therefore, it is recommended to use different tests of discordancy to detect outliers in circular data.

CHAPTER FIVE

CONCLUSIONS

5.1 Summary

In this study we have reviewed one of the most important distributions for circular random variables, namely the wrapped Cauchy distribution, and we have highlighted the effects of outliers on the analysis of circular data. Four tests of discordancy, based on the C, D, M and A statistics, were extended to the wrapped Cauchy distribution; their cut-off points and power of performance were investigated via an extensive simulation study. Moreover, the tests were applied to the ants' direction data set and the wind direction data set.

5.2 Main conclusions

With the completion of this thesis, three main aspects were discussed:

• Extending the C, D, M and A tests of discordancy to the wrapped Cauchy distribution and obtaining their cut-off points.

• Investigating the performance of the discordancy tests for the wrapped Cauchy distribution and comparing the four tests.

• Illustrating the tests of discordancy on two real data sets following the wrapped Cauchy distribution.

Firstly, the cut-off points of the tests based on the C, D, M and A statistics for the wrapped Cauchy distribution were obtained via a simulation study. We concluded that for small sample sizes (n ≤ 10) or very low concentration parameters (ρ ≤ 0.2) the calculated cut-off points are meaningless. Moreover, there is an inverse relationship between the cut-off points and the level of percentiles.

Secondly, the power of performance of the discordancy tests for the wrapped Cauchy distribution was investigated. We concluded that the power of performance is an increasing function of the level of contamination and of the concentration parameter; it is a decreasing function of the sample size for the D, M and A statistics, whereas for the C statistic it increases with the sample size. Comparing the considered tests, we also found that the C and A statistics perform comparatively better than the other two statistics.

Finally, the considered tests were illustrated on two sets of real data to identify the existence of outliers. For the ants' direction data, the results show that the four considered tests do not identify any outlying observation as an outlier; for the wind data, the C statistic was able to detect the fifth observation as an outlier, while the other tests failed to identify any point as an outlier. Therefore, it is recommended to use different tests of discordancy to detect outliers in circular data.

5.3 Further research

There are many areas where the work in this thesis can be expanded upon. Some suggestions are given as follows:

• To extend the procedures of outlier detection to other circular distributions.

• To develop other effective tests of discordancy for small samples or samples with a low concentration parameter.

• To study the problem of outliers in circular regression models and functional relationship models with errors from the wrapped Cauchy distribution.


REFERENCES

Abuzaid, A. H. (2012). Analysis of mother's day celebration via circular statistics. The Philippine statistician. 61 (2) 39-52. Abuzaid, A. H. (2013). On the influential points in the functional circular relationship models with an application on wind data. Pakistan Journal of Statistics and Operation Research, 9 (3), 333-342. Abuzaid, A. H., Hussin, A. G. and Mohamed, I. B. (2008). Identifying single outlier in linear circular regression model based on circular distance. Journal of Applied Probability and Statistics. 3 (1), 107-117. Abuzaid, A. H., Hussin, A. G. and Mohamed, I. B. (2013). Detection of outliers in simple circular regression models using the mean circular error statistic. Journal of Statistical Computation and Simulation. 83 (2), 269-277. Abuzaid, A. H., Mohamed, I. B. and Hussin, A. G. (2012). Boxplot for Circular Variables. Computational Statistics. 27 (3), 381-392. Abuzaid, A. H., Mohamed, I. B. and Hussin, A. G. (2014). Procedures for outlier detection in circular time series models. Environmental and Ecological Statisics, 1-17. Abuzaid, A. H., Mohamed, I. B., Hussin, A. G. and Rambli, A. (2011). COVRATIO statistic for simple circular regression model. Chiang Mai J. Sci. 38 (3), 321-330. Abuzaid, A. H., Rambli, A. and Hussin, A.G. (2012). Statistics for a New Test of Discordance in Circular Data. Communications in Statistics - Simulation and Computation. 41 (10), 1882-1890. Abuzaid, A. M., Mohamed, I. B. and Hussin, A. G. (2009). A new test of discordancy in circular data. Communications in statistics-Simulation and Computaion. 38(4), 682-691. Bagchi, P. and Guttman, I. (1990). Spuriosity and outliers in directional data. Journal of Applied Statistics, 17, 341-350. Barnett, V. and Lewis, T. (1984). Outliers in Statistical Data. 2nd ed., John Wiley & Sons, Chichesters. Bartumeus, F., da Luz, M. G E., Viswanathan, G. M. and Catalan, J. (2005). Animal search strategies: a quantitative random-walk analysis. Ecology 86, 3078–3087. Batschelet, E. (1981). Circular Statistics in Biology, Academic Press, London. Bernoulli, D. (1777). The most probable choice between several discrepant observations and the formation therefrom of the most likely induction, in Allen, C. G. (1961). Biometika, 48 (1), 3-18. Best, D. J. and Fisher, N. I. (1981). The bias of the maximum likelihood estimators of the von Mises-Fisher concentration parameters. Communication in Statistics – Simulations and Computations. B10 (5), 394-502.


Blake, A. and Marinos, C. (1990). Shape from texture estimation, isotropy and moments. Artificial Intelligence, 45, 323-380. Chambers, J.M., Cleveland, W.S., Kleiner, B. and Tukey, P.A. (1983). Graphical Methods for Data Analysis. Wadsworth, Boston, MA. Chang-Chien, S., Hung, W. and Yang, M. (2012). On mean shift-based clustering for circular data. Springer, 16 (6),1043-1060. Collett, D. (1980). Outliers in circular data. Applied Statistics, 29 (1), 50-57.

Crawley, M. J. (2012).The R Book, 2nd Edition. Wiley, London.

Dan, E.D. and ijeoma, O.A. (2000). Statistical analysis/methods of detecting outliers in a bivariate data in a regression analysis model. International Journal of Education and Research. 1(4). David, H. A. (1970). Order Statistics. Wiley, New York and London. Dixon, W. J. (1950). Analysis of extreme values. The Annals of Mathematical Statistics, 21 (4), 488-506. Dixon, W. J. (1951). Ratios involving extreme values. The Annals of Mathematical Statistics, 22 (1), 68-78. Ferguson, D., Landreth, H. and Mckeown, J. (1967). Sun compass orientation of the northern cricket frog, Acris crepitans. Animal Behaviour, 15 (1), 45-53. Fisher, N. I. (1993). Statistical Analysis of Circular Data. Cambridge University Press, London. Fisher, N. I. and Powell, C. McA. (1989). Statistical analysis of two-dimensional palaeocurrent data: Methods and examples. Aust. J. Earth Sci. 36, 91-107. Gatto, R. and Jammalamadaka, S. R. (2007). The generalized von Mises distribution. Statistical Methodology, 4, 341-353. Gill, J. and Hangartner, D. (2010). Circular data in political science and how to handle it. Political Analysis, 18 (3), 316-336. Goldberg, K. M. and Iglewicz, B. (1992). Bivariate extensions of the boxplot. Technometrics, 34, 307-320. Gordon, A. D., Jupp, P. E. and Byrne, R. W. (1989). Construction and assessment of mental maps. British Journal of Mathematical and Statistical Psychology, 42,169-182. Gurarie, E. (2008). Models and analysis of animal movements: From individual tracks to mass dispersal. University of Washington. Seattle, WA. From http://wiki.cbr.washington.edu/qerm/sites/qerm/images/9/9f/GurarieDissertationFinalDraf t.pdf Hawkins, D. M. (1980). Identification of Outliers. Chapman and Hall, London – New York, 29(2),198-1987.


He, Z., Deng, S. and Xu, X. (2002), Outlier Detection Integrating Semantic Knowledge. Advances in Web-Age Information Management, Third International Conference, WAIM 2002 Beijing, China, August 11–13, 2002 Proceedings, pp 126-131. Hussin, A. G., Fieller, N. R. J. and Stillman, E. C. (2004). Linear regression for circular variables with application to directional data. Journal of Applied Science & Technology, 9, (1 & 2), 1-6. Hussin, A.G. and Abuzaid, A. (2012.) Detection of outliers in functional relationship model for circular variables via complex form. Pak. J. Statist, 28 (2), 205-216. Hussin, A.G., Abuzaid, A., Zulkifili, F. and Mohamed, I. (2010). Asymptotic covariance ad detection of influential observations in a linear functional relationship model for circular data with application to the measurements of wind directions. Science Asia, 36, 249-253. Ingelfinger, J. A., Mosteller, F., Thibodeau, L. A., and Ware, J. H. (1983). Biostatistics in Clinical Medicine. Macmillan, New York. Jammalamadaka, S. R., Bhadra, N., Chaturvedi, D., Kutty, T. K., Majumdar, P. P. and Poduval, G. (1986). Functional assessment of knee and ankle during level walking. In Matusita, K., editor. Data Analysis in Life Science, 21-54. Indian Statistical Institute, Calcutta, India. Jander, R. (1957). Die optische Richtangsorientierung der roten Waldameise (Formica rufa. L.). Z. vergl. Physiologie, 40, 162-238. Jarrell, M.G. (1994). A Comparison of two procedures, the Mabalanobis Distiance and the Andrews-Pregibon Statistics, for identifying Multivariate Outliers. Researches in the Schools, 1, 49-58. Johnson, R. A. and Wehrly, T. E. (1977). Measures and models for angular correlation and angular- linear correlation. Journal of the Royal Statistical Society, Series B, 39, 222-229. Jupp, P. E. and Mardia, K. V. (1989). A unified view of the theory of directional statistics, 1975-1988. Internat. Statist. Rev., 57, 261-294. Jupp., P. E. (1995). Some applications of directional statistics to astronomy. In E. M. Tiit, T. Kollo & H. Niemi (eds), New Trends in Probability and Statistics. Vol.3. Multivariate Statistics and Matrices in Statistacs, 123-133, VSP, Utrecht. Kato, S. and Jones, M. C. (2013). An extended family of circular distributions related to wrapped Cauchy distributions via Brownian motion. Bernoulli, 19 (1), 154-171. Kato, S., Shimizu, K. and Shieh, G. (2008). A circular-circular regression model. Statistica Sinica, 18 (2), 633-643. Kent, J. T. and Tyler, D. E. (1988). Maximum Likelihood estimation for the wrapped Cauchy distribution. J. Appl. Statist., 15, 247-254. Lenth, R. V. (1981). On finding the source of a signal. Technometrics, 23 (2), 149-154. Levy, P. (1939). L'addition des variables aleatoires definies sur une circonference. Bull. Soc. Math. France, 67, 1-41.


Lopuhaa, H. P. (1999). A symptotics of reweighted estimators of multivariate location and scatter Ann. Statistics, 27, 65-1638. Lund, U. (1999). Least circular distance regression for directional data. Journal of Applied Statistics, 26 (6), 723-733. Mardia, K. V. (1972). Statistics of Directional Data. Academic Press, London. Mardia, K. V. (1975). Statistics of directional data. Journal of the Royal Statistical Society, Series B, 37, 349-393. Mardia, K. V. and Jupp, P. E. (2000). Directional Statistics. John Wiley & Sons, London. Mardia, K. V., Kent, J. T., Goodall, C. R. & Little, J. A. (1996). Kriging and splines with derivative information. Biometrika, 83, 207-221. McCullagh, P. (1996). Mobius transformation and Cauchy parameter estimation. Ann. Statist., 24, 787-808. Moore, D. and McCabe, G. (1999). Introduction to the Practice of Statistics, 3rd ed. W. H. Freeman and Company. New York. Murdoch, J. and Barnes, J. A. (1998). Statistical Tables for Students of Science, Psychology, Engineering, Business, Management and Finance. 4th ed., Palgrave Macmillan. Pearson, E. S. and Hartley, H. O. (1966). Biometrika Tables for Statisticians.Vol.1, 3rd ed., Cambridge University Press, London. Peirce, B. (1852). Criterion for the rejection of doubtful observations. Astronomical Journal, 2, 161-163. Polya, G. (1919). Zur Statktik der spharischen Verteilung der Fixsterne. Ast. Nachr., 208, 175-180 Rambli, A., Mohamed, I., Hussin, A.G. and Ibrahim, S. (2012). On discordance Test for the Wrapped Normal Data, Sains Malaysiana, 41 (6), 769-778. Ravindran, P. and Ghosh, S. K. (2012). Bayesian Analysis of Circular Data Using Wrapped Distributions. Journal of Statistical Theory and Practice, 5 (4), 547-561. Rayleigh, L. (1919). On the problem of random vibration, and of random flights of in one, two, or three dimensions. Phil. Mag., 37 (6), 321-347. Rivest, L.-P. (1997). A decentred predictor for circular –circular regression. Biometrika, 84 (3), 717-726. Ross, H. E. Crickmar, S. D., Sills, N. V. & Owen, E. P. (1969). Orientation to the vertical in free divers. Aerospace Med., 40, 728-732. Rousseeuw, J. P., Van Zomeren, B. C. (1990). Unmasking multivariate outliers and leverage points. Journal of the American Statistical Association, 85 (411), 633-651. Rousseeuw, P. and Leroy, A. (1987). Robust Regression and Outlier Detection. Journal of Educational Statistics, 13 (4), 358-364.


Schmidt-Koenig, K. (1965). Current problems in bird orientation. In D. Lehrman et al. (eds) Advances in the Study of Behaviour, 217-278, Academic Press, New York. Sengupta, T.K. Pal and D. Chakraborty (2001). Interpretation of inequality constraints involving interval coefficients and a solution to interval linear programming, Fuzzy Sets and System, 119 (1), 129-138. Siew, H. Y., Kato, S. and Shimizu, K. (2008). The generalized t-distribution on the circle. Japanese Journal of Applied Statistics, 37 (1), 1-16. Sim, C. H., Gan, F. F. and Chang, T. C. (2005). Outlier labeling with boxplot procedures. Journal of the American Statistical Association, 100 (470), 642-652. Szepietowski, R. M., Bacon, D. J., Dietrich, J. P., Busha, M., Wechsler, R., and Melchior, P. (2013). Density mapping with weak lensing and phase information. Oxford Journal, Advance Access published April 2, 2014, doi: 10.1093/mnras/stu380. Tebbs, J. M. (2006). Introduction to descriptive statistics. Department of Statistics, the University of South Carolina. Tukey, J. W. (1977). Exploratory Data Analysis. Addison-Wesley, Reading, MA. Vo, A. and Oriantara, S. (2010). A study of relative phase in complex wavelet domain: Property, statistics and applications in texture image retrieval and segmentation. Image Communication, 25 (1), 28-46. Von Mises, R. (1918). Uber die “Ganzzahligkeit” der atomgewichte und vewandte fragen. Physikal. Z. , 19,490-500. Watson, G. S. (1970). Orientation statistics in the earth sciences. Bull. geol. Instn Univ. Uppsala N.S. 2 (9), 73-89. Watson, G. S. and Williams, E. J. (1956). On the construction of significance tests on the circle and the sphere. Biometrika, 43, 344-352. Wintner, A. (1947). On the shape of the angular case of Cauchy’s distribution curves. Ann. Math. Statist., 18, 589-5693. Wright, T. W. (1884). A Treatise on the Adjustment of Observations with Applications to geodetic work and other measures of precision. Van Nostrand, New York. Wu, J. W. and Lee, W. (2006). Computational algorithm of least absolute deviation method for determining number of outliers under normality. Applied Mathematics and Computation, 175, 609-617. Yackulic, C. B., Blake, S., Deem, S., Kock, M. and Uriarte, M. (2011). One size does not fit all: flexible models are required to understand animal movement across scales. Journal of Animal Ecology, 80 (5), 96-1088.


APPENDIX A.1 The Cut-off points for the tests of discordancy Appendix A.1.1: Cut-off points for the test based on the C statistic  n Level of percentile 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.975 0.999 10% 1.65 1.57 1.50 1.32 1.02 0.84 0.69 0.58 0.36 0.15 0.06 0.00 5 5% 2.21 2.14 2.06 1.74 1.29 1.08 0.84 0.67 0.57 0.36 0.20 0.00 1% 5.38 5.72 4.15 4.03 2.36 1.84 1.56 0.93 0.68 0.65 0.61 0.01 10% 1.02 0.93 0.74 0.58 0.43 0.36 0.31 0.26 0.23 0.17 0.08 0.00 10 5% 1.41 1.32 0.95 0.72 0.52 0.42 0.35 0.29 0.25 0.22 0.15 0.00 1% 2.98 2.85 2.05 1.56 1.01 0.62 0.46 0.36 0.28 0.25 0.24 0.03 10% 0.86 0.68 0.50 0.34 0.28 0.23 0.19 0.17 0.15 0.13 0.08 0.00 15 5% 1.19 0.94 0.69 0.49 0.35 0.25 0.21 0.18 0.16 0.15 0.13 0.00 1% 2.52 1.75 1.36 0.81 0.54 0.33 0.26 0.20 0.17 0.16 0.15 0.04 10% 0.68 0.51 0.37 0.25 0.20 0.17 0.14 0.12 0.11 0.10 0.07 0.00 20 5% 0.98 0.69 0.50 0.33 0.23 0.18 0.15 0.13 0.12 0.11 0.10 0.00 1% 1.91 1.59 0.95 0.49 0.36 0.24 0.17 0.15 0.12 0.12 0.11 0.04 10% 0.59 0.46 0.27 0.21 0.15 0.13 0.11 0.10 0.09 0.08 0.07 0.00 25 5% 0.82 0.61 0.33 0.24 0.17 0.14 0.12 0.10 0.09 0.09 0.08 0.00 1% 1.81 1.01 0.59 0.41 0.26 0.16 0.13 0.11 0.10 0.09 0.09 0.04 10% 0.52 0.36 0.23 0.16 0.13 0.11 0.09 0.08 0.07 0.07 0.06 0.00 30 5% 0.77 0.53 0.29 0.19 0.14 0.11 0.10 0.08 0.08 0.07 0.07 0.00 1% 1.87 0.85 0.53 0.27 0.17 0.13 0.11 0.09 0.08 0.07 0.07 0.03 10% 0.45 0.27 0.19 0.13 0.10 0.09 0.08 0.07 0.06 0.06 0.05 0.00 35 5% 0.62 0.35 0.22 0.15 0.12 0.09 0.08 0.07 0.06 0.06 0.06 0.00 1% 1.31 0.70 0.40 0.20 0.15 0.11 0.09 0.07 0.07 0.06 0.06 0.04 10% 0.42 0.26 0.16 0.12 0.09 0.08 0.07 0.06 0.05 0.05 0.05 0.00 40 5% 0.64 0.35 0.19 0.13 0.10 0.08 0.07 0.06 0.06 0.05 0.05 0.00 1% 1.24 0.84 0.31 0.18 0.12 0.09 0.08 0.06 0.06 0.06 0.05 0.04 10% 0.42 0.23 0.14 0.10 0.08 0.07 0.06 0.05 0.05 0.05 0.04 0.00 45 5% 0.60 0.31 0.17 0.11 0.09 0.07 0.06 0.05 0.05 0.05 0.05 0.00 1% 1.35 0.70 0.25 0.15 0.10 0.08 0.06 0.06 0.05 0.05 0.05 0.03 10% 0.35 0.20 0.12 0.09 0.07 0.06 0.05 0.05 0.04 0.04 0.04 0.00 50 5% 0.53 0.26 0.15 0.10 0.08 0.06 0.05 0.05 0.04 0.04 0.04 0.00 1% 1.10 0.58 0.19 0.12 0.09 0.07 0.06 0.05 0.05 0.04 0.04 0.02 10% 0.33 0.17 0.10 0.07 0.06 0.05 0.04 0.04 0.04 0.03 0.03 0.00 60 5% 0.42 0.21 0.12 0.08 0.06 0.05 0.04 0.04 0.04 0.04 0.03 0.01 1% 0.87 0.43 0.17 0.10 0.07 0.05 0.05 0.04 0.04 0.04 0.04 0.03 10% 0.29 0.14 0.08 0.06 0.05 0.04 0.04 0.03 0.03 0.03 0.03 0.00 70 5% 0.42 0.17 0.09 0.06 0.05 0.04 0.04 0.03 0.03 0.03 0.03 0.00 1% 0.92 0.31 0.12 0.07 0.06 0.05 0.04 0.04 0.03 0.03 0.03 0.02 10% 0.24 0.12 0.07 0.05 0.04 0.04 0.03 0.03 0.03 0.03 0.03 0.00 80 5% 0.35 0.14 0.08 0.06 0.05 0.04 0.03 0.03 0.03 0.03 0.03 0.00 1% 0.64 0.24 0.11 0.07 0.05 0.04 0.04 0.03 0.03 0.03 0.03 0.02 10% 0.22 0.10 0.06 0.05 0.04 0.03 0.03 0.03 0.02 0.02 0.02 0.00 90 5% 0.32 0.12 0.07 0.05 0.04 0.03 0.03 0.03 0.02 0.02 0.02 0.00 1% 0.80 0.19 0.09 0.06 0.04 0.04 0.03 0.03 0.02 0.02 0.02 0.02 10% 0.19 0.09 0.05 0.04 0.03 0.03 0.03 0.02 0.02 0.02 0.02 0.00 100 5% 0.25 0.11 0.06 0.04 0.03 0.03 0.03 0.02 0.02 0.02 0.02 0.00 1% 0.47 0.19 0.08 0.05 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 10% 0.18 0.08 0.05 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.00 110 5% 0.24 0.09 0.05 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.01 1% 0.51 0.16 0.07 0.05 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.02 10% 0.16 0.07 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.02 0.00 120 5% 0.22 0.08 0.05 0.04 0.03 0.02 0.02 0.02 0.02 0.02 0.02 0.01 1% 0.39 0.13 0.06 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.02 10% 0.15 0.07 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.02 0.00 
130 5% 0.21 0.08 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.02 0.01 1% 0.51 0.10 0.05 0.04 0.03 0.02 0.02 0.02 0.02 0.02 0.02 0.01 10% 0.13 0.06 0.04 0.03 0.02 0.02 0.02 0.02 0.02 0.01 0.01 0.00 140 5% 0.17 0.07 0.04 0.03 0.02 0.02 0.02 0.02 0.02 0.01 0.01 0.01 1% 0.36 0.10 0.05 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.01 0.01 10% 0.13 0.06 0.03 0.03 0.02 0.02 0.02 0.02 0.01 0.01 0.01 0.00 150 5% 0.17 0.06 0.04 0.03 0.02 0.02 0.02 0.02 0.01 0.01 0.01 0.01 1% 0.37 0.09 0.04 0.03 0.02 0.02 0.02 0.02 0.01 0.01 0.01 0.01


Appendix A.1.2: Cut-off points for the test based on the D statistic  n Level of percentile 0.1 0.2 0.3 0.4 0.5 0.6 0.7   0.7* 0.8 0.9 0.95 0.975 0.999 10% 0.87 0.85 0.85 0.83 0.82 0.76 0.72 0.81 (0.05) 0.62 0.44 0.24 0.12 0.00 5 5% 0.94 0.92 0.93 0.91 0.90 0.88 0.85 0.90 (0.03) 0.79 0.67 0.44 0.27 0.01 1% 0.98 0.98 0.98 0.98 0.98 0.97 0.97 0.98 (0.01) 0.95 0.92 0.85 0.74 0.04 10% 0.87 0.86 0.85 0.85 0.84 0.81 0.78 0.84 (0.03) 0.74 0.59 0.40 0.22 0.01 10 5% 0.93 0.92 0.92 0.91 0.91 0.90 0.88 0.91 (0.02) 0.86 0.78 0.63 0.42 0.02 1% 0.99 0.98 0.99 0.99 0.98 0.98 0.98 0.98 (0.00) 0.97 0.95 0.91 0.81 0.11 10% 0.87 0.87 0.86 0.86 0.85 0.84 0.82 0.85 (0.02) 0.78 0.68 0.53 0.33 0.01 15 5% 0.93 0.93 0.93 0.92 0.93 0.91 0.90 0.92 (0.01) 0.88 0.82 0.71 0.55 0.03 1% 0.99 0.99 0.99 0.98 0.98 0.99 0.98 0.99 (0.00) 0.97 0.96 0.93 0.89 0.15 10% 0.87 0.86 0.86 0.87 0.85 0.85 0.84 0.86 (0.01) 0.80 0.72 0.58 0.38 0.02 20 5% 0.93 0.93 0.93 0.93 0.92 0.92 0.92 0.93 (0.01) 0.89 0.84 0.76 0.63 0.04 1% 0.98 0.99 0.99 0.99 0.98 0.99 0.98 0.98 (0.00) 0.98 0.97 0.95 0.90 0.22 10% 0.86 0.87 0.86 0.85 0.86 0.86 0.84 0.86 (0.01) 0.83 0.74 0.62 0.45 0.02 25 5% 0.93 0.93 0.93 0.93 0.92 0.93 0.92 0.93 (0.00) 0.91 0.86 0.80 0.67 0.04 1% 0.99 0.98 0.98 0.98 0.98 0.99 0.99 0.98 (0.00) 0.98 0.97 0.95 0.92 0.21 10% 0.86 0.87 0.88 0.86 0.86 0.86 0.85 0.86 (0.01) 0.84 0.77 0.69 0.52 0.03 30 5% 0.92 0.93 0.94 0.93 0.93 0.93 0.92 0.93 (0.01) 0.92 0.88 0.83 0.72 0.05 1% 0.98 0.99 0.99 0.99 0.98 0.98 0.99 0.99 (0.00) 0.99 0.97 0.96 0.92 0.28 10% 0.87 0.87 0.86 0.86 0.86 0.85 0.86 0.86 (0.01) 0.84 0.78 0.70 0.57 0.03 35 5% 0.93 0.94 0.93 0.92 0.93 0.92 0.92 0.93 (0.01) 0.92 0.89 0.84 0.75 0.06 1% 0.99 0.99 0.99 0.98 0.98 0.98 0.98 0.98 (0.00) 0.99 0.98 0.97 0.93 0.34 10% 0.86 0.86 0.86 0.87 0.87 0.85 0.85 0.86 (0.01) 0.84 0.79 0.70 0.60 0.03 40 5% 0.93 0.93 0.93 0.94 0.93 0.92 0.92 0.93 (0.01) 0.92 0.89 0.84 0.77 0.07 1% 0.98 0.99 0.98 0.99 0.99 0.99 0.98 0.99 (0.00) 0.98 0.98 0.96 0.94 0.38 10% 0.87 0.87 0.86 0.87 0.86 0.87 0.86 0.86 (0.00) 0.85 0.81 0.73 0.64 0.04 45 5% 0.93 0.93 0.93 0.93 0.92 0.93 0.93 0.93 (0.00) 0.92 0.90 0.85 0.80 0.08 1% 0.99 0.98 0.98 0.99 0.98 0.99 0.99 0.99 (0.00) 0.98 0.98 0.97 0.96 0.48 10% 0.87 0.87 0.86 0.86 0.86 0.85 0.86 0.86 (0.01) 0.85 0.82 0.75 0.62 0.04 50 5% 0.94 0.93 0.93 0.93 0.92 0.93 0.93 0.93 (0.00) 0.92 0.90 0.88 0.80 0.09 1% 0.99 0.98 0.98 0.98 0.98 0.98 0.99 0.98 (0.00) 0.98 0.98 0.98 0.95 0.44 10% 0.87 0.86 0.86 0.86 0.87 0.86 0.85 0.86 (0.01) 0.85 0.83 0.78 0.65 0.05 60 5% 0.93 0.93 0.93 0.93 0.93 0.93 0.91 0.93 (0.01) 0.93 0.90 0.89 0.80 0.12 1% 0.99 0.99 0.98 0.99 0.99 0.98 0.99 0.99 (0.00) 0.99 0.98 0.98 0.96 0.51 10% 0.87 0.86 0.86 0.86 0.86 0.86 0.86 0.86 (0.00) 0.86 0.83 0.79 0.68 0.06 70 5% 0.93 0.92 0.93 0.93 0.93 0.93 0.93 0.93 (0.00) 0.93 0.91 0.89 0.81 0.15 1% 0.99 0.98 0.98 0.98 0.98 0.98 0.99 0.98 (0.00) 0.99 0.98 0.98 0.96 0.51 10% 0.86 0.87 0.87 0.86 0.86 0.86 0.86 0.86 (0.00) 0.86 0.84 0.80 0.71 0.07 80 5% 0.93 0.93 0.93 0.93 0.93 0.93 0.93 0.93 (0.00) 0.93 0.92 0.90 0.84 0.15 1% 0.98 0.99 0.99 0.98 0.99 0.98 0.98 0.98 (0.00) 0.99 0.98 0.98 0.96 0.62 10% 0.85 0.86 0.86 0.87 0.87 0.87 0.86 0.86 (0.01) 0.86 0.83 0.80 0.73 0.09 90 5% 0.93 0.93 0.93 0.93 0.93 0.93 0.92 0.93 (0.00) 0.93 0.91 0.89 0.87 0.19 1% 0.98 0.98 0.99 0.99 0.99 0.98 0.98 0.98 (0.00) 0.98 0.98 0.97 0.97 0.70 10% 0.86 0.87 0.87 0.86 0.86 0.85 0.86 0.86 (0.01) 0.86 0.84 0.82 0.76 0.09 100 5% 0.93 0.93 0.94 0.93 0.93 0.92 0.92 0.93 (0.01) 0.92 0.92 0.90 0.87 0.19 1% 0.98 0.99 0.99 0.99 0.99 0.98 0.99 
0.99 (0.00) 0.98 0.98 0.98 0.97 0.65 10% 0.86 0.85 0.85 0.87 0.85 0.86 0.86 0.86 (0.01) 0.87 0.84 0.81 0.75 0.10 110 5% 0.93 0.92 0.92 0.93 0.93 0.93 0.92 0.92 (0.01) 0.93 0.92 0.89 0.86 0.24 1% 0.98 0.98 0.98 0.99 0.99 0.98 0.98 0.98 (0.00) 0.99 0.99 0.98 0.97 0.69 10% 0.86 0.87 0.87 0.85 0.86 0.86 0.87 0.86 (0.01) 0.87 0.85 0.82 0.76 0.12 120 5% 0.93 0.93 0.93 0.93 0.93 0.93 0.93 0.93 (0.00) 0.93 0.92 0.91 0.87 0.25 1% 0.99 0.99 0.98 0.98 0.99 0.98 0.99 0.99 (0.00) 0.99 0.99 0.98 0.98 0.66 10% 0.86 0.87 0.86 0.86 0.86 0.86 0.87 0.86 (0.00) 0.87 0.84 0.83 0.77 0.12 130 5% 0.92 0.93 0.93 0.93 0.92 0.93 0.93 0.93 (0.00) 0.93 0.92 0.91 0.88 0.27 1% 0.98 0.99 0.98 0.98 0.99 0.99 0.99 0.99 (0.00) 0.99 0.98 0.98 0.98 0.71 10% 0.86 0.85 0.86 0.87 0.86 0.87 0.87 0.86 (0.01) 0.86 0.85 0.83 0.78 0.13 140 5% 0.92 0.92 0.93 0.94 0.93 0.94 0.93 0.93 (0.01) 0.93 0.92 0.92 0.88 0.27 1% 0.98 0.98 0.99 0.99 0.99 0.99 0.98 0.99 (0.00) 0.99 0.98 0.98 0.98 0.73 10% 0.86 0.85 0.86 0.86 0.86 0.86 0.87 0.86 (0.00) 0.86 0.85 0.83 0.79 0.14 150 5% 0.93 0.92 0.93 0.93 0.92 0.92 0.93 0.93 (0.01) 0.93 0.92 0.91 0.88 0.31 1% 0.98 0.98 0.98 0.99 0.99 0.98 0.99 0.98 (0.00) 0.99 0.98 0.98 0.97 0.70 *The mean of the cut-off points for   0.7 at specific sample size and percentile. The value between parenthesis is the standard deviation of the cut-off points for at specific sample size and percentile.


Appendix A.1.3: Cut-off points for the test based on the M statistic  n Level of percentile 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.975 0.999 10% 0.76 0.79 0.82 0.83 0.86 0.89 0.93 0.96 0.97 0.99 0.99 0.99 5 5% 0.83 0.86 0.87 0.87 0.92 0.94 0.96 0.97 0.99 0.99 1.00 1.00 1% 0.93 0.93 0.96 0.96 0.98 0.98 0.99 0.99 1.00 1.00 1.00 1.00 10% 0.36 0.38 0.41 0.48 0.53 0.63 0.70 0.81 0.91 0.96 0.98 0.99 10 5% 0.40 0.43 0.46 0.53 0.60 0.69 0.76 0.86 0.94 0.98 0.99 1.00 1% 0.49 0.53 0.54 0.66 0.74 0.77 0.86 0.93 0.97 0.99 1.00 1.00 10% 0.22 0.23 0.26 0.31 0.37 0.44 0.55 0.68 0.84 0.93 0.97 0.98 15 5% 0.23 0.26 0.29 0.35 0.41 0.50 0.61 0.74 0.89 0.96 0.98 1.00 1% 0.27 0.31 0.34 0.45 0.48 0.61 0.71 0.86 0.95 0.98 0.99 1.00 10% 0.15 0.17 0.19 0.23 0.27 0.34 0.43 0.57 0.77 0.89 0.95 0.98 20 5% 0.16 0.18 0.21 0.25 0.30 0.38 0.48 0.65 0.84 0.93 0.97 0.99 1% 0.19 0.23 0.25 0.31 0.35 0.46 0.56 0.76 0.91 0.97 0.99 1.00 10% 0.12 0.13 0.15 0.18 0.22 0.27 0.35 0.49 0.71 0.86 0.93 0.98 25 5% 0.12 0.14 0.16 0.19 0.23 0.30 0.39 0.55 0.78 0.91 0.96 1.00 1% 0.14 0.16 0.19 0.23 0.28 0.34 0.45 0.65 0.88 0.96 0.99 1.00 10% 0.10 0.11 0.12 0.15 0.18 0.22 0.29 0.41 0.64 0.83 0.92 0.98 30 5% 0.10 0.12 0.13 0.16 0.19 0.24 0.33 0.46 0.71 0.87 0.94 1.00 1% 0.12 0.13 0.16 0.18 0.23 0.29 0.39 0.55 0.80 0.94 0.98 1.00 10% 0.08 0.09 0.10 0.12 0.15 0.19 0.25 0.37 0.59 0.79 0.90 0.98 35 5% 0.09 0.10 0.11 0.13 0.16 0.21 0.28 0.42 0.66 0.84 0.94 0.99 1% 0.09 0.11 0.13 0.15 0.18 0.25 0.34 0.52 0.80 0.91 0.97 1.00 10% 0.07 0.08 0.09 0.11 0.13 0.16 0.22 0.32 0.54 0.75 0.88 0.98 40 5% 0.07 0.08 0.09 0.11 0.14 0.18 0.25 0.35 0.61 0.82 0.92 0.99 1% 0.08 0.09 0.11 0.13 0.16 0.21 0.30 0.43 0.73 0.88 0.96 1.00 10% 0.06 0.07 0.08 0.09 0.11 0.15 0.20 0.29 0.52 0.72 0.86 0.98 45 5% 0.06 0.07 0.08 0.10 0.12 0.16 0.22 0.32 0.59 0.78 0.91 0.99 1% 0.07 0.08 0.09 0.11 0.14 0.18 0.26 0.40 0.68 0.85 0.96 1.00 10% 0.05 0.06 0.07 0.08 0.10 0.13 0.18 0.26 0.46 0.71 0.86 0.97 50 5% 0.06 0.06 0.07 0.09 0.11 0.14 0.19 0.29 0.52 0.76 0.90 0.99 1% 0.06 0.07 0.08 0.10 0.12 0.16 0.24 0.36 0.64 0.86 0.95 1.00 10% 0.04 0.05 0.06 0.07 0.08 0.10 0.15 0.22 0.41 0.62 0.82 0.98 60 5% 0.05 0.05 0.06 0.07 0.09 0.11 0.16 0.24 0.45 0.68 0.87 0.99 1% 0.05 0.06 0.06 0.08 0.10 0.13 0.19 0.29 0.56 0.80 0.92 1.00 10% 0.04 0.04 0.05 0.06 0.07 0.09 0.12 0.19 0.36 0.59 0.79 0.98 70 5% 0.04 0.04 0.05 0.06 0.07 0.10 0.14 0.21 0.40 0.68 0.84 0.99 1% 0.04 0.05 0.05 0.07 0.08 0.11 0.16 0.24 0.52 0.80 0.92 1.00 10% 0.03 0.04 0.04 0.05 0.06 0.08 0.10 0.16 0.32 0.54 0.75 0.97 80 5% 0.03 0.04 0.04 0.05 0.06 0.08 0.11 0.18 0.36 0.61 0.83 0.99 1% 0.04 0.04 0.05 0.06 0.07 0.09 0.13 0.20 0.43 0.72 0.90 1.00 10% 0.03 0.03 0.04 0.04 0.05 0.07 0.09 0.14 0.28 0.50 0.72 0.96 90 5% 0.03 0.03 0.04 0.05 0.06 0.07 0.10 0.16 0.31 0.57 0.80 0.99 1% 0.03 0.03 0.04 0.05 0.06 0.08 0.12 0.19 0.37 0.65 0.88 1.00 10% 0.03 0.03 0.03 0.04 0.05 0.06 0.08 0.13 0.26 0.46 0.68 0.97 100 5% 0.03 0.03 0.03 0.04 0.05 0.06 0.09 0.14 0.28 0.51 0.76 0.99 1% 0.03 0.03 0.04 0.04 0.05 0.07 0.10 0.17 0.33 0.63 0.86 1.00 10% 0.02 0.03 0.03 0.03 0.04 0.05 0.08 0.12 0.24 0.42 0.67 0.97 110 5% 0.02 0.03 0.03 0.04 0.04 0.06 0.08 0.12 0.26 0.47 0.74 0.99 1% 0.02 0.03 0.03 0.04 0.05 0.06 0.09 0.15 0.31 0.58 0.83 1.00 10% 0.02 0.02 0.03 0.03 0.04 0.05 0.07 0.11 0.22 0.40 0.64 0.97 120 5% 0.02 0.02 0.03 0.03 0.04 0.05 0.07 0.11 0.24 0.45 0.71 0.99 1% 0.02 0.03 0.03 0.04 0.04 0.05 0.08 0.14 0.29 0.53 0.78 1.00 10% 0.02 0.02 0.02 0.03 0.04 0.05 0.06 0.10 0.20 0.38 0.61 0.98 130 5% 0.02 0.02 0.03 0.03 0.04 0.05 0.07 0.10 0.22 0.42 0.65 
0.99 1% 0.02 0.02 0.03 0.03 0.04 0.05 0.07 0.12 0.27 0.49 0.75 1.00 10% 0.02 0.02 0.02 0.03 0.03 0.04 0.06 0.09 0.19 0.36 0.58 0.97 140 5% 0.02 0.02 0.02 0.03 0.03 0.04 0.06 0.10 0.20 0.40 0.64 0.99 1% 0.02 0.02 0.02 0.03 0.04 0.05 0.07 0.11 0.24 0.47 0.74 1.00 10% 0.02 0.02 0.02 0.03 0.03 0.04 0.05 0.08 0.17 0.33 0.56 0.97 150 5% 0.02 0.02 0.02 0.03 0.03 0.04 0.06 0.09 0.19 0.36 0.63 0.99 1% 0.02 0.02 0.02 0.03 0.03 0.04 0.06 0.10 0.22 0.43 0.71 1.00


Appendix A.1.4: Cut-off points for the test based on the A statistic  n Level of percentile 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.975 0.999 10% 0.86 0.87 0.86 0.86 0.87 0.88 0.89 0.85 0.72 0.40 0.19 0.00 5 5% 0.90 0.90 0.91 0.90 0.92 0.93 0.94 0.91 0.89 0.71 0.47 0.00 1% 0.94 0.96 0.96 0.96 0.97 0.97 0.98 0.97 0.99 0.98 0.95 0.03 10% 0.78 0.79 0.80 0.83 0.85 0.87 0.89 0.88 0.87 0.74 0.42 0.00 10 5% 0.81 0.82 0.84 0.86 0.89 0.90 0.92 0.93 0.95 0.87 0.67 0.01 1% 0.87 0.87 0.88 0.91 0.93 0.95 0.97 0.97 0.98 0.99 0.97 0.18 10% 0.73 0.75 0.78 0.82 0.83 0.87 0.90 0.91 0.91 0.86 0.57 0.00 15 5% 0.75 0.77 0.80 0.84 0.87 0.89 0.91 0.94 0.95 0.94 0.85 0.01 1% 0.79 0.82 0.84 0.89 0.90 0.93 0.95 0.97 0.98 0.98 0.98 0.31 10% 0.70 0.73 0.76 0.80 0.83 0.86 0.89 0.92 0.92 0.88 0.70 0.00 20 5% 0.73 0.75 0.79 0.82 0.85 0.88 0.91 0.94 0.96 0.94 0.91 0.02 1% 0.76 0.80 0.82 0.85 0.89 0.91 0.94 0.97 0.98 0.99 0.99 0.43 10% 0.68 0.72 0.75 0.80 0.83 0.86 0.89 0.92 0.94 0.92 0.79 0.01 25 5% 0.70 0.74 0.78 0.81 0.85 0.88 0.91 0.94 0.96 0.96 0.93 0.02 1% 0.74 0.77 0.81 0.84 0.89 0.90 0.93 0.96 0.98 0.99 0.99 0.50 10% 0.67 0.71 0.75 0.79 0.83 0.86 0.89 0.92 0.94 0.93 0.82 0.01 30 5% 0.69 0.73 0.77 0.80 0.84 0.87 0.91 0.93 0.96 0.97 0.95 0.05 1% 0.74 0.76 0.80 0.83 0.86 0.90 0.93 0.96 0.98 0.99 0.99 0.44 10% 0.66 0.71 0.74 0.79 0.82 0.86 0.89 0.92 0.94 0.93 0.89 0.01 35 5% 0.69 0.72 0.76 0.80 0.84 0.87 0.90 0.94 0.96 0.96 0.96 0.05 1% 0.72 0.76 0.79 0.83 0.86 0.90 0.93 0.96 0.98 0.99 0.99 0.74 10% 0.66 0.70 0.73 0.78 0.82 0.85 0.89 0.92 0.95 0.94 0.88 0.01 40 5% 0.67 0.71 0.75 0.79 0.83 0.87 0.90 0.93 0.96 0.97 0.96 0.05 1% 0.71 0.75 0.79 0.82 0.85 0.88 0.92 0.95 0.98 0.99 0.99 0.77 10% 0.65 0.69 0.73 0.77 0.81 0.85 0.89 0.92 0.95 0.96 0.92 0.02 45 5% 0.66 0.70 0.75 0.79 0.83 0.86 0.91 0.94 0.96 0.97 0.97 0.09 1% 0.70 0.75 0.77 0.81 0.85 0.89 0.92 0.95 0.98 0.99 0.99 0.69 10% 0.64 0.68 0.73 0.77 0.81 0.85 0.89 0.92 0.95 0.96 0.91 0.02 50 5% 0.66 0.70 0.74 0.79 0.82 0.86 0.90 0.93 0.96 0.97 0.97 0.07 1% 0.70 0.73 0.77 0.81 0.84 0.89 0.92 0.95 0.98 0.99 0.99 0.49 10% 0.63 0.68 0.72 0.76 0.81 0.85 0.89 0.92 0.96 0.96 0.94 0.04 60 5% 0.65 0.69 0.73 0.78 0.82 0.86 0.90 0.93 0.96 0.97 0.98 0.18 1% 0.68 0.72 0.75 0.80 0.85 0.88 0.92 0.95 0.98 0.99 0.99 0.82 10% 0.63 0.67 0.71 0.76 0.80 0.84 0.88 0.92 0.96 0.97 0.95 0.04 70 5% 0.64 0.68 0.72 0.77 0.81 0.85 0.90 0.93 0.96 0.98 0.98 0.14 1% 0.67 0.70 0.75 0.79 0.83 0.87 0.91 0.94 0.97 0.99 0.99 0.69 10% 0.62 0.67 0.71 0.76 0.80 0.84 0.88 0.92 0.96 0.97 0.97 0.05 80 5% 0.63 0.67 0.72 0.77 0.81 0.86 0.89 0.93 0.97 0.98 0.98 0.18 1% 0.66 0.70 0.74 0.79 0.83 0.88 0.91 0.94 0.98 0.99 0.99 0.78 10% 0.61 0.66 0.70 0.75 0.80 0.84 0.88 0.92 0.96 0.97 0.97 0.06 90 5% 0.63 0.67 0.72 0.76 0.81 0.85 0.89 0.93 0.96 0.98 0.99 0.18 1% 0.65 0.69 0.73 0.78 0.83 0.87 0.91 0.94 0.97 0.99 0.99 0.77 10% 0.61 0.65 0.70 0.75 0.79 0.84 0.88 0.92 0.96 0.97 0.97 0.09 100 5% 0.62 0.66 0.71 0.76 0.80 0.85 0.89 0.93 0.96 0.98 0.98 0.24 1% 0.64 0.69 0.73 0.78 0.82 0.87 0.91 0.94 0.97 0.99 0.99 0.92 10% 0.61 0.65 0.70 0.75 0.79 0.84 0.88 0.92 0.96 0.97 0.97 0.10 110 5% 0.62 0.66 0.71 0.76 0.80 0.84 0.89 0.93 0.96 0.98 0.98 0.32 1% 0.64 0.68 0.73 0.78 0.82 0.86 0.90 0.94 0.97 0.99 0.99 0.90 10% 0.60 0.65 0.70 0.75 0.79 0.83 0.88 0.92 0.96 0.97 0.97 0.10 120 5% 0.62 0.66 0.71 0.76 0.80 0.84 0.89 0.93 0.96 0.98 0.98 0.33 1% 0.63 0.68 0.73 0.77 0.82 0.85 0.90 0.94 0.97 0.99 0.99 0.97 10% 0.60 0.65 0.70 0.74 0.79 0.84 0.88 0.92 0.96 0.97 0.98 0.15 130 5% 0.61 0.66 0.71 0.75 0.80 0.84 0.89 0.93 0.96 0.98 0.99 
0.34 1% 0.63 0.68 0.72 0.77 0.82 0.86 0.90 0.94 0.97 0.99 0.99 0.87 10% 0.60 0.65 0.70 0.74 0.79 0.83 0.88 0.92 0.96 0.97 0.98 0.16 140 5% 0.61 0.66 0.71 0.75 0.80 0.84 0.88 0.92 0.96 0.98 0.99 0.44 1% 0.62 0.67 0.72 0.77 0.82 0.85 0.90 0.94 0.97 0.99 0.99 0.94 10% 0.60 0.64 0.69 0.74 0.78 0.83 0.88 0.92 0.96 0.97 0.98 0.18 150 5% 0.61 0.65 0.70 0.75 0.80 0.84 0.88 0.92 0.96 0.98 0.99 0.48 1% 0.62 0.67 0.72 0.77 0.81 0.85 0.89 0.93 0.97 0.99 0.99 0.96


APPENDIX A.2 R subroutine tests of discordancy in circular data

library(circular)

#------C statistic------#
ctest <- function(theta){
  n <- length(theta)
  Rbar <- rho.circular(theta)
  Rbari = c()
  Ci = c()
  for (i in 1:n){
    thetai = theta[-c(i)]
    Rbari[i] <- rho.circular(thetai)
    Ci[i] = (Rbari[i] - Rbar)/Rbar
  }
  C = max(Ci)
  Position = match(C, Ci)
  Result <- c(C, Position)
  return(Result)
}

#------D statistic------#
dtest <- function(theta){
  thetaa = sort(theta)
  n <- length(thetaa)
  Ti = matrix(0, nrow = n-1, ncol = n)
  Di = matrix(0, nrow = n-1, ncol = n)
  for(j in 1:n){
    thetaj = thetaa[-c(j)]
    k = length(thetaj)
    for(i in 1:(k-1)){
      Ti[i,j] <- (thetaj[i+1] - thetaj[i])
    }
    Ti[k,j] <- ((2*pi) - thetaj[k] + thetaj[1])
  }
  D = max(Ti)
  PositionofD = (which(Ti == max(Ti), arr.ind = TRUE)[2])
  ti = c()
  di = c()
  for(i in 1:(n-1)){
    ti[i] <- (thetaa[i+1] - thetaa[i])
  }
  ti[n] <- ((2*pi) - thetaa[n] + thetaa[1])
  for(i in 2:n){
    di[i] <- (ti[i]/ti[i-1])
  }
  di[1] <- (ti[1]/ti[n])
  d = min(di[PositionofD], 1/di[PositionofD])
  PositionofOutlier = match(thetaa[PositionofD], theta)
  Result <- c(d, PositionofOutlier)
  return(Result)
}

#------M statistic------#
mtest <- function(theta){
  n <- length(theta)
  Ri = c()
  k = c()
  R <- rho.circular(theta)*n
  for (i in 1:n){
    thetai = theta[-c(i)]
    Ri[i] <- rho.circular(thetai)*(n-1)
    k[i] <- ((Ri[i] - R + 1)/(n - R))
  }
  m = max(k)
  Position = match(m, k)
  Result <- c(m, Position)
  return(Result)
}

#------A statistic------#
atest <- function(theta){
  n <- length(theta)
  k = c()
  Dj = c()
  w = c()
  for (j in 1:n){
    for (i in 1:n){
      k[i] = 1 - cos(theta[i] - theta[j])
    }
    Dj[j] = sum(k)
  }
  w = Dj/(2*(n-1))
  A = max(w)
  Position = match(A, w)
  Result <- c(A, Position)
  return(Result)
}


APPENDIX A.3 R subroutine for obtaining the Cut-off points for the tests of discordancy

library(circular)

simulation <- function(n, rho, R){
  ci = c(); di = c(); mi = c(); ai = c()
  theta = matrix(0, nrow = R, ncol = n)
  Result = matrix(0, nrow = R, ncol = 4)
  Resultsort = matrix(0, nrow = R, ncol = 4)
  output = matrix(0, nrow = 3, ncol = 4)
  for (i in 1:R){
    theta[i,] = rwrappedcauchy(n = n, mu = circular(0), rho = rho,
                               control.circular = list(units = "radians"))
    # ctest() and the other tests return c(statistic, position); keep the statistic
    ci[i] = ctest(theta[i,])[1]
    di[i] = dtest(theta[i,])[1]
    mi[i] = mtest(theta[i,])[1]
    ai[i] = atest(theta[i,])[1]
    Result[i,] = c(ci[i], di[i], mi[i], ai[i])
  }
  Resultsort[,1] = sort(Result[,1])
  Resultsort[,2] = sort(Result[,2])
  Resultsort[,3] = sort(Result[,3])
  Resultsort[,4] = sort(Result[,4])
  # rows of output: cut-off points at the 10%, 5% and 1% levels of percentiles
  output[1,] = c(Resultsort[(R*90/100),1], Resultsort[(R*90/100),2],
                 Resultsort[(R*90/100),3], Resultsort[(R*90/100),4])
  output[2,] = c(Resultsort[(R*95/100),1], Resultsort[(R*95/100),2],
                 Resultsort[(R*95/100),3], Resultsort[(R*95/100),4])
  output[3,] = c(Resultsort[(R*99/100),1], Resultsort[(R*99/100),2],
                 Resultsort[(R*99/100),3], Resultsort[(R*99/100),4])
  return(output)
}


APPENDIX A.4 R subroutine for power of performance in circular data library(circular) power=function(n,rho,cutC,cutD,cutM,cutA,simu,d,lamda){ P1simuC=matrix(0,nrow=simu,ncol=2) P1simuD=matrix(0,nrow=simu,ncol=2) P1simuM=matrix(0,nrow=simu,ncol=2) P1simuA=matrix(0,nrow=simu,ncol=2) extrempointAll<-c() obex<-c()

P1yesC=0 P1yesD=0 P1yesM=0 P1yesA=0

P3yesC=0 P3yesD=0 P3yesM=0 P3yesA=0

P5yesC=0 P5yesD=0 P5yesM=0 P5yesA=0

P51yesC=0 P51yesD=0 P51yesM=0 P51yesA=0

x=matrix(0,nrow=n, ncol=simu)

#------P1------# for (i in 1:simu){ x[,i]=rwrappedcauchy(n=n, mu=circular(0), rho=rho ,control.circular=list(units="radians")) x[d,i]=rwrappedcauchy(n=1,mu=circular(lamda*pi),rho=rho,control.circular=list(units ="radians")) P1simuC[i,]=ctest(x[,i]) if(P1simuC[i,1]

#------P3------# Meandirection=mean.circular(x[,i]) extrem= pi-abs(pi-abs(x[,i]-Meandirection)) extrempointAll[i]=max(extrem) for ( j in 1:n){ if(extrem[j]==extrempointAll[i]){obex[i]=j} } if (P1simuC[i,1]>=cutC && obex[i]==d &&P1simuC[i,2]==d) {P3yesC=P3yesC+1} if (P1simuD[i,1]>=cutD && obex[i]==d &&P1simuD[i,2]==d) {P3yesD=P3yesD+1}


if (P1simuM[i,1]>=cutM && obex[i]==d &&P1simuM[i,2]==d) {P3yesM=P3yesM+1} if (P1simuA[i,1]>=cutA && obex[i]==d &&P1simuA[i,2]==d) {P3yesA=P3yesA+1}

#------P5------# if(obex[i]==d){P5yesC<-P5yesC+1} if(P1simuC[i,1]>=cutC && obex[i]==d && P1simuC[i,2]==d) {P51yesC<-P51yesC+1} if(obex[i]==d){P5yesD<-P5yesD+1} if(P1simuD[i,1]>=cutD && obex[i]==d && P1simuD[i,2]==d) {P51yesD<-P51yesD+1} if(obex[i]==d){P5yesM<-P5yesM+1} if(P1simuM[i,1]>=cutM && obex[i]==d && P1simuM[i,2]==d) {P51yesM<-P51yesM+1} if(obex[i]==d){P5yesA<-P5yesA+1} if(P1simuA[i,1]>=cutA && obex[i]==d && P1simuA[i,2]==d) {P51yesA<-P51yesA+1} } P1yesc=1-(P1yesC/simu) P1yesd=1-(P1yesD/simu) P1yesm=1-(P1yesM/simu) P1yesa=1-(P1yesA/simu)

P3yesc=(P3yesC/simu) P3yesd=(P3yesD/simu) P3yesm=(P3yesM/simu) P3yesa=(P3yesA/simu)

P5yesc=(P51yesC/P5yesC) P5yesd=(P51yesD/P5yesD) P5yesm=(P51yesM/P5yesM) P5yesa=(P51yesA/P5yesA)

Y=matrix(0,nrow=3,ncol=4) Y[1,]=c(P1yesc,P1yesd,P1yesm,P1yesa) Y[2,]=c(P3yesc,P3yesd,P3yesm,P3yesa) Y[3,]=c(P5yesc,P5yesd,P5yesm,P5yesa) return(Y)}


APPENDIX A.5 Power of Performance of Discordancy Tests Appendix A.5.1: Power of Performance for n = 10 P1 P3 P5 P1-P3

 C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.09 0.11 0.11 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.10 0.07 0.07 0.05 0.00 0.01 0.00 0.00 0.2 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.06 0.09 0.10 0.08 0.00 0.00 0.00 0.00 0.3 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.12 0.08 0.09 0.10 0.00 0.00 0.00 0.00 0.4 0.02 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.09 0.07 0.09 0.08 0.00 0.00 0.00 0.00   0.2 0.5 0.01 0.02 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.07 0.09 0.10 0.00 0.01 0.00 0.00 0.6 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.14 0.09 0.09 0.09 0.00 0.00 0.00 0.00 0.7 0.01 0.01 0.02 0.02 0.01 0.01 0.02 0.02 0.14 0.09 0.12 0.12 0.00 0.00 0.00 0.00 0.8 0.01 0.01 0.01 0.02 0.01 0.01 0.01 0.02 0.12 0.07 0.09 0.13 0.00 0.00 0.00 0.00 0.9 0.02 0.01 0.02 0.02 0.02 0.01 0.02 0.02 0.14 0.08 0.10 0.12 0.00 0.00 0.00 0.00 1 0.01 0.01 0.02 0.02 0.01 0.01 0.02 0.02 0.14 0.07 0.11 0.13 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.12 0.08 0.11 0.12 0.00 0.00 0.00 0.00 0.1 0.02 0.01 0.01 0.01 0.02 0.01 0.01 0.01 0.13 0.06 0.10 0.11 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.09 0.08 0.09 0.10 0.00 0.00 0.00 0.00 0.3 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.10 0.08 0.08 0.12 0.00 0.00 0.00 0.00 0.4 0.02 0.01 0.02 0.02 0.02 0.01 0.02 0.02 0.10 0.06 0.09 0.10 0.00 0.00 0.00 0.00   0.4 0.5 0.02 0.02 0.01 0.02 0.02 0.01 0.01 0.02 0.11 0.07 0.08 0.10 0.00 0.01 0.00 0.00 0.6 0.02 0.02 0.02 0.02 0.02 0.01 0.02 0.02 0.09 0.07 0.11 0.11 0.00 0.01 0.00 0.00 0.7 0.02 0.03 0.02 0.03 0.02 0.02 0.02 0.03 0.10 0.09 0.09 0.11 0.00 0.01 0.00 0.00 0.8 0.03 0.03 0.02 0.03 0.03 0.02 0.02 0.03 0.10 0.08 0.09 0.12 0.00 0.00 0.00 0.00 0.9 0.03 0.03 0.04 0.05 0.03 0.03 0.04 0.05 0.09 0.09 0.11 0.15 0.00 0.00 0.00 0.00 1 0.03 0.03 0.04 0.05 0.03 0.03 0.04 0.05 0.09 0.08 0.11 0.15 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.12 0.06 0.07 0.09 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.09 0.07 0.10 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.09 0.07 0.07 0.10 0.00 0.00 0.00 0.00 0.3 0.02 0.01 0.01 0.01 0.02 0.01 0.01 0.01 0.09 0.08 0.08 0.07 0.00 0.00 0.00 0.00 0.4 0.02 0.02 0.01 0.02 0.02 0.02 0.01 0.02 0.10 0.07 0.06 0.09 0.00 0.00 0.00 0.00   0.6 0.5 0.03 0.03 0.02 0.02 0.03 0.02 0.02 0.02 0.10 0.08 0.09 0.08 0.00 0.01 0.00 0.00 0.6 0.04 0.04 0.04 0.03 0.04 0.03 0.04 0.03 0.11 0.09 0.11 0.09 0.00 0.01 0.00 0.00 0.7 0.05 0.05 0.04 0.05 0.05 0.04 0.04 0.05 0.12 0.10 0.10 0.11 0.00 0.01 0.00 0.00 0.8 0.08 0.06 0.04 0.08 0.08 0.06 0.04 0.08 0.16 0.12 0.09 0.16 0.00 0.01 0.00 0.00 0.9 0.11 0.09 0.05 0.11 0.11 0.09 0.05 0.11 0.19 0.16 0.10 0.20 0.00 0.01 0.00 0.00 1 0.11 0.09 0.05 0.12 0.11 0.09 0.05 0.12 0.19 0.16 0.10 0.21 0.00 0.00 0.00 0.00


Appendix A.5.1 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.12 0.12 0.13 0.13 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.09 0.11 0.12 0.00 0.00 0.00 0.00 0.2 0.02 0.01 0.02 0.01 0.02 0.01 0.02 0.01 0.11 0.08 0.12 0.10 0.00 0.00 0.00 0.00 0.3 0.02 0.02 0.02 0.02 0.02 0.01 0.02 0.02 0.10 0.07 0.09 0.08 0.00 0.00 0.00 0.00 0.4 0.01 0.02 0.03 0.02 0.01 0.02 0.03 0.02 0.05 0.05 0.11 0.07 0.00 0.00 0.00 0.00   0.7 0.5 0.03 0.04 0.05 0.03 0.03 0.03 0.05 0.03 0.07 0.08 0.12 0.07 0.00 0.01 0.00 0.00 0.6 0.04 0.05 0.06 0.05 0.04 0.05 0.06 0.05 0.10 0.10 0.14 0.10 0.00 0.01 0.00 0.00 0.7 0.07 0.08 0.07 0.07 0.07 0.07 0.07 0.07 0.13 0.13 0.14 0.14 0.00 0.01 0.00 0.00 0.8 0.13 0.11 0.08 0.13 0.13 0.11 0.08 0.13 0.22 0.17 0.13 0.21 0.00 0.01 0.00 0.00 0.9 0.19 0.15 0.10 0.19 0.19 0.15 0.10 0.19 0.28 0.22 0.14 0.28 0.00 0.00 0.00 0.00 1 0.21 0.17 0.12 0.23 0.21 0.17 0.12 0.23 0.30 0.24 0.16 0.33 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.08 0.10 0.10 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.05 0.10 0.08 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.06 0.06 0.08 0.07 0.00 0.00 0.00 0.00 0.3 0.02 0.02 0.02 0.02 0.02 0.01 0.02 0.02 0.06 0.05 0.07 0.06 0.00 0.00 0.00 0.00 0.4 0.02 0.02 0.04 0.02 0.02 0.02 0.04 0.02 0.04 0.04 0.09 0.04 0.00 0.00 0.00 0.00   0.8 0.5 0.03 0.03 0.06 0.03 0.03 0.03 0.06 0.03 0.05 0.05 0.12 0.05 0.00 0.01 0.00 0.00 0.6 0.04 0.05 0.08 0.04 0.04 0.04 0.08 0.04 0.06 0.07 0.14 0.06 0.00 0.01 0.00 0.00 0.7 0.09 0.11 0.10 0.09 0.09 0.10 0.10 0.09 0.14 0.15 0.15 0.13 0.00 0.01 0.00 0.00 0.8 0.20 0.20 0.13 0.24 0.20 0.19 0.13 0.24 0.26 0.26 0.17 0.32 0.00 0.00 0.00 0.00 0.9 0.37 0.29 0.14 0.41 0.37 0.29 0.14 0.41 0.46 0.35 0.17 0.50 0.00 0.00 0.00 0.00 1 0.47 0.32 0.14 0.46 0.47 0.32 0.14 0.46 0.56 0.38 0.16 0.55 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.07 0.08 0.11 0.07 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.05 0.05 0.06 0.05 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.02 0.01 0.01 0.01 0.02 0.01 0.03 0.02 0.06 0.03 0.00 0.00 0.00 0.00 0.3 0.02 0.02 0.05 0.02 0.02 0.02 0.05 0.02 0.03 0.03 0.09 0.03 0.00 0.00 0.00 0.00 0.4 0.02 0.02 0.10 0.02 0.02 0.02 0.10 0.02 0.02 0.03 0.15 0.03 0.00 0.00 0.00 0.00   0.9 0.5 0.02 0.03 0.15 0.02 0.02 0.03 0.15 0.02 0.03 0.04 0.21 0.03 0.00 0.00 0.00 0.00 0.6 0.04 0.07 0.21 0.05 0.04 0.06 0.21 0.05 0.06 0.08 0.27 0.06 0.00 0.01 0.00 0.00 0.7 0.10 0.17 0.25 0.09 0.10 0.16 0.25 0.09 0.12 0.19 0.29 0.11 0.00 0.01 0.00 0.00 0.8 0.35 0.41 0.27 0.46 0.35 0.41 0.27 0.46 0.39 0.47 0.30 0.53 0.00 0.00 0.00 0.00 0.9 0.78 0.58 0.29 0.73 0.78 0.58 0.29 0.73 0.85 0.63 0.31 0.80 0.00 0.00 0.00 0.00 1 0.86 0.64 0.30 0.82 0.86 0.64 0.30 0.82 0.91 0.67 0.31 0.86 0.00 0.00 0.00 0.00


Appendix A.5.1 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.11 0.13 0.11 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.03 0.02 0.03 0.02 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.05 0.01 0.01 0.01 0.05 0.01 0.02 0.02 0.08 0.02 0.00 0.00 0.00 0.00 0.3 0.01 0.02 0.13 0.01 0.01 0.02 0.13 0.01 0.02 0.02 0.18 0.02 0.00 0.00 0.00 0.00 0.4 0.01 0.02 0.20 0.01 0.01 0.02 0.20 0.01 0.02 0.02 0.25 0.02 0.00 0.00 0.00 0.00   0.95 0.5 0.03 0.05 0.30 0.03 0.03 0.05 0.30 0.03 0.04 0.06 0.36 0.04 0.00 0.00 0.00 0.00 0.6 0.10 0.26 0.37 0.09 0.10 0.25 0.37 0.09 0.12 0.28 0.41 0.10 0.00 0.01 0.00 0.00 0.7 0.76 0.67 0.38 0.73 0.76 0.67 0.38 0.73 0.82 0.73 0.42 0.79 0.00 0.00 0.00 0.00 0.8 0.90 0.81 0.42 0.90 0.90 0.81 0.42 0.90 0.96 0.86 0.45 0.96 0.00 0.00 0.00 0.00 0.9 0.95 0.86 0.46 0.95 0.95 0.86 0.46 0.95 0.98 0.89 0.47 0.98 0.00 0.00 0.00 0.00 1 0.96 0.87 0.44 0.96 0.96 0.87 0.44 0.96 0.98 0.88 0.45 0.98 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.10 0.10 0.11 0.09 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.03 0.01 0.01 0.01 0.03 0.01 0.02 0.02 0.04 0.02 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.14 0.01 0.01 0.01 0.14 0.01 0.01 0.01 0.17 0.01 0.00 0.00 0.00 0.00 0.3 0.02 0.02 0.29 0.02 0.02 0.02 0.29 0.02 0.02 0.03 0.34 0.02 0.00 0.00 0.00 0.00 0.4 0.09 0.38 0.40 0.08 0.09 0.38 0.40 0.08 0.10 0.42 0.44 0.09 0.00 0.00 0.00 0.00   0.975 0.5 0.90 0.81 0.49 0.89 0.90 0.81 0.49 0.89 0.97 0.87 0.53 0.96 0.00 0.00 0.00 0.00 0.6 0.93 0.89 0.55 0.93 0.93 0.89 0.55 0.93 0.99 0.94 0.58 0.99 0.00 0.00 0.00 0.00 0.7 0.95 0.92 0.58 0.95 0.95 0.92 0.58 0.95 0.99 0.96 0.61 0.99 0.00 0.00 0.00 0.00 0.8 0.97 0.95 0.61 0.97 0.97 0.95 0.61 0.97 0.99 0.97 0.62 0.99 0.00 0.00 0.00 0.00 0.9 0.98 0.96 0.63 0.98 0.98 0.96 0.63 0.98 0.99 0.97 0.64 0.99 0.00 0.00 0.00 0.00 1 0.99 0.97 0.63 0.99 0.99 0.97 0.63 0.99 1.00 0.97 0.64 1.00 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.10 0.10 0.01 0.10 0.10 0.09 0.01 0.10 1.00 0.91 0.11 1.00 0.00 0.00 0.00 0.00 0.1 0.98 0.98 0.89 0.98 0.98 0.98 0.89 0.98 1.00 1.00 0.90 1.00 0.00 0.00 0.00 0.00 0.2 0.99 0.99 0.94 0.99 0.99 0.99 0.94 0.99 1.00 1.00 0.95 1.00 0.00 0.00 0.00 0.00 0.3 0.99 0.99 0.95 0.99 0.99 0.99 0.95 0.99 1.00 1.00 0.96 1.00 0.00 0.00 0.00 0.00 0.4 1.00 1.00 0.97 1.00 1.00 1.00 0.97 1.00 1.00 1.00 0.97 1.00 0.00 0.00 0.00 0.00   0.999 0.5 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 0.00 0.00 0.00 0.00 0.6 0.99 0.99 0.98 0.99 0.99 0.99 0.98 0.99 1.00 1.00 0.98 1.00 0.00 0.00 0.00 0.00 0.7 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 0.00 0.00 0.00 0.00 0.8 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 0.00 0.00 0.00 0.00 0.9 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 0.00 0.00 0.00 0.00 1 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.98 1.00 0.00 0.00 0.00 0.00


Appendix A.5.2: Power of Performance for n = 30
(Columns are grouped under the performance measures P1, P3, P5 and P1-P3; within each group the four columns correspond to the C, D, M and A statistics. Each block of rows belongs to one value of the concentration parameter, and the rows within a block give the level of contamination from 0 to 1.)

 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.13 0.02 0.12 0.09 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.08 0.04 0.23 0.18 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.09 0.04 0.21 0.17 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.11 0.03 0.14 0.10 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.11 0.03 0.14 0.12 0.00 0.00 0.00 0.00   0.2 0.5 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.01 0.11 0.01 0.25 0.18 0.00 0.00 0.00 0.00 0.6 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.03 0.19 0.15 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.06 0.02 0.21 0.15 0.00 0.00 0.00 0.00 0.8 0.00 0.01 0.01 0.01 0.00 0.00 0.01 0.01 0.07 0.03 0.16 0.11 0.00 0.00 0.00 0.00 0.9 0.01 0.01 0.02 0.01 0.01 0.00 0.02 0.01 0.10 0.02 0.25 0.20 0.00 0.00 0.00 0.00 1 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.01 0.19 0.14 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.04 0.07 0.16 0.13 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.02 0.13 0.09 0.00 0.00 0.00 0.00 0.2 0.00 0.01 0.01 0.01 0.00 0.00 0.01 0.01 0.07 0.03 0.15 0.14 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.05 0.11 0.09 0.00 0.00 0.00 0.00 0.4 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.05 0.11 0.10 0.00 0.00 0.00 0.00   0.4 0.5 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.01 0.12 0.02 0.13 0.13 0.00 0.00 0.00 0.00 0.6 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.04 0.13 0.15 0.00 0.00 0.00 0.00 0.7 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.04 0.13 0.14 0.00 0.01 0.00 0.00 0.8 0.01 0.01 0.02 0.02 0.01 0.01 0.02 0.02 0.09 0.04 0.14 0.13 0.00 0.00 0.00 0.00 0.9 0.01 0.01 0.02 0.02 0.01 0.01 0.02 0.02 0.09 0.04 0.13 0.13 0.00 0.01 0.00 0.00 1 0.01 0.01 0.02 0.02 0.01 0.00 0.02 0.02 0.10 0.02 0.12 0.11 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.13 0.05 0.16 0.16 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.25 0.04 0.12 0.16 0.00 0.00 0.00 0.00 0.2 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.18 0.03 0.14 0.13 0.00 0.00 0.00 0.00 0.3 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.13 0.06 0.15 0.17 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.03 0.17 0.15 0.00 0.00 0.00 0.00   0.6 0.5 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.13 0.04 0.10 0.10 0.00 0.01 0.00 0.00 0.6 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.09 0.05 0.13 0.12 0.00 0.01 0.00 0.00 0.7 0.02 0.01 0.03 0.02 0.02 0.01 0.03 0.02 0.11 0.05 0.17 0.14 0.00 0.01 0.00 0.00 0.8 0.04 0.02 0.04 0.03 0.04 0.01 0.04 0.03 0.17 0.05 0.15 0.14 0.00 0.01 0.00 0.00 0.9 0.05 0.03 0.05 0.06 0.05 0.02 0.05 0.06 0.18 0.06 0.17 0.19 0.00 0.01 0.00 0.00 1 0.07 0.03 0.04 0.06 0.07 0.02 0.04 0.06 0.21 0.06 0.13 0.17 0.00 0.01 0.00 0.00


Appendix A.5.2 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.06 0.03 0.18 0.04 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.12 0.05 0.10 0.19 0.00 0.00 0.00 0.00 0.2 0.00 0.01 0.01 0.01 0.00 0.00 0.01 0.01 0.09 0.07 0.12 0.13 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.17 0.03 0.08 0.07 0.00 0.00 0.00 0.00 0.4 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.13 0.06 0.13 0.08 0.00 0.00 0.00 0.00   0.7 0.5 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.09 0.06 0.11 0.09 0.00 0.01 0.00 0.00 0.6 0.02 0.02 0.02 0.01 0.02 0.01 0.02 0.01 0.11 0.05 0.13 0.07 0.00 0.01 0.00 0.00 0.7 0.02 0.02 0.02 0.02 0.02 0.01 0.02 0.02 0.10 0.04 0.09 0.07 0.00 0.01 0.00 0.00 0.8 0.04 0.03 0.04 0.03 0.04 0.02 0.04 0.03 0.13 0.06 0.11 0.10 0.00 0.01 0.00 0.00 0.9 0.07 0.04 0.05 0.07 0.07 0.04 0.05 0.07 0.17 0.09 0.12 0.15 0.00 0.01 0.00 0.00 1 0.10 0.04 0.05 0.08 0.10 0.03 0.05 0.08 0.21 0.07 0.10 0.15 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.18 0.09 0.10 0.08 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.08 0.13 0.08 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.13 0.08 0.11 0.14 0.00 0.00 0.00 0.00 0.3 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.10 0.09 0.13 0.13 0.00 0.00 0.00 0.00 0.4 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.04 0.07 0.07 0.00 0.00 0.00 0.00   0.8 0.5 0.01 0.02 0.02 0.01 0.01 0.01 0.02 0.01 0.07 0.05 0.12 0.07 0.00 0.01 0.00 0.00 0.6 0.02 0.02 0.03 0.02 0.02 0.01 0.03 0.02 0.06 0.04 0.12 0.06 0.00 0.01 0.00 0.00 0.7 0.03 0.04 0.04 0.03 0.03 0.03 0.04 0.03 0.07 0.07 0.12 0.07 0.00 0.02 0.00 0.00 0.8 0.06 0.07 0.07 0.07 0.06 0.06 0.07 0.07 0.12 0.13 0.15 0.14 0.00 0.01 0.00 0.00 0.9 0.16 0.08 0.08 0.16 0.16 0.08 0.08 0.16 0.26 0.13 0.12 0.27 0.00 0.01 0.00 0.00 1 0.25 0.09 0.09 0.26 0.25 0.09 0.09 0.26 0.36 0.12 0.13 0.37 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.01 0.20 0.11 0.17 0.18 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.18 0.07 0.12 0.09 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.12 0.05 0.07 0.05 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.05 0.02 0.05 0.03 0.00 0.00 0.00 0.00   0.9 0.4 0.01 0.01 0.02 0.00 0.01 0.01 0.02 0.00 0.04 0.02 0.07 0.02 0.00 0.00 0.00 0.00 0.5 0.02 0.02 0.04 0.01 0.02 0.01 0.04 0.01 0.05 0.03 0.11 0.03 0.00 0.01 0.00 0.00 0.6 0.02 0.03 0.07 0.01 0.02 0.01 0.07 0.01 0.04 0.03 0.15 0.02 0.00 0.02 0.00 0.00 0.7 0.04 0.08 0.10 0.02 0.04 0.06 0.10 0.02 0.08 0.10 0.17 0.04 0.00 0.03 0.00 0.00 0.8 0.18 0.15 0.13 0.06 0.18 0.14 0.13 0.06 0.25 0.20 0.18 0.08 0.00 0.01 0.00 0.00 0.9 0.63 0.22 0.14 0.31 0.63 0.22 0.14 0.31 0.78 0.27 0.17 0.38 0.00 0.00 0.00 0.00 1 0.80 0.27 0.15 0.52 0.80 0.27 0.15 0.52 0.91 0.31 0.17 0.60 0.00 0.00 0.00 0.00


Appendix A.5.2 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.18 0.07 0.07 0.07 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.09 0.04 0.06 0.03 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.02 0.01 0.02 0.01 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.02 0.00 0.01 0.00 0.02 0.00 0.03 0.01 0.04 0.01 0.00 0.00 0.00 0.00 0.4 0.01 0.01 0.05 0.00 0.01 0.00 0.05 0.00 0.02 0.01 0.09 0.01 0.00 0.00 0.00 0.00   0.95 0.5 0.02 0.01 0.10 0.01 0.02 0.01 0.10 0.01 0.03 0.01 0.16 0.01 0.00 0.00 0.00 0.00 0.6 0.03 0.04 0.14 0.01 0.03 0.02 0.14 0.01 0.04 0.02 0.20 0.02 0.00 0.02 0.00 0.00 0.7 0.11 0.09 0.17 0.02 0.11 0.08 0.17 0.02 0.14 0.10 0.22 0.03 0.00 0.02 0.00 0.00 0.8 0.77 0.25 0.20 0.09 0.77 0.25 0.20 0.09 0.92 0.29 0.23 0.10 0.00 0.01 0.00 0.00 0.9 0.89 0.46 0.22 0.73 0.89 0.45 0.22 0.73 0.97 0.50 0.24 0.80 0.00 0.00 0.00 0.00 1 0.94 0.51 0.23 0.88 0.94 0.51 0.23 0.88 0.98 0.52 0.23 0.91 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.08 0.12 0.10 0.00 0.00 0.00 0.00 0.1 0.01 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.03 0.02 0.01 0.02 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.01 0.02 0.01 0.00 0.00 0.00 0.00 0.3 0.01 0.01 0.04 0.01 0.01 0.01 0.04 0.01 0.01 0.01 0.06 0.01 0.00 0.00 0.00 0.00 0.4 0.01 0.01 0.10 0.01 0.01 0.01 0.10 0.01 0.02 0.01 0.15 0.01 0.00 0.00 0.00 0.00   0.975 0.5 0.02 0.02 0.17 0.01 0.02 0.01 0.17 0.01 0.02 0.01 0.21 0.01 0.00 0.01 0.00 0.00 0.6 0.06 0.08 0.23 0.02 0.06 0.07 0.23 0.02 0.07 0.08 0.28 0.02 0.00 0.01 0.00 0.00 0.7 0.85 0.39 0.27 0.15 0.85 0.38 0.27 0.15 0.96 0.43 0.31 0.17 0.00 0.01 0.00 0.00 0.8 0.91 0.71 0.30 0.90 0.91 0.70 0.30 0.90 0.99 0.76 0.32 0.97 0.00 0.00 0.00 0.00 0.9 0.95 0.79 0.33 0.95 0.95 0.79 0.33 0.95 0.99 0.82 0.35 0.98 0.00 0.00 0.00 0.00 1 0.98 0.80 0.34 0.98 0.98 0.80 0.34 0.98 0.99 0.80 0.35 0.99 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.03 0.00 0.00 0.00 0.03 0.00 0.00 0.00 1.00 0.00 0.07 0.05 0.00 0.00 0.00 0.00 0.1 0.94 0.00 0.60 0.94 0.94 0.00 0.60 0.94 1.00 0.00 0.63 1.00 0.00 0.00 0.00 0.00 0.2 0.97 0.00 0.79 0.97 0.97 0.00 0.79 0.97 1.00 0.00 0.82 1.00 0.00 0.00 0.00 0.00 0.3 0.98 0.00 0.85 0.98 0.98 0.00 0.85 0.98 1.00 0.00 0.87 1.00 0.00 0.00 0.00 0.00 0.4 0.99 0.00 0.89 0.99 0.99 0.00 0.89 0.99 1.00 0.00 0.90 1.00 0.00 0.00 0.00 0.00   0.999 0.5 0.99 0.94 0.91 0.99 0.99 0.94 0.91 0.99 1.00 0.95 0.92 1.00 0.00 0.00 0.00 0.00 0.6 0.99 0.98 0.92 0.99 0.99 0.98 0.92 0.99 1.00 0.99 0.93 1.00 0.00 0.00 0.00 0.00 0.7 1.00 0.99 0.94 1.00 1.00 0.99 0.94 1.00 1.00 0.99 0.94 1.00 0.00 0.00 0.00 0.00 0.8 1.00 0.99 0.94 1.00 1.00 0.99 0.94 1.00 1.00 0.99 0.94 1.00 0.00 0.00 0.00 0.00 0.9 1.00 0.99 0.94 1.00 1.00 0.99 0.94 1.00 1.00 1.00 0.94 1.00 0.00 0.00 0.00 0.00 1 1.00 1.00 0.95 1.00 1.00 1.00 0.95 1.00 1.00 1.00 0.95 1.00 0.00 0.00 0.00 0.00


Appendix A.5.3: Power of Performance for n = 50 (columns grouped as P1, P3, P5 and P1-P3; statistics C, D, M and A within each group)

 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.08 0.00 0.08 0.08 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 0.01 0.12 0.13 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.04 0.12 0.12 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.01 0.10 0.10 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.01 0.09 0.09 0.00 0.00 0.00 0.00   0.2 0.5 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.19 0.00 0.12 0.12 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 0.02 0.08 0.08 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.08 0.03 0.15 0.14 0.00 0.00 0.00 0.00 0.8 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.05 0.03 0.13 0.13 0.00 0.00 0.00 0.00 0.9 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.12 0.03 0.16 0.16 0.00 0.00 0.00 0.00 1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.02 0.11 0.11 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.21 0.02 0.10 0.05 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.20 0.04 0.13 0.11 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20 0.06 0.17 0.11 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.22 0.03 0.13 0.11 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.22 0.02 0.17 0.10 0.00 0.00 0.00 0.00   0.4 0.5 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.20 0.01 0.13 0.07 0.00 0.00 0.00 0.00 0.6 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.22 0.04 0.16 0.12 0.00 0.00 0.00 0.00 0.7 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.22 0.03 0.16 0.12 0.00 0.00 0.00 0.00 0.8 0.01 0.00 0.01 0.01 0.01 0.00 0.01 0.01 0.19 0.03 0.17 0.13 0.00 0.00 0.00 0.00 0.9 0.02 0.01 0.01 0.01 0.02 0.00 0.01 0.01 0.25 0.03 0.11 0.08 0.00 0.01 0.00 0.00 1 0.02 0.01 0.01 0.01 0.02 0.00 0.01 0.01 0.23 0.03 0.11 0.08 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 0.00 0.11 0.02 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.16 0.10 0.17 0.09 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.17 0.04 0.21 0.14 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.09 0.09 0.20 0.10 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.10 0.03 0.26 0.16 0.00 0.00 0.00 0.00   0.6 0.5 0.00 0.00 0.01 0.01 0.00 0.00 0.01 0.01 0.12 0.05 0.19 0.15 0.00 0.00 0.00 0.00 0.6 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.10 0.05 0.20 0.11 0.00 0.00 0.00 0.00 0.7 0.01 0.01 0.02 0.01 0.01 0.00 0.02 0.01 0.14 0.04 0.17 0.10 0.00 0.00 0.00 0.00 0.8 0.01 0.01 0.03 0.02 0.01 0.01 0.03 0.02 0.09 0.04 0.19 0.11 0.00 0.01 0.00 0.00 0.9 0.03 0.01 0.03 0.02 0.03 0.01 0.03 0.02 0.13 0.04 0.13 0.08 0.00 0.01 0.00 0.00 1 0.03 0.02 0.05 0.04 0.03 0.01 0.05 0.04 0.11 0.06 0.19 0.15 0.00 0.01 0.00 0.00


Appendix A.5.3 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.28 0.05 0.09 0.15 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.23 0.03 0.15 0.12 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.22 0.00 0.15 0.21 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.31 0.10 0.07 0.07 0.00 0.00 0.00 0.00 0.4 0.01 0.01 0.01 0.01 0.01 0.00 0.01 0.01 0.32 0.05 0.15 0.15 0.00 0.00 0.00 0.00   0.7 0.5 0.01 0.01 0.00 0.00 0.01 0.00 0.00 0.00 0.26 0.05 0.09 0.08 0.00 0.00 0.00 0.00 0.6 0.02 0.01 0.01 0.01 0.02 0.00 0.01 0.01 0.29 0.05 0.17 0.14 0.00 0.01 0.00 0.00 0.7 0.03 0.02 0.02 0.02 0.03 0.01 0.02 0.02 0.26 0.05 0.17 0.12 0.00 0.01 0.00 0.00 0.8 0.05 0.02 0.03 0.03 0.05 0.01 0.03 0.03 0.25 0.05 0.14 0.14 0.00 0.01 0.00 0.00 0.9 0.12 0.02 0.04 0.05 0.12 0.02 0.04 0.05 0.37 0.05 0.11 0.16 0.00 0.01 0.00 0.00 1 0.15 0.03 0.05 0.08 0.15 0.02 0.05 0.08 0.40 0.05 0.12 0.19 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.78 0.07 0.13 0.15 0.00 0.00 0.00 0.00 0.1 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.72 0.06 0.16 0.14 0.00 0.00 0.00 0.00 0.2 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.76 0.06 0.10 0.14 0.00 0.00 0.00 0.00 0.3 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.67 0.04 0.07 0.11 0.00 0.00 0.00 0.00 0.4 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.47 0.02 0.09 0.10 0.00 0.00 0.00 0.00   0.8 0.5 0.03 0.01 0.01 0.00 0.03 0.00 0.01 0.00 0.44 0.05 0.11 0.07 0.00 0.01 0.00 0.00 0.6 0.04 0.01 0.01 0.01 0.04 0.00 0.01 0.01 0.39 0.03 0.13 0.06 0.00 0.01 0.00 0.00 0.7 0.09 0.02 0.02 0.01 0.09 0.01 0.02 0.01 0.48 0.03 0.11 0.07 0.00 0.02 0.00 0.00 0.8 0.25 0.03 0.04 0.03 0.25 0.03 0.04 0.03 0.75 0.08 0.12 0.08 0.00 0.01 0.00 0.00 0.9 0.46 0.04 0.04 0.08 0.46 0.04 0.04 0.08 0.93 0.08 0.09 0.16 0.00 0.01 0.00 0.00 1 0.57 0.04 0.05 0.13 0.57 0.04 0.05 0.13 0.97 0.07 0.08 0.22 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.47 0.05 0.11 0.20 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20 0.07 0.13 0.07 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.33 0.06 0.03 0.13 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.10 0.01 0.06 0.03 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.08 0.01 0.05 0.01 0.00 0.00 0.00 0.00   0.9 0.5 0.02 0.01 0.02 0.00 0.02 0.00 0.02 0.00 0.09 0.02 0.08 0.02 0.00 0.00 0.00 0.00 0.6 0.03 0.02 0.04 0.01 0.03 0.00 0.04 0.01 0.08 0.01 0.11 0.02 0.00 0.02 0.00 0.00 0.7 0.05 0.05 0.06 0.01 0.05 0.03 0.06 0.01 0.12 0.06 0.14 0.03 0.00 0.03 0.00 0.00 0.8 0.23 0.08 0.08 0.03 0.23 0.07 0.08 0.03 0.40 0.12 0.14 0.06 0.00 0.01 0.00 0.00 0.9 0.64 0.13 0.10 0.17 0.64 0.12 0.10 0.17 0.89 0.17 0.14 0.23 0.00 0.00 0.00 0.00 1 0.80 0.14 0.09 0.37 0.80 0.14 0.09 0.37 0.96 0.17 0.11 0.44 0.00 0.00 0.00 0.00


Appendix A.5.3 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.28 0.05 0.13 0.20 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.11 0.05 0.05 0.07 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.05 0.03 0.03 0.03 0.00 0.00 0.00 0.00 0.3 0.01 0.01 0.01 0.00 0.01 0.00 0.01 0.00 0.03 0.02 0.03 0.02 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.02 0.00 0.01 0.00 0.02 0.00 0.02 0.01 0.06 0.01 0.00 0.00 0.00 0.00   0.95 0.5 0.01 0.01 0.04 0.01 0.01 0.01 0.04 0.01 0.02 0.01 0.09 0.02 0.00 0.01 0.00 0.00 0.6 0.01 0.03 0.07 0.01 0.01 0.01 0.07 0.01 0.02 0.02 0.13 0.01 0.00 0.02 0.00 0.00 0.7 0.03 0.07 0.11 0.02 0.03 0.05 0.11 0.02 0.04 0.07 0.16 0.02 0.00 0.03 0.00 0.00 0.8 0.09 0.17 0.13 0.04 0.09 0.16 0.13 0.04 0.12 0.21 0.18 0.05 0.00 0.01 0.00 0.00 0.9 0.75 0.28 0.14 0.47 0.75 0.28 0.14 0.47 0.88 0.32 0.17 0.55 0.00 0.00 0.00 0.00 1 0.90 0.33 0.15 0.79 0.90 0.33 0.15 0.79 0.95 0.35 0.16 0.84 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.25 0.06 0.15 0.09 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.06 0.02 0.02 0.03 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.02 0.01 0.01 0.01 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.02 0.00 0.01 0.00 0.02 0.00 0.01 0.00 0.04 0.01 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.06 0.00 0.01 0.00 0.06 0.00 0.01 0.00 0.10 0.00 0.00 0.00 0.00 0.00   0.975 0.5 0.02 0.01 0.12 0.01 0.02 0.01 0.12 0.01 0.02 0.01 0.17 0.01 0.00 0.01 0.00 0.00 0.6 0.04 0.04 0.17 0.01 0.04 0.02 0.17 0.01 0.06 0.03 0.23 0.01 0.00 0.02 0.00 0.00 0.7 0.78 0.15 0.20 0.02 0.78 0.14 0.20 0.02 0.95 0.17 0.24 0.02 0.00 0.02 0.00 0.00 0.8 0.87 0.44 0.24 0.17 0.87 0.44 0.24 0.17 0.99 0.50 0.27 0.20 0.00 0.00 0.00 0.00 0.9 0.92 0.61 0.25 0.90 0.92 0.61 0.25 0.90 0.99 0.65 0.27 0.97 0.00 0.00 0.00 0.00 1 0.98 0.65 0.26 0.96 0.98 0.65 0.26 0.96 0.99 0.66 0.26 0.98 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 1.00 0.17 0.17 0.19 0.00 0.00 0.00 0.00 0.1 0.90 0.85 0.47 0.90 0.90 0.84 0.47 0.90 1.00 0.94 0.52 1.00 0.00 0.00 0.00 0.00 0.2 0.95 0.95 0.72 0.95 0.95 0.95 0.72 0.95 1.00 0.99 0.76 1.00 0.00 0.00 0.00 0.00 0.3 0.97 0.97 0.80 0.97 0.97 0.97 0.80 0.97 1.00 1.00 0.83 1.00 0.00 0.00 0.00 0.00 0.4 0.98 0.98 0.85 0.98 0.98 0.98 0.85 0.98 1.00 1.00 0.87 1.00 0.00 0.00 0.00 0.00   0.999 0.5 0.99 0.98 0.88 0.99 0.99 0.98 0.88 0.99 1.00 1.00 0.89 1.00 0.00 0.00 0.00 0.00 0.6 0.99 0.99 0.90 0.99 0.99 0.99 0.90 0.99 1.00 1.00 0.91 1.00 0.00 0.00 0.00 0.00 0.7 0.99 0.99 0.91 0.99 0.99 0.99 0.91 0.99 1.00 1.00 0.91 1.00 0.00 0.00 0.00 0.00 0.8 1.00 1.00 0.91 1.00 1.00 1.00 0.91 1.00 1.00 1.00 0.91 1.00 0.00 0.00 0.00 0.00 0.9 1.00 1.00 0.91 1.00 1.00 1.00 0.91 1.00 1.00 1.00 0.91 1.00 0.00 0.00 0.00 0.00 1 1.00 1.00 0.91 1.00 1.00 1.00 0.91 1.00 1.00 1.00 0.91 1.00 0.00 0.00 0.00 0.00


Appendix A.5.4: Power of Performance for n = 70 (columns grouped as P1, P3, P5 and P1-P3; statistics C, D, M and A within each group)

 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.11 0.02 0.17 0.13 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.03 0.06 0.19 0.17 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.17 0.00 0.13 0.13 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.19 0.00 0.25 0.21 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.13 0.02 0.27 0.21 0.00 0.00 0.00 0.00   0.2 0.5 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.14 0.00 0.26 0.19 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.02 0.18 0.11 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.01 0.15 0.09 0.00 0.00 0.00 0.00 0.8 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.04 0.03 0.21 0.15 0.00 0.00 0.00 0.00 0.9 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.13 0.01 0.20 0.13 0.00 0.00 0.00 0.00 1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12 0.03 0.18 0.12 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.02 0.34 0.15 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.13 0.03 0.45 0.18 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.03 0.14 0.41 0.16 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.16 0.00 0.35 0.20 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.12 0.01 0.42 0.13 0.00 0.00 0.00 0.00   0.4 0.5 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.06 0.00 0.46 0.13 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.02 0.01 0.00 0.00 0.02 0.01 0.06 0.03 0.46 0.20 0.00 0.00 0.00 0.00 0.7 0.01 0.01 0.02 0.01 0.01 0.00 0.02 0.01 0.14 0.02 0.40 0.14 0.00 0.00 0.00 0.00 0.8 0.01 0.01 0.03 0.01 0.01 0.00 0.03 0.01 0.08 0.03 0.43 0.22 0.00 0.00 0.00 0.00 0.9 0.01 0.00 0.03 0.01 0.01 0.00 0.03 0.01 0.11 0.03 0.39 0.16 0.00 0.00 0.00 0.00 1 0.01 0.01 0.03 0.01 0.01 0.00 0.03 0.01 0.12 0.04 0.38 0.15 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.32 0.03 0.19 0.03 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.21 0.09 0.30 0.24 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.11 0.07 0.30 0.17 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.34 0.09 0.25 0.11 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.15 0.00 0.25 0.19 0.00 0.00 0.00 0.00   0.6 0.5 0.00 0.01 0.01 0.00 0.00 0.00 0.01 0.00 0.18 0.09 0.29 0.15 0.00 0.00 0.00 0.00 0.6 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.28 0.04 0.20 0.10 0.00 0.00 0.00 0.00 0.7 0.02 0.01 0.02 0.01 0.02 0.00 0.02 0.01 0.27 0.03 0.26 0.10 0.00 0.01 0.00 0.00 0.8 0.02 0.01 0.03 0.01 0.02 0.00 0.03 0.01 0.24 0.03 0.26 0.12 0.00 0.01 0.00 0.00 0.9 0.04 0.01 0.03 0.02 0.04 0.01 0.03 0.02 0.28 0.04 0.23 0.13 0.00 0.00 0.00 0.00 1 0.06 0.01 0.04 0.02 0.06 0.01 0.04 0.02 0.29 0.04 0.23 0.12 0.00 0.01 0.00 0.00


Appendix A.5.4 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.95 0.08 0.19 0.19 0.00 0.00 0.00 0.00 0.1 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 1.00 0.07 0.07 0.11 0.00 0.00 0.00 0.00 0.2 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.88 0.04 0.16 0.20 0.00 0.00 0.00 0.00 0.3 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.88 0.02 0.13 0.18 0.00 0.00 0.00 0.00 0.4 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.91 0.04 0.08 0.10 0.00 0.00 0.00 0.00   0.7 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.86 0.05 0.18 0.18 0.00 0.00 0.00 0.00 0.6 0.04 0.01 0.01 0.01 0.04 0.00 0.01 0.01 0.87 0.05 0.13 0.13 0.00 0.00 0.00 0.00 0.7 0.06 0.01 0.01 0.01 0.06 0.00 0.01 0.01 0.87 0.02 0.18 0.15 0.00 0.01 0.00 0.00 0.8 0.14 0.02 0.02 0.02 0.14 0.01 0.02 0.02 0.93 0.05 0.12 0.13 0.00 0.01 0.00 0.00 0.9 0.24 0.02 0.03 0.03 0.24 0.01 0.03 0.03 0.97 0.06 0.11 0.14 0.00 0.01 0.00 0.00 1 0.32 0.02 0.04 0.06 0.32 0.02 0.04 0.06 0.98 0.05 0.12 0.19 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.75 0.05 0.07 0.13 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.81 0.07 0.07 0.12 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.57 0.02 0.08 0.06 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.60 0.02 0.13 0.08 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.57 0.04 0.17 0.14 0.00 0.00 0.00 0.00   0.8 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.48 0.02 0.12 0.09 0.00 0.00 0.00 0.00 0.6 0.03 0.01 0.01 0.00 0.03 0.00 0.01 0.00 0.47 0.05 0.13 0.06 0.00 0.01 0.00 0.00 0.7 0.06 0.01 0.02 0.01 0.06 0.00 0.02 0.01 0.47 0.04 0.15 0.08 0.00 0.01 0.00 0.00 0.8 0.15 0.02 0.04 0.03 0.15 0.01 0.04 0.03 0.63 0.05 0.17 0.11 0.00 0.01 0.00 0.00 0.9 0.37 0.03 0.05 0.06 0.37 0.02 0.05 0.06 0.89 0.06 0.12 0.14 0.00 0.01 0.00 0.00 1 0.49 0.03 0.06 0.10 0.49 0.03 0.06 0.10 0.94 0.06 0.11 0.20 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.43 0.08 0.04 0.18 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.39 0.03 0.08 0.03 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.27 0.05 0.13 0.13 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.18 0.03 0.09 0.09 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.11 0.02 0.06 0.03 0.00 0.00 0.00 0.00   0.9 0.5 0.01 0.01 0.01 0.00 0.01 0.00 0.01 0.00 0.08 0.02 0.07 0.03 0.00 0.01 0.00 0.00 0.6 0.01 0.02 0.02 0.00 0.01 0.00 0.02 0.00 0.06 0.01 0.11 0.02 0.00 0.02 0.00 0.00 0.7 0.03 0.03 0.04 0.01 0.03 0.01 0.04 0.01 0.08 0.04 0.12 0.03 0.00 0.02 0.00 0.00 0.8 0.06 0.06 0.06 0.03 0.06 0.04 0.06 0.03 0.14 0.09 0.14 0.06 0.00 0.02 0.00 0.00 0.9 0.34 0.08 0.07 0.13 0.34 0.08 0.07 0.13 0.53 0.12 0.11 0.20 0.00 0.00 0.00 0.00 1 0.68 0.09 0.08 0.36 0.68 0.08 0.08 0.36 0.85 0.11 0.10 0.45 0.00 0.00 0.00 0.00


Appendix A.5.4 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.46 0.15 0.08 0.10 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.46 0.07 0.13 0.13 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.19 0.03 0.03 0.03 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.08 0.01 0.02 0.02 0.00 0.00 0.00 0.00 0.4 0.02 0.00 0.01 0.00 0.02 0.00 0.01 0.00 0.07 0.01 0.04 0.02 0.00 0.00 0.00 0.00   0.95 0.5 0.02 0.01 0.03 0.00 0.02 0.00 0.03 0.00 0.08 0.01 0.09 0.01 0.00 0.01 0.00 0.00 0.6 0.13 0.02 0.05 0.01 0.13 0.01 0.05 0.01 0.30 0.02 0.11 0.01 0.00 0.02 0.00 0.00 0.7 0.55 0.05 0.08 0.01 0.55 0.03 0.08 0.01 0.97 0.05 0.14 0.02 0.00 0.02 0.00 0.00 0.8 0.67 0.12 0.10 0.03 0.67 0.11 0.10 0.03 0.98 0.17 0.15 0.04 0.00 0.01 0.00 0.00 0.9 0.81 0.21 0.12 0.32 0.81 0.21 0.12 0.32 0.99 0.25 0.15 0.39 0.00 0.00 0.00 0.00 1 0.92 0.23 0.11 0.73 0.92 0.23 0.11 0.73 0.99 0.25 0.12 0.78 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.33 0.14 0.16 0.12 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.13 0.04 0.01 0.02 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.03 0.00 0.01 0.01 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.02 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.04 0.00 0.01 0.00 0.04 0.00 0.02 0.01 0.08 0.01 0.00 0.00 0.00 0.00   0.975 0.5 0.02 0.01 0.07 0.00 0.02 0.00 0.07 0.00 0.03 0.01 0.13 0.01 0.00 0.00 0.00 0.00 0.6 0.09 0.03 0.11 0.00 0.09 0.01 0.11 0.00 0.13 0.01 0.17 0.01 0.00 0.02 0.00 0.00 0.7 0.72 0.10 0.15 0.01 0.72 0.07 0.15 0.01 0.97 0.10 0.20 0.01 0.00 0.02 0.00 0.00 0.8 0.83 0.24 0.19 0.03 0.83 0.23 0.19 0.03 0.99 0.28 0.22 0.03 0.00 0.00 0.00 0.00 0.9 0.91 0.44 0.21 0.76 0.91 0.44 0.21 0.76 1.00 0.48 0.23 0.84 0.00 0.00 0.00 0.00 1 0.97 0.51 0.21 0.95 0.97 0.51 0.21 0.95 1.00 0.52 0.22 0.98 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.13 0.10 0.10 0.00 0.00 0.00 0.00 0.1 0.88 0.27 0.32 0.00 0.88 0.27 0.32 0.00 1.00 0.30 0.36 0.00 0.00 0.01 0.00 0.00 0.2 0.94 0.91 0.62 0.93 0.94 0.90 0.62 0.93 1.00 0.97 0.66 1.00 0.00 0.00 0.00 0.00 0.3 0.95 0.94 0.72 0.95 0.95 0.94 0.72 0.95 1.00 0.99 0.75 1.00 0.00 0.00 0.00 0.00 0.4 0.97 0.96 0.78 0.97 0.97 0.96 0.78 0.97 1.00 1.00 0.81 1.00 0.00 0.00 0.00 0.00   0.999 0.5 0.97 0.97 0.83 0.97 0.97 0.97 0.83 0.97 1.00 1.00 0.85 1.00 0.00 0.00 0.00 0.00 0.6 0.98 0.98 0.84 0.98 0.98 0.98 0.84 0.98 1.00 1.00 0.85 1.00 0.00 0.00 0.00 0.00 0.7 0.99 0.99 0.86 0.99 0.99 0.99 0.86 0.99 1.00 1.00 0.86 1.00 0.00 0.00 0.00 0.00 0.8 0.99 0.99 0.86 0.99 0.99 0.99 0.86 0.99 1.00 1.00 0.87 1.00 0.00 0.00 0.00 0.00 0.9 1.00 1.00 0.88 1.00 1.00 1.00 0.88 1.00 1.00 1.00 0.88 1.00 0.00 0.00 0.00 0.00 1 1.00 1.00 0.89 1.00 1.00 1.00 0.89 1.00 1.00 1.00 0.89 1.00 0.00 0.00 0.00 0.00


Appendix A.5.5: Power of Performance for n = 100 (columns grouped as P1, P3, P5 and P1-P3; statistics C, D, M and A within each group)

 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.04 0.00 1.00 0.13 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.18 0.00 1.00 0.03 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.25 0.00 1.00 0.06 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.10 0.00 1.00 0.19 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.08 0.00 1.00 0.13 0.00 0.00 0.00 0.00   0.2 0.5 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.20 0.00 1.00 0.15 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.12 0.00 1.00 0.12 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.08 0.03 1.00 0.13 0.00 0.00 0.00 0.00 0.8 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.09 0.01 1.00 0.13 0.00 0.00 0.00 0.00 0.9 0.00 0.00 0.03 0.00 0.00 0.00 0.03 0.00 0.04 0.01 1.00 0.12 0.00 0.00 0.00 0.00 1 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.12 0.01 1.00 0.10 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.07 0.07 0.93 0.19 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.18 0.00 0.82 0.07 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.18 0.00 0.82 0.07 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.22 0.00 0.78 0.08 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.14 0.02 0.86 0.14 0.00 0.00 0.00 0.00   0.4 0.5 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.14 0.02 0.88 0.13 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.11 0.01 0.89 0.09 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.08 0.00 0.93 0.12 0.00 0.00 0.00 0.00 0.8 0.01 0.00 0.03 0.00 0.01 0.00 0.03 0.00 0.17 0.04 0.85 0.11 0.00 0.00 0.00 0.00 0.9 0.01 0.01 0.05 0.01 0.01 0.00 0.05 0.01 0.14 0.04 0.88 0.14 0.00 0.00 0.00 0.00 1 0.01 0.01 0.05 0.01 0.01 0.00 0.05 0.01 0.15 0.01 0.87 0.09 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.16 0.22 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.13 0.17 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.04 0.08 0.04 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.08 0.17 0.00 0.00 0.00 0.00 0.4 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 1.00 0.00 0.08 0.13 0.00 0.00 0.00 0.00   0.6 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 1.00 0.04 0.09 0.18 0.00 0.00 0.00 0.00 0.6 0.03 0.00 0.00 0.01 0.03 0.00 0.00 0.01 1.00 0.01 0.12 0.21 0.00 0.00 0.00 0.00 0.7 0.04 0.01 0.01 0.01 0.04 0.00 0.01 0.01 1.00 0.02 0.12 0.18 0.00 0.01 0.00 0.00 0.8 0.07 0.01 0.01 0.01 0.07 0.00 0.01 0.01 1.00 0.02 0.12 0.21 0.00 0.01 0.00 0.00 0.9 0.12 0.01 0.01 0.02 0.12 0.00 0.01 0.02 1.00 0.02 0.09 0.18 0.00 0.00 0.00 0.00 1 0.15 0.01 0.01 0.03 0.15 0.00 0.01 0.03 1.00 0.02 0.08 0.17 0.00 0.01 0.00 0.00


Appendix A.5.5 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.09 0.17 0.09 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.07 0.07 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.03 0.17 0.20 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.97 0.00 0.11 0.11 0.00 0.00 0.00 0.00 0.4 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.96 0.06 0.24 0.20 0.00 0.00 0.00 0.00   0.7 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.99 0.09 0.14 0.14 0.00 0.00 0.00 0.00 0.6 0.03 0.01 0.01 0.00 0.03 0.00 0.01 0.00 0.94 0.02 0.17 0.09 0.00 0.01 0.00 0.00 0.7 0.04 0.01 0.01 0.00 0.04 0.00 0.01 0.00 0.92 0.07 0.16 0.07 0.00 0.01 0.00 0.00 0.8 0.10 0.01 0.02 0.01 0.10 0.01 0.02 0.01 0.98 0.05 0.16 0.09 0.00 0.01 0.00 0.00 0.9 0.18 0.01 0.02 0.02 0.18 0.01 0.02 0.02 1.00 0.04 0.12 0.09 0.00 0.01 0.00 0.00 1 0.24 0.01 0.04 0.03 0.24 0.01 0.04 0.03 1.00 0.03 0.15 0.14 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.13 0.00 0.17 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.97 0.06 0.09 0.27 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.87 0.00 0.15 0.31 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.90 0.10 0.05 0.22 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.78 0.04 0.09 0.18 0.00 0.00 0.00 0.00   0.8 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.74 0.03 0.08 0.16 0.00 0.00 0.00 0.00 0.6 0.03 0.01 0.00 0.01 0.03 0.00 0.00 0.01 0.75 0.03 0.10 0.16 0.00 0.01 0.00 0.00 0.7 0.06 0.01 0.01 0.01 0.06 0.00 0.01 0.01 0.80 0.04 0.07 0.15 0.00 0.01 0.00 0.00 0.8 0.13 0.02 0.01 0.03 0.13 0.01 0.01 0.03 0.85 0.04 0.09 0.17 0.00 0.02 0.00 0.00 0.9 0.30 0.02 0.03 0.07 0.30 0.01 0.03 0.07 0.97 0.05 0.09 0.24 0.00 0.01 0.00 0.00 1 0.44 0.02 0.04 0.14 0.44 0.02 0.04 0.14 0.99 0.03 0.08 0.31 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.57 0.10 0.17 0.20 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.54 0.05 0.14 0.22 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.50 0.06 0.24 0.26 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.47 0.04 0.13 0.21 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.32 0.06 0.14 0.13 0.00 0.00 0.00 0.00   0.9 0.5 0.01 0.01 0.01 0.00 0.01 0.00 0.01 0.00 0.21 0.02 0.08 0.08 0.00 0.00 0.00 0.00 0.6 0.02 0.01 0.01 0.01 0.02 0.00 0.01 0.01 0.16 0.02 0.11 0.07 0.00 0.01 0.00 0.00 0.7 0.03 0.02 0.03 0.01 0.03 0.01 0.03 0.01 0.17 0.03 0.14 0.04 0.00 0.02 0.00 0.00 0.8 0.11 0.04 0.05 0.02 0.11 0.02 0.05 0.02 0.32 0.07 0.14 0.05 0.00 0.02 0.00 0.00 0.9 0.51 0.06 0.06 0.12 0.51 0.06 0.06 0.12 0.91 0.10 0.11 0.21 0.00 0.00 0.00 0.00 1 0.71 0.07 0.08 0.35 0.71 0.07 0.08 0.35 0.96 0.09 0.11 0.47 0.00 0.00 0.00 0.00


Appendix A.5.5 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.21 0.00 0.04 0.08 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.27 0.06 0.09 0.06 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.14 0.07 0.04 0.04 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.04 0.02 0.03 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.04 0.02 0.03 0.02 0.00 0.00 0.00 0.00   0.95 0.5 0.01 0.01 0.02 0.00 0.01 0.00 0.02 0.00 0.03 0.01 0.08 0.01 0.00 0.00 0.00 0.00 0.6 0.01 0.02 0.03 0.00 0.01 0.00 0.03 0.00 0.03 0.01 0.10 0.01 0.00 0.01 0.00 0.00 0.7 0.02 0.04 0.06 0.01 0.02 0.01 0.06 0.01 0.04 0.02 0.13 0.02 0.00 0.03 0.00 0.00 0.8 0.05 0.08 0.08 0.01 0.05 0.07 0.08 0.01 0.08 0.12 0.14 0.02 0.00 0.01 0.00 0.00 0.9 0.61 0.11 0.10 0.11 0.61 0.11 0.10 0.11 0.82 0.15 0.13 0.14 0.00 0.00 0.00 0.00 1 0.86 0.14 0.10 0.54 0.86 0.14 0.10 0.54 0.95 0.15 0.11 0.60 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.18 0.03 0.09 0.12 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.11 0.05 0.03 0.08 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02 0.01 0.02 0.02 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.01 0.01 0.01 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.04 0.00 0.00 0.00 0.00 0.00   0.975 0.5 0.00 0.01 0.04 0.00 0.00 0.00 0.04 0.00 0.01 0.01 0.09 0.01 0.00 0.00 0.00 0.00 0.6 0.00 0.02 0.08 0.00 0.00 0.00 0.08 0.00 0.01 0.01 0.14 0.01 0.00 0.02 0.00 0.00 0.7 0.01 0.06 0.12 0.00 0.01 0.04 0.12 0.00 0.01 0.06 0.18 0.01 0.00 0.02 0.00 0.00 0.8 0.02 0.15 0.15 0.03 0.02 0.14 0.15 0.03 0.03 0.18 0.20 0.03 0.00 0.00 0.00 0.00 0.9 0.59 0.29 0.18 0.60 0.59 0.29 0.18 0.60 0.68 0.33 0.20 0.69 0.00 0.00 0.00 0.00 1 0.94 0.33 0.17 0.94 0.94 0.33 0.17 0.94 0.97 0.34 0.18 0.97 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A 0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.07 0.03 0.10 0.00 0.00 0.00 0.00 0.1 0.82 0.00 0.16 0.00 0.82 0.00 0.16 0.00 1.00 0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.2 0.91 0.75 0.48 0.91 0.91 0.75 0.48 0.91 1.00 0.82 0.52 0.99 0.00 0.00 0.00 0.00 0.3 0.94 0.91 0.62 0.94 0.94 0.91 0.62 0.94 1.00 0.97 0.66 1.00 0.00 0.00 0.00 0.00 0.4 0.96 0.94 0.70 0.96 0.96 0.94 0.70 0.96 1.00 0.98 0.74 1.00 0.00 0.00 0.00 0.00   0.999 0.5 0.97 0.96 0.76 0.97 0.97 0.96 0.76 0.97 1.00 0.99 0.78 1.00 0.00 0.00 0.00 0.00 0.6 0.98 0.97 0.80 0.98 0.98 0.97 0.80 0.98 1.00 0.99 0.81 1.00 0.00 0.00 0.00 0.00 0.7 0.99 0.98 0.80 0.99 0.99 0.98 0.80 0.99 1.00 1.00 0.81 1.00 0.00 0.00 0.00 0.00 0.8 0.99 0.99 0.82 0.99 0.99 0.99 0.82 0.99 1.00 1.00 0.82 1.00 0.00 0.00 0.00 0.00 0.9 0.99 0.99 0.83 0.99 0.99 0.99 0.83 0.99 1.00 1.00 0.83 1.00 0.00 0.00 0.00 0.00 1 1.00 1.00 0.82 1.00 1.00 1.00 0.82 1.00 1.00 1.00 0.82 1.00 0.00 0.00 0.00 0.00


Appendix A.5.6: Power of Performance for n = 150 (columns grouped as P1, P3, P5 and P1-P3; statistics C, D, M and A within each group)

 C D M A C D M A C D M A C D M A 0 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.13 0.00 1.00 0.19 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.44 0.00 1.00 0.06 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.08 0.00 1.00 0.08 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.05 0.00 1.00 0.14 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.23 0.00 1.00 0.13 0.00 0.00 0.00 0.00   0.2 0.5 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.08 0.00 1.00 0.17 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.20 0.03 1.00 0.10 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.22 0.00 1.00 0.07 0.00 0.00 0.00 0.00 0.8 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.18 0.03 1.00 0.13 0.00 0.00 0.00 0.00 0.9 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.21 0.00 1.00 0.10 0.00 0.00 0.00 0.00 1 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.16 0.00 1.00 0.23 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A

0 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.94 0.00 0.88 0.06 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.92 0.00 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 1.00 0.00 0.82 0.00 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 1.00 0.00 0.96 0.16 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 1.00 0.03 0.94 0.03 0.00 0.00 0.00 0.00   0.4 0.5 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.97 0.00 0.85 0.12 0.00 0.00 0.00 0.00 0.6 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.97 0.00 0.92 0.10 0.00 0.00 0.00 0.00 0.7 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.98 0.02 0.93 0.05 0.00 0.00 0.00 0.00 0.8 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.99 0.04 0.90 0.11 0.00 0.00 0.00 0.00 0.9 0.04 0.00 0.03 0.00 0.04 0.00 0.03 0.00 0.98 0.00 0.87 0.06 0.00 0.00 0.00 0.00 1 0.04 0.00 0.04 0.00 0.04 0.00 0.04 0.00 0.99 0.02 0.91 0.08 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A

0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.79 0.07 0.00 0.00 0.00 0.00 0.1 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.05 0.86 0.05 0.00 0.00 0.00 0.00 0.2 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.89 0.18 0.00 0.00 0.00 0.00 0.3 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.04 0.09 0.78 0.22 0.00 0.00 0.00 0.00 0.4 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.91 0.23 0.00 0.00 0.00 0.00   0.6 0.5 0.00 0.00 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.04 0.87 0.11 0.00 0.00 0.00 0.00 0.6 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.02 0.00 0.93 0.17 0.00 0.00 0.00 0.00 0.7 0.00 0.00 0.02 0.00 0.00 0.00 0.02 0.00 0.03 0.04 0.83 0.09 0.00 0.00 0.00 0.00 0.8 0.00 0.01 0.04 0.01 0.00 0.00 0.04 0.01 0.02 0.02 0.79 0.10 0.00 0.00 0.00 0.00 0.9 0.00 0.00 0.06 0.01 0.00 0.00 0.06 0.01 0.00 0.02 0.80 0.14 0.00 0.00 0.00 0.00 1 0.00 0.00 0.08 0.01 0.00 0.00 0.08 0.01 0.02 0.02 0.83 0.08 0.00 0.00 0.00 0.00


Appendix A.5.6 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A

0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.06 0.35 0.41 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.06 0.13 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.04 0.15 0.08 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.04 0.21 0.18 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.23 0.14 0.00 0.00 0.00 0.00   0.7 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 1.00 0.04 0.27 0.20 0.00 0.00 0.00 0.00 0.6 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.27 0.24 0.00 0.00 0.00 0.00 0.7 0.03 0.01 0.01 0.01 0.03 0.00 0.01 0.01 1.00 0.06 0.29 0.19 0.00 0.01 0.00 0.00 0.8 0.07 0.01 0.01 0.01 0.07 0.00 0.01 0.01 1.00 0.02 0.15 0.14 0.00 0.01 0.00 0.00 0.9 0.12 0.01 0.02 0.02 0.12 0.00 0.02 0.02 1.00 0.03 0.19 0.19 0.00 0.01 0.00 0.00 1 0.18 0.01 0.04 0.04 0.18 0.01 0.04 0.04 1.00 0.03 0.20 0.20 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A

0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.05 0.15 0.20 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.10 0.05 0.10 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.00 0.04 0.09 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.04 0.17 0.13 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.03 0.14 0.22 0.00 0.00 0.00 0.00   0.8 0.5 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.02 0.21 0.23 0.00 0.00 0.00 0.00 0.6 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 1.00 0.02 0.17 0.21 0.00 0.00 0.00 0.00 0.7 0.04 0.01 0.00 0.01 0.04 0.00 0.00 0.01 0.99 0.06 0.10 0.18 0.00 0.01 0.00 0.00 0.8 0.08 0.01 0.01 0.02 0.08 0.00 0.01 0.02 1.00 0.06 0.16 0.21 0.00 0.01 0.00 0.00 0.9 0.22 0.02 0.03 0.05 0.22 0.01 0.03 0.05 1.00 0.05 0.15 0.23 0.00 0.01 0.00 0.00 1 0.35 0.02 0.05 0.10 0.35 0.01 0.05 0.10 1.00 0.04 0.13 0.29 0.00 0.01 0.00 0.00 C D M A C D M A C D M A C D M A

0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.06 0.06 0.24 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.86 0.05 0.18 0.27 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.85 0.03 0.21 0.35 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.91 0.13 0.09 0.35 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.74 0.05 0.16 0.08 0.00 0.00 0.00 0.00   0.9 0.5 0.02 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.69 0.05 0.11 0.14 0.00 0.00 0.00 0.00 0.6 0.03 0.01 0.00 0.00 0.03 0.00 0.00 0.00 0.60 0.01 0.09 0.06 0.00 0.01 0.00 0.00 0.7 0.09 0.02 0.01 0.01 0.09 0.00 0.01 0.01 0.95 0.01 0.15 0.07 0.00 0.02 0.00 0.00 0.8 0.20 0.02 0.02 0.02 0.20 0.01 0.02 0.02 1.00 0.04 0.12 0.08 0.00 0.01 0.00 0.00 0.9 0.44 0.04 0.04 0.08 0.44 0.03 0.04 0.08 1.00 0.07 0.10 0.19 0.00 0.00 0.00 0.00 1 0.66 0.03 0.06 0.28 0.66 0.03 0.06 0.28 1.00 0.04 0.09 0.43 0.00 0.00 0.00 0.00


Appendix A.5.6 (cont.) P1 P3 P5 P1-P3  C D M A C D M A C D M A C D M A

0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.69 0.13 0.06 0.06 0.00 0.00 0.00 0.00 0.1 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.94 0.12 0.18 0.41 0.00 0.00 0.00 0.00 0.2 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.73 0.05 0.09 0.18 0.00 0.00 0.00 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.35 0.06 0.04 0.11 0.00 0.00 0.00 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.19 0.03 0.09 0.04 0.00 0.00 0.00 0.00   0.95 0.5 0.02 0.00 0.01 0.00 0.02 0.00 0.01 0.00 0.21 0.01 0.09 0.03 0.00 0.00 0.00 0.00 0.6 0.03 0.02 0.02 0.00 0.03 0.00 0.02 0.00 0.17 0.01 0.11 0.02 0.00 0.02 0.00 0.00 0.7 0.28 0.04 0.04 0.00 0.28 0.01 0.04 0.00 0.95 0.03 0.14 0.02 0.00 0.03 0.00 0.00 0.8 0.44 0.06 0.07 0.01 0.44 0.05 0.07 0.01 0.99 0.11 0.15 0.02 0.00 0.02 0.00 0.00 0.9 0.65 0.09 0.08 0.08 0.65 0.08 0.08 0.08 1.00 0.13 0.12 0.13 0.00 0.00 0.00 0.00 1 0.86 0.10 0.09 0.56 0.86 0.10 0.09 0.56 1.00 0.11 0.11 0.65 0.00 0.00 0.00 0.00 C D M A C D M A C D M A C D M A

0 0.00 0.00 0.00 0.00 0.00 0.00 0.06 0.00 0.50 0.06 0.06 0.06 0.00 0.00 0.06 0.00 0.1 0.00 0.00 0.00 0.00 0.00 0.00 0.03 0.00 0.45 0.03 0.03 0.03 0.00 0.00 0.03 0.00 0.2 0.00 0.00 0.00 0.00 0.00 0.00 0.02 0.00 0.17 0.02 0.02 0.00 0.00 0.00 0.02 0.00 0.3 0.01 0.00 0.00 0.00 0.01 0.00 0.01 0.00 0.06 0.01 0.01 0.01 0.00 0.00 0.01 0.00 0.4 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00   0.975 0.5 0.01 0.01 0.01 0.00 0.01 0.00 0.00 0.00 0.03 0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.6 0.03 0.02 0.02 0.00 0.03 0.00 0.00 0.00 0.06 0.00 0.00 0.01 0.00 0.02 0.01 0.00 0.7 0.51 0.05 0.05 0.01 0.51 0.02 0.03 0.01 0.95 0.03 0.03 0.02 0.00 0.03 0.01 0.00 0.8 0.66 0.12 0.12 0.02 0.66 0.11 0.16 0.02 0.99 0.16 0.16 0.03 0.00 0.01 0.05 0.00 0.9 0.80 0.20 0.20 0.24 0.80 0.20 0.24 0.24 1.00 0.24 0.24 0.30 0.00 0.00 0.05 0.00 1 0.95 0.24 0.24 0.90 0.95 0.24 0.25 0.90 1.00 0.25 0.25 0.94 0.00 0.00 0.01 0.00 C D M A C D M A C D M A C D M A

0 0.01 0.00 0.00 0.00 0.01 0.00 0.00 0.00 1.00 0.05 0.16 0.05 0.00 0.00 0.00 0.00 0.1 0.75 0.00 0.06 0.00 0.75 0.00 0.06 0.00 1.00 0.00 0.08 0.00 0.00 0.00 0.00 0.00 0.2 0.86 0.00 0.36 0.00 0.86 0.00 0.36 0.00 1.00 0.00 0.42 0.00 0.00 0.00 0.00 0.00 0.3 0.91 0.77 0.52 0.90 0.91 0.76 0.52 0.90 1.00 0.84 0.57 1.00 0.00 0.00 0.00 0.00 0.4 0.92 0.88 0.61 0.92 0.92 0.88 0.61 0.92 1.00 0.95 0.66 1.00 0.00 0.00 0.00 0.00   0.999 0.5 0.95 0.93 0.67 0.95 0.95 0.93 0.67 0.95 1.00 0.97 0.70 1.00 0.00 0.00 0.00 0.00 0.6 0.97 0.95 0.71 0.97 0.97 0.95 0.71 0.97 1.00 0.98 0.74 1.00 0.00 0.00 0.00 0.00 0.7 0.97 0.96 0.74 0.97 0.97 0.96 0.74 0.97 1.00 0.99 0.76 1.00 0.00 0.00 0.00 0.00 0.8 0.98 0.97 0.77 0.98 0.98 0.97 0.77 0.98 1.00 0.99 0.78 1.00 0.00 0.00 0.00 0.00 0.9 0.99 0.98 0.76 0.99 0.99 0.98 0.76 0.99 1.00 0.99 0.77 1.00 0.00 0.00 0.00 0.00 1 1.00 0.99 0.77 1.00 1.00 0.99 0.77 1.00 1.00 0.99 0.77 1.00 0.00 0.00 0.00 0.00
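Note on reproducing entries of this kind: the tables in Appendices A.5.1 to A.5.6 report Monte Carlo estimates of the power of performance of the four discordancy statistics at different sample sizes, concentration parameters and levels of contamination. The Python sketch below is only a minimal illustration of how one such power estimate can be obtained by simulation; it is not the code used in this thesis. The routine names (rwrapped_cauchy, c_type_statistic, estimated_power), the deletion-based form of the statistic, the fixed contamination shift, the cut-off value and the "correct identification" criterion are all assumptions made for the illustration; in the thesis the cut-off points come from the null-case simulations and the performance measures P1, P3 and P5 are as defined in the body of the text.

import numpy as np

rng = np.random.default_rng(2014)

def rwrapped_cauchy(n, mu=0.0, rho=0.9):
    # Wrap Cauchy(mu, gamma) draws onto [0, 2*pi); rho = exp(-gamma) is the
    # mean resultant length (concentration) of the wrapped Cauchy distribution.
    gamma = -np.log(rho)
    x = mu + gamma * np.tan(np.pi * (rng.random(n) - 0.5))
    return np.mod(x, 2.0 * np.pi)

def mean_resultant_length(theta):
    # Mean resultant length R-bar of a sample of angles.
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

def c_type_statistic(theta):
    # Illustrative deletion statistic: the increase in the mean resultant length
    # when one observation is removed, maximised over the observations.
    rbar = mean_resultant_length(theta)
    gains = np.array([mean_resultant_length(np.delete(theta, j)) - rbar
                      for j in range(len(theta))])
    return gains.max(), int(gains.argmax())

def estimated_power(n=30, rho=0.9, shift=np.pi, cutoff=0.05, reps=2000):
    # Proportion of replicates in which the contaminated observation is flagged.
    # 'cutoff' is a placeholder; in practice it would be the simulated cut-off
    # point of the statistic under the uncontaminated wrapped Cauchy model.
    hits = 0
    for _ in range(reps):
        theta = rwrapped_cauchy(n, rho=rho)
        theta[0] = np.mod(theta[0] + shift, 2.0 * np.pi)  # contaminate one angle
        stat, idx = c_type_statistic(theta)
        if stat > cutoff and idx == 0:
            hits += 1
    return hits / reps

print(round(estimated_power(), 3))

Repeating such a loop over a grid of sample sizes, concentration parameters and contamination shifts would produce tables of the same shape as those above.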

