
A Branch and Cut Algorithm for the Halfspace Depth Problem

by

Dan Chen
Bachelor of Science, Liaoning University, 2003

A Thesis Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Computer Science
in the Graduate Academic Unit of Computer Science

Supervisor:      David Bremner, PhD, Computer Science
Examining Board: Eric Aubanel, PhD, Computer Science, Chair
                 Patricia Evans, PhD, Computer Science
                 Luis Zuluaga, PhD, Business Administration

This thesis is accepted by the Dean of Graduate Studies

THE UNIVERSITY OF NEW BRUNSWICK
February, 2007

arXiv:0705.1956v1 [cs.CG] 14 May 2007
(c) Dan Chen, 2007

Abstract

The concept of data depth in non-parametric multivariate descriptive
statistics is the generalization of the univariate rank method to
multivariate data. Halfspace depth is a measure of data depth. Given a set
S of points and a point p, the halfspace depth (or rank) k of p is defined
as the minimum number of points of S contained in any closed halfspace with
p on its boundary. Computing halfspace depth is NP-hard, and it is
equivalent to the Maximum Feasible Subsystem problem. In this thesis a
mixed integer program is formulated with the big-M method for the halfspace
depth problem. We suggest a branch and cut algorithm. In this algorithm,
Chinneck's heuristic algorithm is used to find an upper bound, and a
related technique based on sensitivity analysis is used for branching.
Irreducible Infeasible Subsystem (IIS) hitting set cuts are applied. We
also suggest a binary search algorithm which may be more stable
numerically. The algorithms are implemented with the BCP framework from the
COIN-OR project.

Table of Contents

Abstract
Table of Contents
List of Tables
Abbreviations
List of Figures

1 The Halfspace Depth Problem
  1.1 Data Depth
  1.2 Halfspace Depth
    1.2.1 Properties of Halfspace Depth
  1.3 Overview of This Thesis

2 Maximum Feasible Subsystem
  2.1 Introduction
  2.2 Irreducible Infeasible Subsystems

3 Mixed Integer Program (MIP) Formulation
  3.1 The Infeasible System
  3.2 Parker's Formulation
  3.3 An MIP with the Big-M Method
  3.4 An Alternative MIP

4 A Heuristic Algorithm
  4.1 Elastic Programming
  4.2 The Heuristic Algorithm

5 The Branch and Cut Paradigm
  5.1 The Branch and Bound Method
    5.1.1 Introduction
    5.1.2 The Branch and Bound Method
    5.1.3 Strategies in the Branch and Bound Method
    5.1.4 An Algorithm Prototype
    5.1.5 More about Branch and Bound
  5.2 The Cutting Plane Method
  5.3 The Branch and Cut Method
    5.3.1 Parallel Branch and Cut

6 The Branch and Cut Algorithm
  6.1 The Algorithm
  6.2 Special Techniques in This Algorithm
  6.3 A Binary Search Idea
    6.3.1 New MIP Formulation
    6.3.2 The Binary Search Algorithm

7 Implementation
  7.1 BCP
    7.1.1 Structure of BCP
    7.1.2 Parallelization
  7.2 Implementation Details
    7.2.1 The MPS File Generator
    7.2.2 The Cut Generators
    7.2.3 The Tree Manager Process
    7.2.4 The Linear Programming Process
    7.2.5 Parameters

8 Computational Experiments
  8.1 Numerical Issues
  8.2 Results for Randomly Generated Data
    8.2.1 Comparing Branching Rules and Tree Search Strategies
    8.2.2 Comparing Cut Generators
    8.2.3 Comparing Algorithms
    8.2.4 Parallel Execution
  8.3 Results for ANOVA Data

9 Conclusions
  9.1 Summary of the Work
  9.2 Open Problems and Future Work

Bibliography

A Testing Results
  A.1 Results of the Branch and Bound Algorithm
    A.1.1 Results of the Greedy Branching and MDS Cut Generator
    A.1.2 Results of the Strong Branching and MDS Cut Generator and
          Depth First Search
    A.1.3 Results of the Strong Branching and MDS Cut Generator and
          Best First Search
    A.1.4 Results of the Strong Branching and BIS Cut Generator
  A.2 Results of the Binary Search Algorithm
  A.3 Results of the Primal-Dual Algorithm

Curriculum Vita

List of Tables

8.1  Performance with different integer program formulations
A.1  Abbreviations used in this chapter
A.2  Data sets used for the tests
A.3  Performance with the greedy branching rule, data set: d5
A.4  Performance with the greedy branching rule, data set: d10
A.5  Performance with the greedy branching rule, data set: n50
A.6  Performance with the greedy branching rule, data set: sd5
A.7  Performance with strong branching, data set: d5
A.8  Performance with strong branching, data set: d10
A.9  Performance with strong branching, data set: n50
A.10 Performance with strong branching, data set: sd5
A.11 Performance with best first tree search strategy, data set: d5
A.12 Performance with best first tree search strategy, data set: d10
A.13 Performance with best first tree search strategy, data set: n50
A.14 Performance with best first tree search strategy, data set: sd5
A.15 Performance with BIS cut generator, data set: d5
A.16 Performance with BIS cut generator, data set: d10
A.17 Performance with BIS cut generator, data set: n50
A.18 Performance with BIS cut generator, data set: sd5
A.19 Performance of the binary search, data set: d5
A.20 Performance of the binary search, data set: d10
A.21 Performance of the binary search, data set: n50
A.22 Performance of the binary search, data set: sd5
A.23 Performance of the primal-dual algorithm, data set: d5
A.24 Performance of the primal-dual algorithm, data set: d10
A.25 Performance of the primal-dual algorithm, data set: n50
A.26 Performance of the primal-dual algorithm, data set: sd5

List of Figures

1.1 An example of halfspace depth in R^2
1.2 Another example of halfspace depth in R^2
1.3 Halfspace depth contours
2.1 An MDS in R^2
2.2 A degenerate MDS in R^2
3.1 Halfspace depth defined by an open halfspace
3.2 Halfspace depth defined by a cone
3.3 Intersections of the arcs
3.4 Lattice
5.1 Before branching
5.2 After branching
5.3 An example of fathoming
5.4 An example of a high lower bound
5.5 An example of a search tree
5.6 An example of a cutting plane
5.7 An example of cutting planes intersecting at the optimal solution
7.1 The Tree Manager Process
7.2 The Linear Programming Process
8.1 Comparison of different branching rules
8.2 Comparison of different branching rules
8.3 Comparison of different cutting plane generators
8.4 Comparison of different cutting plane generators
8.5 Comparison of different algorithms
8.6 Comparison of different algorithms
8.7 The performance of parallel execution

List of Abbreviations

ANOVA          Analysis of Variance
BCP            Branch-Cut-Price
BIS            Basic Infeasible Subsystem
IIS            Irreducible Infeasible Subsystem
LP             Linear Programming
MAX FS         Maximum Feasible Subsystem
MDS            Minimal Dominating Set
MIN IIS COVER  Minimum-Cardinality IIS Set-Covering
MIN ULR        Minimum Unsatisfied Linear Relation
MIP            Mixed Integer Program
MPS            Mathematical Programming System
NINF           Number of Infeasibilities
SINF           Sum of Infeasibilities

Chapter 1

The Halfspace Depth Problem

1.1 Data Depth

Halfspace depth is a measure of data depth. The term data depth comes from
non-parametric multivariate descriptive statistics. Descriptive statistics
is used to summarize a collection of data, for example by estimating the
center of the data set. In non-parametric statistics, the probability
distribution of the population is not considered, and the test statistics
are usually based on the ranks of the data. In multivariate data analysis,
every data item consists of several elements (i.e. is an n-tuple).
The idea of data depth in multivariate data analysis is to generalize the
univariate rank method to tuple data, and to order the data in a
center-outward fashion. Since tuple data items can be represented as points
in Euclidean space R^d, these two terms are used interchangeably in this
thesis. The rank or depth of a point measures the centrality of this point
with respect to a given set of points in high-dimensional space. The data
item with the highest rank is considered the center or median of the data
set, and best describes the data set.

In R^1, the median has the properties of a high breakdown point, affine
equivariance, and monotonicity. The breakdown point of a measure is the
fraction of the input that must be moved to infinity before the measure
itself moves to infinity. In R^1, the median has a breakdown point of 1/2
[3]. After an affine transformation of the data set, the median of the
transformed data is the transform of the original median; therefore, the
median is affine equivariant. When extra data is added to one side of the
data set, the median tends to move toward that side, never toward the
opposite side.
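These three properties of the univariate median can be checked directly on
a small sample. The following sketch is illustrative only (the data values
are invented, not from the thesis) and uses Python's statistics.median:

```python
import statistics

data = [3, 1, 4, 1, 5, 9, 2]                # n = 7; sorted: 1 1 2 3 4 5 9
print(statistics.median(data))               # -> 3 (the 4th smallest)

# Affine equivariance: the median of the transformed data is the
# transform of the original median.
print(statistics.median([2 * x + 10 for x in data]))         # -> 16 = 2*3 + 10

# Breakdown point 1/2: moving fewer than half of the points toward
# "infinity" leaves the median bounded ...
print(statistics.median([3, 1, 4, 1e12, 1e12, 1e12, 2]))     # -> 4
# ... but corrupting half or more of the points carries it away.
print(statistics.median([3, 1, 4, 1e12, 1e12, 1e12, 1e12]))  # -> 1e12
```

Note that corrupting 3 of the 7 values shifts the median only to a nearby
sample value, while corrupting 4 of 7 moves it arbitrarily far, matching
the breakdown point of 1/2 cited above.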
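For intuition, the halfspace depth defined in the Abstract can be computed
by brute force in the plane. The sketch below is ours, not one of the
algorithms of this thesis (which attack the NP-hard general-dimension
problem via MAX FS); it relies on the observation that in R^2 the number of
points in a closed halfplane through p changes only when the boundary line
rotates past a data point, so finitely many directions suffice:

```python
import math

def halfspace_depth_2d(p, points):
    """Brute-force halfspace depth of p with respect to `points` in R^2:
    the minimum, over all closed halfplanes with p on their boundary,
    of the number of points of the set contained in the halfplane."""
    px, py = p
    vecs = [(qx - px, qy - py) for qx, qy in points]
    # Points coinciding with p lie in every closed halfplane through p.
    coincident = sum(1 for v in vecs if v == (0, 0))
    vecs = [v for v in vecs if v != (0, 0)]
    if not vecs:
        return coincident

    # The count for the closed halfplane {x : u.(x - p) >= 0} changes
    # only when the inner normal u rotates past a direction perpendicular
    # to some q - p; test those event angles and the midpoints between
    # consecutive events.
    two_pi = 2 * math.pi
    events = sorted({(math.atan2(vy, vx) + s * math.pi / 2) % two_pi
                     for vx, vy in vecs for s in (1, -1)})
    candidates = list(events)
    for i, a in enumerate(events):
        b = events[(i + 1) % len(events)]
        if b <= a:
            b += two_pi
        candidates.append(((a + b) / 2) % two_pi)

    best = len(vecs)
    for theta in candidates:
        ux, uy = math.cos(theta), math.sin(theta)
        # Small tolerance keeps boundary points inside the *closed*
        # halfplane despite floating-point rounding.
        count = sum(1 for vx, vy in vecs if ux * vx + uy * vy >= -1e-9)
        best = min(best, count)
    return best + coincident
```

For example, the center of the four points (±1, 0), (0, ±1) has depth 2,
while a point outside the convex hull of the set has depth 0. This
enumeration is quadratic in the plane but does not generalize efficiently
to higher dimensions, which is why the thesis turns to integer programming.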