New Heuristic And Metaheuristic Approaches Applied To The Multiple-choice Multidimensional Knapsack Problem

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

by

Chaitr S. Hiremath M.S., Wright State University, 2004

2008 Wright State University

COPYRIGHT BY Chaitr S. Hiremath 2008

WRIGHT STATE UNIVERSITY SCHOOL OF GRADUATE STUDIES

February 23, 2008

I HEREBY RECOMMEND THAT THE DISSERTATION PREPARED UNDER MY SUPERVISION BY Chaitr S. Hiremath ENTITLED New Heuristic And Metaheuristic Approaches Applied To The Multiple-choice Multidimensional Knapsack Problem BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy.

Raymond R. Hill, Ph.D. Dissertation Director

Ramana Grandhi, Ph.D. Director, Engineering Ph.D. Program

Joseph F. Thomas, Jr., Ph.D. Dean, School of Graduate Studies

Committee on Final Examination

Raymond R. Hill, Ph.D.

James T. Moore, Ph.D.

Xinhui Zhang, Ph.D.

Gary Kinney, Ph.D.

Mateen Rizki, Ph.D.

ABSTRACT

Hiremath, Chaitr. Ph.D., Department of Biomedical, Industrial and Human Factors Engineering, Wright State University, 2008. New Heuristic And Metaheuristic Approaches Applied To The Multiple-choice Multidimensional Knapsack Problem.

The knapsack problem has been used to model various decision-making processes. Industrial applications need to satisfy additional constraints, and these necessities lead to variants and extensions of knapsack problems that are complex to solve. Heuristic algorithms have been developed by many researchers to solve these variants, and empirical analyses have compared the performance of the heuristics. However, little research has been done to determine why certain algorithms perform well on certain test problems but not so well on others, and little work has been done to gain knowledge of test problem characteristics and their effects on algorithm performance.

The research focuses on the Multiple-choice Multidimensional Knapsack Problem (MMKP), a complex variant of the knapsack problem. The objectives of the research are fourfold. The first objective is to show how empirical science can lead to theory. The research involves the empirical analysis of current heuristics with respect to problem structure, especially correlation and constraint slackness settings. The second objective is to consider the performance traits of heuristic procedures and develop a more diverse set of MMKP test problems considering problem characteristics like the number of variables, number of constraints, constraint correlation, and constraint right-hand side capacities. The third objective is the development of new heuristic approaches for solving the MMKP. This involves examining the existing heuristics against our new test set and using the analysis of the results to help in the development of new heuristic approaches. The fourth objective is to develop improved metaheuristic procedures for the MMKP using the improved heuristic approaches to initialize searches or to improve local search neighborhoods.

Contents

1 Introduction
  1.1 Discussion of Knapsack Problems
  1.2 Overview of the Dissertation Research
  1.3 Contributions of the Dissertation Research

2 Multi-Dimensional Knapsack Problems
  2.1 Introduction
  2.2 Branch-and-Bound Approach
  2.3 Dynamic Programming
  2.4 Greedy Heuristics
  2.5 Transformation Heuristics
  2.6 Metaheuristic Approaches
    2.6.1 Tabu Search (TS)
    2.6.2 Genetic Algorithms (GA)
    2.6.3 Simulated Annealing (SA)

3 Extensions and Variants of the Knapsack Problem involving the notion of sets
  3.1 Introduction
  3.2 Multiple Knapsack Problems (MKP)
    3.2.1 MKP Formulation
    3.2.2 Heuristic Solution Approaches for MKP
  3.3 Multiple Choice Knapsack Problems (MCKP)
    3.3.1 MCKP Formulation
    3.3.2 Heuristic Solution Approaches for MCKP
  3.4 Multiple-choice Multi-dimensional Knapsack Problems (MMKP)
    3.4.1 MMKP Formulation
    3.4.2 Heuristic Solution Approaches for MMKP
  3.5 Applications and Formulations of the MMKP-type problems

4 Legacy Heuristics and Test Problems Analysis
  4.1 Introduction
  4.2 Legacy Heuristics
    4.2.1 Legacy Heuristics for Multiple Knapsack Problems (MKP)
    4.2.2 Legacy Heuristics for Multiple Choice Knapsack Problems (MCKP)
    4.2.3 Legacy Heuristics for Multiple-choice Multi-dimensional Knapsack Problems (MMKP)
  4.3 Test Problem Analysis
    4.3.1 Test Problem Analysis for Multiple Knapsack Problems (MKP)
    4.3.2 Test Problems for Multiple Choice Knapsack Problems (MCKP)
    4.3.3 Test Problems for Multiple-choice Multi-dimensional Knapsack Problems (MMKP)
  4.4 Problem Structure Analysis of Test Problems
    4.4.1 Structure of MDKP Test Problems
    4.4.2 Structure of MKP Test Problems
    4.4.3 Structure of MCKP Test Problems
    4.4.4 Structure of MMKP Test Problems
  4.5 Summary

5 Empirical Analyses of Legacy MMKP Heuristics and Test Problem Generation
  5.1 Introduction
  5.2 Problem Generation and Problem Characteristics for MMKP
    5.2.1 Standard MMKP Test Problem Generation
    5.2.2 Analytical MMKP Test Problem Generation
    5.2.3 Competitive MMKP Test Problem Generation
    5.2.4 Analytical MMKP Test Sets Versus Available MMKP Test Set
  5.3 Empirical Analyses of MMKP Heuristics on Available Test Problems
  5.4 Empirical Analyses of MMKP Heuristics on New MMKP Test Problem Set
    5.4.1 Analyses based on Constraint Right-Hand Side Setting
    5.4.2 Analyses based on Correlation Structure
  5.5 Summary

6 New Greedy Heuristics for the MMKP
  6.1 Introduction
  6.2 A TYPE-based Heuristic for the MMKP
  6.3 New Greedy Heuristic Version 1 (CH1)
    6.3.1 NG V3 Heuristic (Cho 2005)
    6.3.2 CH1 Implementation
    6.3.3 Empirical Tests for the CH1 Implementation
  6.4 New Greedy Heuristic Version 2 (CH2)
    6.4.1 CH2 Implementation
    6.4.2 Empirical Tests for the CH2 Implementation
  6.5 Summary

7 Metaheuristic Solution Procedure for the MMKP
  7.1 Introduction
  7.2 Concept of a Search Neighborhood
  7.3 First-Level Tabu Search (FLTS) for the MMKP
    7.3.1 FLTS Implementation
    7.3.2 Empirical Tests for the FLTS Implementation
    7.3.3 Extensions of the FLTS for the MMKP
  7.4 Sequential Fan Candidate List (FanTabu) for the MMKP
    7.4.1 FanTabu Implementation
    7.4.2 Empirical Tests for the FanTabu Implementation
  7.5 CPCCP with Fan Candidate List (CCFT) for the MMKP
    7.5.1 CCFT Implementation
    7.5.2 Empirical Tests for the CCFT Candidate List Implementation
  7.6 Comparison of TS approaches with Reactive Local Search Approach (RLS)
    7.6.1 RLS Approach
    7.6.2 Empirical Tests Comparing TS approaches with RLS Approach
  7.7 Summary

8 Summary, Contributions, and Future Avenues
  8.1 Summary and Contributions
    8.1.1 Legacy Heuristics and Test Problem Analysis
    8.1.2 Insights on Heuristic Performance Based on Problem Structure
    8.1.3 Empirical Science Leading to Theory
    8.1.4 New Test Set Development
    8.1.5 New Greedy Heuristics Development
    8.1.6 Metaheuristic Solution Procedure Development
  8.2 Future Avenues

Appendices

A Additional Results from Empirical Tests

B Details on Cho Generation Approach

Bibliography

List of Figures

2.1 Different types of crossover operators (Zalzala and Fleming 1997; Renner and Ekart 2003)
2.2 Genetic Algorithm Flowchart (Renner and Ekart 2003)

4.1 Range of Correlation Values Between Objective Function and Constraint Coefficients for MDKP Standard Test Problems
4.2 Range of Correlation Values Between Constraint Coefficients for MDKP Standard Test Problems
4.3 Range of Correlation Values Between Objective Function and Constraint Coefficients for Hung and Fisk MKP Test Problems
4.4 Range of Correlation Values Between Objective Function and Constraint Coefficients for Martello and Toth MKP Test Problems
4.5 Range of Correlation Values Between Objective Function and Constraint Coefficients for Sinha and Zoltners MCKP Test Problems
4.6 Range of Correlation Values Between Objective Function and Constraint Coefficients for Moser's MMKP Test Problems
4.7 Range of Correlation Values Between Objective Function and Constraint Coefficients for Khan's MMKP Test Problems
4.8 Range of Correlation Values Between Constraint Coefficients for Khan's MMKP Test Problems
4.9 Range of Correlation Values Between Objective Function and Constraint Coefficients for the correlated MMKP Test Problems generated from Khan's MMKP Test Problem Generator
4.10 Range of Correlation Values Between Constraint Coefficients for the correlated MMKP Test Problems generated from Khan's MMKP Test Problem Generator

5.1 Range of Correlation Values Between Objective Function and Constraint Coefficients for Available MMKP Test Problems
5.2 Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 5 classes, 10 items, 5 knapsacks
5.3 Range of Correlation Values Between Objective Function and Constraint Coefficients for New MMKP Test Sets
5.4 Range of the Right-Hand Side Values of the Knapsack Constraints of Available MMKP Test Problems
5.5 Range of the Right-Hand Side Values of the Knapsack Constraints of Analytical MMKP Test Sets

6.1 Flowchart for TYPE-based Heuristic
6.2 Comparison of TYPE-based Heuristic according to Problem Type
6.3 Flow Chart of NG V3 Heuristic (Cho 2005)
6.4 Comparison of CH1 based on Number of Problems Solved to Optimal on New MMKP Test Sets
6.5 Comparison of CH1 based on Number of Times Equal to Best on New MMKP Test Sets
6.6 Comparison of All Heuristics based on Number of Problems Solved to Optimal on New MMKP Test Sets
6.7 Comparison of All Heuristics based on Number of Times Equal to Best on New MMKP Test Sets
6.8 Comparison of All Heuristics based on Percentage Relative Error on New MMKP Test Sets

7.1 Comparison of FLTS based on Number of Problems Solved to Optimal on New MMKP Test Sets
7.2 Comparison of FLTS based on Number of Times Equal to Best on New MMKP Test Sets
7.3 Sequential Fan Candidate List (Glover and Laguna 1997)
7.4 Comparison of FanTabu based on Number of Problems Solved to Optimal on New MMKP Test Sets
7.5 Comparison of FanTabu based on Number of Times Equal to Best on New MMKP Test Sets
7.6 Comparison of CCFT based on Number of Problems Solved to Optimal on New MMKP Test Sets
7.7 Comparison of CCFT based on Number of Times Equal to Best on New MMKP Test Sets
7.8 Comparison of All Heuristics based on Number of Problems Solved to Optimal on New MMKP Test Sets
7.9 Comparison of All Heuristics based on Number of Times Equal to Best on New MMKP Test Sets
7.10 Comparison of All Heuristics based on Percentage Relative Error on New MMKP Test Sets

A.1 Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 10 classes, 10 items, 5 knapsacks
A.2 Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 25 classes, 10 items, 5 knapsacks
A.3 Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 5 classes, 10 items, 10 knapsacks
A.4 Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 10 classes, 10 items, 10 knapsacks
A.5 Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 25 classes, 10 items, 10 knapsacks

List of Tables

4.1 Factors and Measures used in Empirical Analysis of the MKP Heuristics
4.2 Factors and Measures used in Empirical Analysis of the MCKP Heuristics
4.3 Factors and Measures used in Empirical Analysis of the MMKP Heuristics
4.4 Theoretical and Practical Objective and Constraint Coefficient Correlation for Pisinger's MKP Test Problems
4.5 Theoretical and Practical Objective and Constraint Coefficient Correlation for Pisinger's MCKP Test Problems
4.6 Correlation Between Objective and Constraint Coefficients Analysis of Khan's MMKP Test Problems
4.7 Interconstraint Correlation Coefficients Analysis of Khan's MMKP Test Problems

5.1 Coded Combinations of Slackness Settings for 5 Knapsacks
5.2 Coded Combinations of Slackness Settings for 10 Knapsacks
5.3 Coded Combinations of Slackness Settings for 25 Knapsacks
5.4 Correlation Between Objective and Constraint Coefficients Analysis of Available MMKP Test Problems
5.5 Correlation Between Objective and Constraint Coefficients Analysis of New MMKP Test Sets
5.6 Right-Hand Side Analysis of the Knapsack Constraints in Available MMKP Test Problems
5.7 Right-Hand Side Analysis of the Knapsack Constraints of Analytical MMKP Test Sets
5.8 Reported Results of the Legacy MMKP Heuristics on the 13 Available Test Problems
5.9 Obtained Results of the Legacy MMKP Heuristics on the 13 Available Test Problems
5.10 Performance Summary of the Legacy Heuristics on Khan's 13 Available Test Problems
5.11 Percentage Relative Error of Legacy Heuristics on Khan's 13 Available Test Problems
5.12 Results of the Legacy MMKP Heuristics on the Correlated Test Problems Generated using Khan's Test Problem Generator
5.13 Performance Summary of the Legacy Heuristics on Extra Generated Test Problems
5.14 Percentage Relative Error of Legacy Heuristics on Extra Generated Test Problems
5.15 Number of times best heuristic for each of the Right-Hand Side Combinations for 5 classes, 10 items, 5 knapsacks
5.16 Number of times best heuristic for each of the Right-Hand Side Combinations for 10 classes, 10 items, 5 knapsacks
5.17 Number of times best heuristic for each of the Right-Hand Side Combinations for 25 classes, 10 items, 5 knapsacks
5.18 Number of times best heuristic for each of the Right-Hand Side Combinations for 5 classes, 10 items, 10 knapsacks
5.19 Number of times best heuristic for each of the Right-Hand Side Combinations for 10 classes, 10 items, 10 knapsacks
5.20 Number of times best heuristic for each of the Right-Hand Side Combinations for 25 classes, 10 items, 10 knapsacks
5.21 Number of times best heuristic for each of the Right-Hand Side Combinations for 5 classes, 10 items, 25 knapsacks
5.22 Number of times best heuristic for each of the Right-Hand Side Combinations for 10 classes, 10 items, 25 knapsacks
5.23 Number of times best heuristic for each of the Right-Hand Side Combinations for 25 classes, 10 items, 25 knapsacks
5.24 Hypothesis Test Results of the Influence of Correlation Structure on Heuristic Performance
5.25 Hypothesis Test Results of the Influence of the Correlation Structure on Heuristic Performance based on Constraint Tightness for MMKP problems with 5 knapsacks, 10 items, and varied classes of 5, 10, and 25
5.26 Hypothesis Test Results of the Influence of the Correlation Structure on Heuristic Performance based on Constraint Tightness for MMKP problems with 10 knapsacks, 10 items, and varied classes of 5, 10, and 25
5.27 Hypothesis Test Results of the Influence of the Correlation Structure on Heuristic Performance based on Constraint Tightness for MMKP problems with 25 knapsacks, 10 items, and varied classes of 5, 10, and 25

6.1 MMKP Problems with 5 Knapsacks, Different Right-Hand Side Combinations, and Best Heuristic
6.2 MMKP Problems with 10 Knapsacks, Different Right-Hand Side Combinations, and Best Heuristic
6.3 Number of Times Equal to Best of the Legacy Heuristic Solutions according to Problem Type
6.4 Solution Quality for CH1 on Khan's 13 Available Test Problems
6.5 Performance Summary of the CH1 with Legacy Heuristics on Khan's 13 Available Test Problems
6.6 Percentage Relative Error by CH1 and each Legacy Heuristic for Khan's 13 Available Test Problems
6.7 Solution Quality for CH1 on Additional Test Problems generated via Khan's Approach
6.8 Performance Summary of the CH1 with Legacy Heuristics on Additional Test Problems generated via Khan's Approach
6.9 Performance Summary of the CH1 with Legacy Heuristics on New MMKP Test Sets
6.10 Percentage Relative Error by CH1 and each Legacy Heuristic for the New MMKP Test Sets
6.11 Computational Time in milliseconds by CH1 and each Legacy Heuristic for the New MMKP Test Sets
6.12 Solution Quality for CH2 on Khan's 13 Available Test Problems
6.13 Performance Summary of All Heuristics on Khan's 13 Available Test Problems
6.14 Percentage Relative Error of All Heuristics on Khan's 13 Available Test Problems
6.15 Solution Quality of All Heuristics on Additional Test Problems
6.16 Performance Summary of the CH2 against Legacy Heuristics on Additional Test Problems
6.17 Performance Summary of the CH2 with Legacy Heuristics on New MMKP Test Sets
6.18 Percentage Relative Error of All Heuristics for the New MMKP Test Sets
6.19 Computational Time in milliseconds of All Heuristics on New MMKP Test Sets
6.20 Comparison of All Heuristics based on Percentage of Optimum on Khan's MMKP Test Sets
6.21 Comparison of All Heuristics based on Percentage of Optimum on the New MMKP Test Sets

7.1 Solution Quality for FLTS on Khan's 13 Available Test Problems
7.2 Performance Summary of the FLTS with Legacy Heuristics on Khan's 13 Available Test Problems
7.3 Percentage Relative Error by FLTS and each Legacy Heuristic for Khan's 13 Available Test Problems
7.4 Solution Quality for FLTS on 24 Additional Test Problems
7.5 Performance Summary of the FLTS with Legacy Heuristics on 24 Additional Test Problems
7.6 Performance Summary of the FLTS with Legacy Heuristics on New MMKP Test Sets
7.7 Percentage Relative Error by FLTS and each Legacy Heuristic for the New MMKP Test Sets
7.8 Computational Time in milliseconds by FLTS and each Legacy Heuristic on the New MMKP Test Sets
7.9 Solution Quality for FanTabu on Khan's 13 Available Test Problems
7.10 Performance Summary of the FanTabu and Other Heuristics on Khan's 13 Available Test Problems
7.11 Percentage Relative Error by FanTabu and Other Heuristics for Khan's 13 Available Test Problems
7.12 Solution Quality for FanTabu and Other Heuristics on 24 Additional Test Problems
7.13 Performance Summary of the FanTabu with Other Heuristics on 24 Additional Test Problems
7.14 Performance Summary of the FanTabu with Other Heuristics on New MMKP Test Sets
7.15 Percentage Relative Error by FanTabu and Other Heuristics for the New MMKP Test Sets
7.16 Computational Time in milliseconds by FanTabu and Other Heuristics on the New MMKP Test Sets
7.17 Solution Quality for CCFT on Khan's 13 Available Test Problems
7.18 Performance Summary of the CCFT with Other Heuristics on Khan's 13 Available Test Problems
7.19 Percentage Relative Error by CCFT and Other Heuristics on Khan's 13 Available Test Problems
7.20 Solution Quality for CCFT on 24 Additional Test Problems
7.21 Performance Summary of the CCFT with Other Heuristics on 24 Additional Test Problems
7.22 Performance Summary of the CCFT with Other Heuristics on New MMKP Test Sets
7.23 Percentage Relative Error of CCFT and Other Heuristics for the New MMKP Test Sets
7.24 Computational Time in milliseconds by CCFT and Other Heuristics for the New MMKP Test Sets
7.25 Solution Quality Comparisons Among All Heuristics on Khan's 13 Available Test Problems
7.26 Performance Summary of All Heuristics on Khan's 13 Available Test Problems
7.27 Percentage Relative Error Achieved by All Heuristics on Khan's 13 Available Test Problems
7.28 Solution Quality of All Heuristics on 24 Additional Test Problems
7.29 Performance Summary of All Heuristics on 24 Additional Test Problems
7.30 Performance Summary of All Heuristics on New MMKP Test Sets
7.31 Percentage Relative Error Achieved by All Heuristics on the New MMKP Test Sets
7.32 Computational Time in milliseconds of All Heuristics on the New MMKP Test Sets
7.33 Comparison of All Heuristics based on Percentage of Optimum on Khan's MMKP Test Problems
7.34 Comparison of All Heuristics based on Percentage of Optimum on the New MMKP Test Sets

A.1 New MMKP Test Problem Sets
A.2 Exact Solutions for the New MMKP Test Problem Sets
A.3 Exact Solutions for the New MMKP Test Problem Sets (Continued)

B.1 Problem Set Parameters for Cho (2005) Competitive MDKP Test Problem Sets

Acknowledgement

I would like to express my deep sense of gratitude to Dr. Raymond R. Hill for his exceptional guidance and support. He has been a great source of inspiration and strength throughout my graduate studies, master's thesis research, and this dissertation work. His dedication and patience are virtues I will cherish for a long time and strive to emulate in my career. I am indebted to him for his encouragement and support of my research efforts from beginning to end. It was a great pleasure and honor to work under him.

I would like to thank my dissertation committee members, Dr. Moore, Dr. Zhang, Dr. Kinney, and Dr. Rizki, for taking the time to serve on my committee and for providing valuable suggestions for my work.

Finally, I extend my great thanks to my family and close ones. I am grateful to my father, mother, Jatin, and Amita for their support, kindness, love and affection, which kept me going in achieving my career goals. I am deeply indebted to my parents for their active interest and encouragement in my education. My sincere thanks to Jatin for all the motivation, encouragement, and support throughout my studies and research at graduate school.

Dedicated to

Mummy, Daddy, Jatin, and Amita

1. Introduction

1.1 Discussion of Knapsack Problems

Heuristic techniques have long been used to quickly solve many combinatorial optimization problems. In practice, finding an exact solution is sometimes less practical than using an easily computed method for acquiring near-optimal solutions. When problems grow larger in size, obtaining exact solutions can take excessive computational time and storage space. In such cases, the results obtained by a complex, time-consuming method may be no more attractive than near-optimal solutions. Considering further the imprecision of real-world problem data and the approximate nature of some formulations, obtaining a precise solution may in reality seem meaningless; obtaining a near-optimal solution in reasonable computational time can be more beneficial and practical (Cho 2002).

Heuristic optimization algorithms seek good feasible solutions to optimization problems where the complexity of the problem, or the limited time available for its solution, does not practically allow finding a guaranteed exact solution. Unlike exact algorithms, where time-efficiency is the main measure of success, two important issues arise when evaluating heuristics: how fast solutions can be obtained, and how close the solutions are to optimal (Rardin and Uzsoy 2001).

The binary knapsack problem (KP) is a non-trivial problem comprising binary decision variables and a single constraint with positive coefficients. The KP is a difficult class of problems (Kellerer et al. 2004). The KP has been used to model various decision-making processes and finds a variety of real-world applications: processor allocation in distributed computing systems, cutting stock, capital budgeting, project selection, cargo loading, and resource allocation problems. Industrial applications need to satisfy additional constraints, such as the urgency of requests, priority and time windows of the requests, and packaging with different weight and volume requirements. These necessities lead to the variants and extensions of knapsack problems. Multi-Dimensional Knapsack Problems (MDKP), Multiple Knapsack Problems (MKP), Multiple Choice Knapsack Problems (MCKP), and Multiple-choice Multidimensional Knapsack Problems (MMKP) are a few of the KP variants.

The MDKP consists of selecting a subset of n given objects (or items) such that the total profit achieved by the selected objects is maximized while a set of knapsack constraints is satisfied. The MKP, MCKP, and MMKP involve the notion of sets or classes of items under consideration. These problems also find practical applications and form a more difficult set of KP problem types. The intratheater airlift problem, budgeting problem, adaptive multimedia problem, logistic scheduling problem, network design problem, warehouse container storage problem, resource allocation problem for satellites, airlift loading problem, and multiple container packing problem are applications where KP variants play an important role. Many heuristic methods have been applied to the KP and its variants, providing solutions close to the true optimal.

1.2 Overview of the Dissertation Research

Heuristic algorithms have been developed by many researchers to solve the knapsack problem and its variants, and these have been tested on different problem sets. Not all algorithms used to solve a given problem to optimality or near-optimality are equivalent in their performance; the differences may lie in computational time, memory requirements, solution quality, or algorithm complexity. Some heuristic algorithms outperform others on certain types of test problems while performing poorly on other test problems. Little research has examined why certain algorithms perform well on certain test problems but not so well on others, and little work has been done to gain knowledge of test problem characteristics and their effects on algorithm performance with the objective of generalizing these performance insights into improved heuristics for practical problems.

The organization of this dissertation is as follows. Chapter 2 briefly introduces the MDKP and discusses various approaches to solving it, providing background on branch-and-bound, dynamic programming, greedy heuristics, transformation heuristics, and metaheuristic approaches. Chapter 3 discusses and provides background on the extensions and variants of the knapsack problem: the MKP, MCKP, and MMKP. This chapter also discusses the formulations and heuristic approaches for practical applications of these KP variants. Chapter 4 discusses the heuristic procedures and an analysis of existing test problems used for the variants of the knapsack problem. Chapter 5 discusses problem generation methods for the MMKP, develops a test problem generation method for the MMKP, and provides an empirical analysis of heuristic solution procedures for the MMKP. Chapter 6 presents new greedy heuristics developed for the MMKP. Chapter 7 presents metaheuristic approaches based on tabu search for solving the MMKP. Chapter 8 summarizes the research study, outlines the contributions, and identifies areas for further research.

1.3 Contributions of the Dissertation Research

This research focuses on the MMKP, a complex variant of the knapsack problem. MMKP problems are complex, and this complexity increases with the number of variables and the type of constraints, even to the point where constraints could make the problem non-linear. For example, in the case of a warehouse storage problem, the number of items to be stored, the space requirement of each item, and any incompatibility between items make the problem complex and possibly non-linear. For a real-time airlift problem, adding constraints that vary by aircraft, such as 3-dimensional aspects, aircraft center of gravity, aircraft floor loading considerations, and hazardous cargo restrictions, can quickly make the problem hard to solve. The increasing complexity of such problems, as applied to the airlift logistics problem, motivates this research to consider the variants of the knapsack problem.

The contributions of this research are fourfold. The first contribution is to show how empirical science can lead to theory (Hooker 1994), as also evidenced in Cho (2005). The research involves collecting, coding, and testing the existing heuristics for the MMKP. This empirical analysis of the current heuristics leads to new problem search space knowledge and the development of new heuristics. Past research has generally failed to study why certain heuristic procedures outperform others as a function of test problem structure. The second contribution is to identify the performance traits of heuristic procedures and develop a more diverse set of MMKP test problems considering important problem characteristics such as the number of variables, number of constraints, constraint correlation structure, and constraint slackness levels. The third contribution is the development of new greedy heuristic approaches for solving the MMKP. This development involves examining the performance of existing heuristics against a new problem test set and using the subsequent analyses to develop new heuristic approaches. The fourth contribution is the development of improved metaheuristic solution procedures for the MMKP, using the improved greedy heuristic approaches to initialize searches or to improve local search neighborhoods.

2. Multi-Dimensional Knapsack Problems

2.1 Introduction

Suppose a subset of n items is to be packed into a knapsack of capacity c. Each item j has a profit p_j and a weight w_j. The 0-1 knapsack problem (KP) maximizes the total profit of the selected items without exceeding the knapsack capacity c. The 0-1 KP is formulated as:

Maximize

    Z = \sum_{j=1}^{n} p_j x_j                              (2.1)

subject to

    \sum_{j=1}^{n} w_j x_j \leq c,                          (2.2)

    x_j \in \{0, 1\},   j = 1, ..., n                       (2.3)

Equation (2.1) gives the total profit of the selected items and Equation (2.2) ensures the knapsack constraint is satisfied. Equation (2.3) is the binary selection requirement on each decision variable x_j: x_j = 1 if item j is selected and x_j = 0 otherwise.
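As a concrete illustration of formulation (2.1)-(2.3), the following is a minimal profit-to-weight greedy sketch for the 0-1 KP; the function name and data are illustrative only, not drawn from the dissertation's heuristics or test sets:

```python
def greedy_kp(profits, weights, capacity):
    """Greedy heuristic for the 0-1 KP in (2.1)-(2.3).

    Items are considered in decreasing profit/weight order; each item is
    taken (x_j = 1) if it still fits. Fast, but not guaranteed optimal.
    """
    order = sorted(range(len(profits)),
                   key=lambda j: profits[j] / weights[j], reverse=True)
    total, remaining, chosen = 0, capacity, []
    for j in order:
        if weights[j] <= remaining:      # item j fits in the residual capacity
            chosen.append(j)
            total += profits[j]
            remaining -= weights[j]
    return total, sorted(chosen)

# Items 0 and 2 fit (weight 5 + 3 = 8 <= c = 8), giving Z = 14
print(greedy_kp([10, 7, 4], [5, 4, 3], 8))
```

For this small instance the greedy choice happens to be optimal; in general the ratio ordering can miss the optimum, which is why Section 2.2's exact methods exist.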

The 0-1 KP may include multiple resource constraints. Such a problem is called the Multidimensional Knapsack Problem (MDKP). A set of n items is to be packed simultaneously into m knapsacks with capacities ci for i = 1, ..., m. Each item j has a profit pj and a weight wij associated with knapsack i. The objective is to maximize the total profit of the selected items. The MDKP is formulated as:

Maximize,

$$Z = \sum_{j=1}^{n} p_j x_j \qquad (2.4)$$

subject to,

$$\sum_{j=1}^{n} w_{ij} x_j \le c_i, \quad i = 1, \ldots, m \qquad (2.5)$$

$$x_j \in \{0, 1\}, \quad j = 1, \ldots, n \qquad (2.6)$$

Equation (2.4) gives the total profit of the selected items and Equation (2.5) ensures each knapsack constraint is satisfied. Equation (2.6) is the binary selection requirement on each decision variable xj: xj = 1 if selected and xj = 0 otherwise. The value $S_i = c_i / \sum_{j=1}^{n} w_{ij}$ is called the slackness ratio of constraint i. Small values of Si mean the constraint is tight; fewer items can fit in the knapsack. Large values of Si mean the constraint is loose; more items can fit in the knapsack.
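The slackness ratio is easy to compute directly from the constraint data. Below is a minimal Python sketch; the two-constraint instance, and the function name, are invented purely for illustration.

```python
def slackness_ratios(weights, capacities):
    """Return S_i = c_i / sum_j w_ij for each constraint i.
    weights[i][j] holds w_ij; capacities[i] holds c_i."""
    return [c / sum(row) for row, c in zip(weights, capacities)]

# Two constraints over three items: each knapsack offers half of the
# total weight the items would demand, so both ratios are 0.5.
w = [[3, 4, 5],
     [2, 2, 2]]
c = [6, 3]
print(slackness_ratios(w, c))  # [0.5, 0.5]
```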

Some real-world applications of the MDKP are the cutting stock, capital budgeting, project selection, cargo loading, and resource allocation problems. Since the MDKP is NP-hard (Frieze and Clarke 1984), computation time increases rapidly as problem size increases. As a result, a wide variety of heuristics have been applied to the MDKP. The remainder of this chapter briefly explains some of these solution approaches: Branch-and-Bound, Dynamic Programming, Greedy Heuristics, Transformation Heuristics, and Metaheuristic approaches.

2.2 Branch-and-Bound Approach

The branch-and-bound (B & B) approach is an exact and potentially exhaustive enumerative approach used to determine an optimal solution for integer programming problems. The B & B approach is based on a complete enumeration of the solution space, using specifically generated constraints added to a problem to prune off regions of the solution space. In many cases, only a small subset of the feasible solutions is explicitly enumerated. Any solution subspace determined not to contain the optimal solution is excluded, and its feasible solutions are noted as enumerated implicitly. We say the excluded subspace is pruned.

The B & B algorithm is based on two basic principles: branching and bounding. Consider any problem with a finite solution space. In the branching part, a given subset of the solution space is decomposed into smaller subsets by including additional constraints. The union of these subsets must result in the complete solution space. The process of decomposition is repeated until, in the limit, each solution subset contains a single feasible solution. The global optimal solution to the given problem is the best of all considered solutions. In a worst-case instance, every potential solution is considered. This process is represented by an enumeration tree. The leaves of the tree correspond to the points examined in a complete enumeration.

Complete enumeration is not practical for problems where the number of variables in an integer program exceeds 20 or 30. Bounding plays an important role by intelligently avoiding complete enumeration. This part of the B & B algorithm derives lower and upper bounds for a given subset of the solution space. The lower bound can be chosen as the best integer solution encountered so far in the search process. If no integer solution has yet been found, the objective value of a heuristic solution can be used as a lower bound. An upper bound is used to prune parts of the search space. Pruning by optimality, pruning by bound, and pruning by infeasibility are the three ways to prune the tree and thus enumerate a large number of solutions implicitly (Wolsey 1998).

In the following maximization example, suppose N = (U, L) = (UpperBound, LowerBound) is associated with a node in a B & B enumeration tree.

• Pruning by optimality: Suppose node N = (27, 13) is decomposed into two nodes N1 = (20, 20) and N2 = (25, 15). The values in parentheses indicate upper and lower bounds on the nodes. Since the upper and lower bound values on N1 are equal, there is no further need to explore N1. Hence, branch N1 of the enumeration tree can be pruned by optimality; the solution at N1 is the best for that subspace, including any subsequent subspaces.

• Pruning by bound: Suppose node N = (27, 13) is decomposed into two nodes N1 = (20, 18) and N2 = (26, 21). The optimal value in N2 is at least 21, while the upper bound on N1 is 20. No optimal solution can lie in N1 because no solution in N1 can exceed the value already guaranteed in N2. Thus, branch N1 of the enumeration tree is pruned by bound.

• Pruning by infeasibility: Suppose a node N with upper bound 40 is decomposed into two nodes N1 = (24, 13) and N2. However, N2 contains no feasible solution in its subspace. The N2 branch can be pruned by infeasibility.
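Two of these pruning rules can be seen working together in a compact depth-first B & B for the 0-1 KP. This is an illustrative sketch only, not one of the published algorithms discussed in this chapter: it assumes items are pre-sorted by non-increasing pj/wj so the greedy fractional fill is a valid LP upper bound, it prunes by bound against the incumbent, and infeasible take-branches are never generated (pruning by infeasibility); pruning by optimality is omitted for brevity.

```python
def knapsack_bb(p, w, c):
    """Depth-first branch and bound for the 0-1 KP (illustrative sketch).
    Assumes items are sorted by non-increasing p[j]/w[j]."""
    n = len(p)
    best = 0

    def bound(j, profit, cap):
        # LP-relaxation upper bound: fill greedily from item j onward,
        # then take a fraction of the first item that no longer fits.
        for k in range(j, n):
            if w[k] <= cap:
                cap -= w[k]
                profit += p[k]
            else:
                return profit + p[k] * cap / w[k]
        return profit

    def dfs(j, profit, cap):
        nonlocal best
        if j == n:
            best = max(best, profit)
            return
        if bound(j, profit, cap) <= best:
            return                      # pruning by bound
        if w[j] <= cap:                 # infeasible branches never created
            dfs(j + 1, profit + p[j], cap - w[j])
        dfs(j + 1, profit, cap)

    dfs(0, 0, c)
    return best
```

On the classic instance p = [60, 100, 120], w = [10, 20, 30], c = 50 the sketch returns 220, matching complete enumeration.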

The B & B algorithm is a recursive procedure using tree-search facilities. This recursive structure employs different search techniques such as depth-first search or best-first search. The potentially exponential growth in computations as problem size increases makes the B & B approach less practical when the number of feasible solutions examined is very large, unless early termination heuristics are used.

Pierce (1968) used combinatorial programming based on two problem-solving procedures: a controlled enumerative procedure for considering all potential solutions, and the elimination from explicit consideration of particular potential solutions due to dominance, bounding, and feasibility considerations. Pierce observed that the search is directed first to the discovery of a feasible solution and then to successively better feasible solutions until an optimal solution is determined.

Thesen (1975) developed a B & B algorithm for the MDKP. He defined a recursive tree structure for B & B with the objective of reducing computer storage requirements. Thesen's algorithm was tested against algorithms by Geoffrion (1967) and Zionts (1972). The test results indicated that Thesen's algorithm took one tenth of the solution time of the other two algorithms. Results also indicated solution times increased for all three algorithms with tighter problem constraints (lower Si values). The magnitude and rate of increase in solution time with the number of variables was less for Thesen's algorithm than for the other two algorithms. Thesen's algorithm showed improved execution times for larger test problems but could not solve problems with more than 50 variables.

Shih (1979) proposed a modified B & B approach to solve the MDKP. A bound was found by solving the LP relaxation of the single-constraint KP obtained from each of the m constraints and then using the minimum of the resulting m objective function values as the bound for each node. Shih tested his algorithm on randomly generated problems with 5 constraints and 30-90 variables. Shih's algorithm required 13 minutes of solution time versus 380 minutes for an approach based on Balas' method.

Balas and Zemel (1980) developed an algorithm for the KP based on three concepts. The first concept focuses on the core problem, which is based on knowledge of an optimal solution. Suppose the items are ordered in decreasing order of their cost/weight (pj/wj) ratios and let an optimal solution vector be given by $x^*$. Define $a := \min\{j \mid x^*_j = 0\}$ and $b := \max\{j \mid x^*_j = 1\}$ in an optimal solution. If $\bar{p} = \sum_{j=1}^{a-1} p_j$ and $\bar{w} = \sum_{j=1}^{a-1} w_j$, then the core is represented by the items in the interval $C = \{a, \ldots, b\}$ and the core problem is defined as

Maximize,

$$Z = \sum_{j \in C} p_j x_j + \bar{p} \qquad (2.7)$$

subject to,

$$\sum_{j \in C} w_j x_j \le c - \bar{w}, \qquad (2.8)$$

$$x_j \in \{0, 1\}, \quad j \in C \qquad (2.9)$$

The core problem is the subproblem in those variables whose cost/weight ratio falls between the maximum and minimum cj/wj ratios for which xj takes a different value in an optimal solution to the KP than in an optimal solution to the LP relaxation (LKP). The size of the core is a small fraction of the entire problem size and does not increase rapidly with problem size (Pirkul 1987). Identifying the core exactly requires solving the KP itself, so some authors use an approximation obtained by solving the LKP (Pisinger 1999a, Pirkul 1987). The second concept is a binary-search type method to solve the LKP without sorting the variables. The third concept is a heuristic to solve the KP to optimality; this heuristic uses implicit enumeration to find the optimal solution of the approximated subproblem. Other research, discussed later, refers back to this core problem concept.

Martello and Toth (2003) used greedy heuristic algorithms based on the solutions to the Lagrangian and surrogate relaxations, and a reduction approach to determine the optimal value of a high percentage of variables. These results were then used in a B & B algorithm to find the exact solution of 2KP.

2.3 Dynamic Programming

Dynamic programming is an exact and exhaustive enumerative approach that can find the optimal solution of the MDKP by solving a small subproblem and then extending this solution iteratively until the complete problem is solved. Assume the optimal solution of the knapsack problem has been computed for a subset of the items and all capacities up to c. Add an item to this subset and check whether the optimal solution changes for the enlarged subset. This check uses the solutions of the knapsack problem with smaller capacities to compute the possible change of the optimal solutions for all capacities. This procedure of adding an item to the subset is repeated until all items have been considered and the overall solution is obtained.

Kellerer et al. (2004) explain the concept of dynamic programming for KPs as follows. Consider the subproblem of a KP with item set {1, ..., j} and knapsack capacity d ≤ c. The knapsack subproblem for j = 1, ..., n and d = 0, ..., c is defined as:

Maximize,

$$Z_j(d) = \sum_{l=1}^{j} p_l x_l \qquad (2.10)$$

subject to,

$$\sum_{l=1}^{j} w_l x_l \le d, \qquad (2.11)$$

$$x_l \in \{0, 1\}, \quad l = 1, \ldots, j \qquad (2.12)$$

with optimal solution value Zj(d) and binary decision variables xl such that xl = 1 if selected and xl = 0 otherwise. If the optimal solution values Zj−1(d) are known for all capacities d = 0, ..., c, then consider the additional item j added to the subset of items. The corresponding optimal solution value Zj(d) can be found using the recursive formula:

$$Z_j(d) = \begin{cases} Z_{j-1}(d) & \text{if } d < w_j, \\ \max\{Z_{j-1}(d),\; Z_{j-1}(d - w_j) + p_j\} & \text{if } d \ge w_j \end{cases} \qquad (2.13)$$
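Recursion (2.13) can be tabulated directly. The sketch below is the standard textbook implementation rather than any specific algorithm surveyed here: it keeps a single array of Z values indexed by capacity d and sweeps d downward so each item is packed at most once.

```python
def knapsack_dp(p, w, c):
    """Tabulate recursion (2.13); returns the optimal value Z_n(c)."""
    Z = [0] * (c + 1)              # Z_0(d) = 0 for every capacity d
    for pj, wj in zip(p, w):
        for d in range(c, wj - 1, -1):
            # For d >= wj: Z_j(d) = max(Z_{j-1}(d), Z_{j-1}(d - wj) + pj);
            # for d < wj the entry is simply carried over unchanged.
            Z[d] = max(Z[d], Z[d - wj] + pj)
    return Z[c]
```

On the same instance used above (p = [60, 100, 120], w = [10, 20, 30], c = 50) the table yields 220.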

Weingartner and Ness (1967) developed algorithms using the dynamic programming framework to solve the MDKP. They introduced a dual approach in dynamic programming which starts with all items included in the knapsack. Their approach systematically removes items from the knapsack until feasibility is obtained. The size of the problem was reduced by heuristically determining lower and upper bounds. The lower bound elimination method eliminates from further consideration any solution strategy, at any stage, that would yield a solution worse than the best solution already achieved even if augmented with all the items remaining to be considered. This method requires finding the best feasible solution. The upper bound is calculated by a method.

Soyster et al. (1978) developed an algorithm with an implicit enumeration strategy applied to MDKP subproblems generated by the LP relaxation of the MDKP. They solved the LP relaxation of the MDKP to optimality and partitioned the optimal solution into fractional and integer components. They applied implicit enumeration to solve the subproblems with fractional-valued variables obtained in the linear program (LP). At each subsequent iteration, the size of the subproblem was increased by adding a variable. By applying implicit enumeration to subsequent subproblems, they achieved a non-increasing sequence of optimal LP values which served as upper bounds for the IP. The algorithm terminated when the difference between the current solution and the current upper bound was less than one. Soyster et al.'s (1978) algorithm follows the dynamic programming approach of adding a variable to the subproblem and solving until the termination criterion is met.

Pisinger (1997) presented a minimal algorithm for the knapsack problem where the enumerated core size is minimal. The minimal algorithm was based on a breadth-first, dynamic programming approach for the enumeration as the core problem is gradually extended. The enumerated core problem is the smallest possible symmetrical core problem which is enumeratively solvable. Pisinger uses stronger upper bounds for reducing the items that are not present in the core. This algorithm gave stable solution times in computational experiments conducted on randomly generated uncorrelated, weakly correlated, strongly correlated, and subset-sum data instances (these types of test problems are discussed in later chapters).

Martello et al. (1999) developed an algorithm based on the dynamic programming approach of Pisinger (1997), using the cardinality bounds approach of Martello and Toth (1997) to generate valid inequalities. They used surrogate relaxation to solve the problem to optimality. The new algorithms reduced the computation time and memory space requirements compared to previous efforts.

Bertsimas and Demir (2002) presented the Approximate Dynamic Programming (ADP) approach for the MDKP. They approximate the optimal value function using a parametric method (ADP-P), a nonparametric method (ADP-N), and a base-heuristic (ADP-BH). In the dynamic programming formulation, the optimal value function is obtained by taking the maximum over the preceding objective values of the subproblems. ADP-BH estimates the optimal value function by the solution value of a suboptimal methodology, the base-heuristic, applied to the corresponding subproblem. The ADP-P and ADP-N algorithms are based on results from the probabilistic analysis of the MDKP in Frieze and Clarke (1984). Computational results showed that the ADP approach using the new base-heuristic was fast and accurate compared to the ADP approaches based on parametric and nonparametric methods.

2.4 Greedy Heuristics

Heuristic algorithms are designed to find good, but not necessarily optimal, solutions quickly. A very simple greedy approach for the knapsack problem considers the profit to weight ratio (pj/wj) of every item and tries to select items with the highest ratios for the knapsack. This approach seeks to generate the highest profit while consuming little knapsack capacity.

Consider the KP of Equations (2.1) to (2.3). The greedy heuristic arranges the items in decreasing order of profit to weight ratios (p1/w1 ≥ p2/w2 ≥ ... ≥ pn/wn). A greedy algorithm begins with an empty knapsack, considers the items in this order, and adds each item under consideration to the knapsack as long as the capacity constraint of Equation (2.2) is not violated.
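A minimal sketch of this greedy procedure is shown below (the function name is invented). One detail is implementation-dependent: the variant here skips an item that no longer fits and keeps scanning, rather than stopping at the first violated insertion.

```python
def greedy_kp(p, w, c):
    """Greedy heuristic for the 0-1 KP: consider items in non-increasing
    profit/weight order and add each one that still fits.
    Returns (total profit, list of selected item indices)."""
    order = sorted(range(len(p)), key=lambda j: p[j] / w[j], reverse=True)
    profit, chosen = 0, []
    for j in order:
        if w[j] <= c:       # item j still fits in the remaining capacity
            c -= w[j]
            profit += p[j]
            chosen.append(j)
    return profit, chosen
```

On p = [60, 100, 120], w = [10, 20, 30], c = 50 the greedy heuristic selects items 0 and 1 for a profit of 160, illustrating that greedy solutions can fall short of the optimum (220 on this instance).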

Cabot (1970) used a combination of enumeration and elimination techniques for solving linear equalities. The enumeration method is applied to the MDKP by treating each constraint as one would in a one-dimensional knapsack problem and then generating bounds by looking at the intersection of the bounds generated by each set of constraints. This technique was tested on 50 randomly generated problems with ten variables and five constraints to determine the effect the order of the variables had on solution time. The results were quite poor in terms of computation time, as the bounds generated were weak.

Senju and Toyoda (1968) present an approach for finding approximate solutions to the MDKP using dual methods. Starting with all items selected, they used a Dual Effective Gradient Method to evaluate the utility of every element and deselect items until feasibility was obtained. The Dual Effective Gradient Method thus starts with an infeasible solution and proceeds toward a feasible solution. Toyoda (1975) introduced the Primal Effective Gradient Method to obtain approximate solutions to MDKP problems. The Primal Effective Gradient Method starts with a feasible solution (nothing selected) and then attempts to improve upon this solution. It assigns a measure of preferability to each decision variable, corresponding to the profit per unit of aggregate necessary resource and known as the pseudo-utility ratio. The pseudo-utility ratio is the effective gradient, given as pj/(u·wj), where u is a multiplier vector that weighs the resource cost per constraint against the level of constraint utilization. The most preferred variables are set from zero to one. This method gives very good approximate solutions for MDKP problems.
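The following sketch captures the spirit of a primal effective gradient method: start from the empty solution and repeatedly add the feasible item with the best pseudo-utility ratio pj/(u·wj). The penalty vector u used here (the current fractional utilization of each constraint, floored at a small positive value) is a deliberate simplification for illustration, not Toyoda's exact multiplier update.

```python
def primal_gradient_mdkp(p, w, c):
    """Primal effective-gradient sketch for the MDKP.
    w[i][j] = w_ij, c[i] = c_i. Returns (selection vector, total profit)."""
    m, n = len(w), len(p)
    used = [0.0] * m          # resource consumed per constraint
    x = [0] * n
    while True:
        # Penalty vector: fraction of each constraint already consumed
        # (small floor so the first pick reduces to profit / total weight).
        u = [max(used[i] / c[i], 1e-9) for i in range(m)]
        best_j, best_ratio = None, 0.0
        for j in range(n):
            if x[j]:
                continue
            if any(used[i] + w[i][j] > c[i] for i in range(m)):
                continue                     # item j no longer fits
            denom = sum(u[i] * w[i][j] for i in range(m))
            ratio = p[j] / denom if denom > 0 else float("inf")
            if best_j is None or ratio > best_ratio:
                best_j, best_ratio = j, ratio
        if best_j is None:
            break                            # no remaining item fits
        x[best_j] = 1
        for i in range(m):
            used[i] += w[i][best_j]
    return x, sum(p[j] for j in range(n) if x[j])
```

On the single-constraint instance p = [10, 7, 3], w = [[4, 3, 2]], c = [6], the sketch selects items 0 and 2 for a profit of 13.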

Loulou and Michaelides (1979) developed four different heuristic methods by varying an index of aggregate consumption of resources by each xj, known as the penalty vector, which is used for calculating the pseudo-utility ratio. These heuristic methods were used to obtain approximate solutions to the MDKP. They tested their algorithms on problems of various sizes. Their analyses showed that their methods yielded better results than Toyoda's approach (upon which their four methods were based). The average relative errors were less than 1% of the optimal solution for each of the test problems used.

Balas and Martin (1980) devised the pivot and complement method to solve 0-1 problems. The procedure first solves the LP relaxation and then performs a sequence of pivots aimed at exchanging non-basic slack variables for basic 0-1 variables at minimal cost. Finally, a local search based on complementing certain sets of 0-1 variables tries to further improve the solutions. Experimental results on 92 test problems showed that the time taken to solve the linear program was very high. Optimal results were obtained for 49 of the problems.

Lee and Guignard (1988) developed and tested a heuristic for the MDKP. Their heuristic involves two phases. Phase I finds a feasible solution by applying a modified Toyoda (1975) heuristic and reduces the problem size by fixing variables using LP relaxation. The modified Toyoda method selects as many variables as possible each iteration until the solution obtained is feasible (versus selecting only one variable each iteration as in the original Toyoda heuristic). Phase II improves the solution obtained in Phase I by complementing certain sets of variables using the Balas and Martin (1980) method. In comparisons with Toyoda (1975) and Magazine and Oguz (1984), Lee and Guignard's algorithm yielded better solutions in less computation time, while the Balas and Martin (1980) approach showed better solution quality than the Lee and Guignard (1988) approach.

Vasquez and Hao (2001) presented a hybrid approach for the MDKP: a combination of linear programming and tabu search. The linear program is used to obtain "promising" continuous optima and then tabu search is used to carefully and efficiently explore binary areas close to these continuous optima. They considered several additional constraints, such as the hyperplane constraint, geometrical constraint, and qualitative constraint, to solve MDKP problems. This algorithm showed significant improvements when tested on more than 100 benchmark instances.

Cho et al. (2003b) conducted an empirical analysis of three legacy heuristic methods, Toyoda (1975), Senju and Toyoda (1968), and Loulou and Michaelides (1979), using Hill and Reilly's two-constraint MDKP test set (Hill and Reilly 2000), a robust problem set in terms of varied constraint slackness and correlation structure. The empirical analysis led to finding a "best performer" among the heuristics for particular problem types. A new type-based heuristic was developed based on problem pre-processing and selecting a preferred heuristic, the first approach of its kind for the MDKP. Computational results indicated the type-based heuristic showed slightly better and more consistent performance than the three heuristic methods across the varied test problems.

Cho et al. (2003a) developed a heuristic using Lagrange multipliers to transform a MDKP into an unconstrained problem, categorize variables as selected, unselected, or uncertain, and then solve the remaining core problem using a greedy heuristic, specifically the Toyoda (1975) heuristic. The heuristic accounts for important problem characteristics such as constraint slackness and correlation structure. The algorithm was tested on Hill and Reilly's test set (Hill and Reilly 2000). The results show the new heuristic has better performance at all constraint slackness levels when compared to a suite of competitor heuristics.

Cho et al. (2006) developed a new primal effective gradient heuristic. The new gradient function was obtained by combining the characteristics of the heuristics of Senju and Toyoda (1968) and Loulou and Michaelides (1979). Subsequent testing using Hill and Reilly's test set (Hill and Reilly 2000) demonstrated improved performance 69% of the time compared to the results obtained by the legacy approaches. Further, the new approach dominated the legacy heuristic approaches when applied to modified versions of benchmark test problems and to problems specifically created to provide harder problem instances while mimicking the size of current benchmark problems.

2.5 Transformation Heuristics

The KP is a MDKP with m = 1. The KP is well studied and more exact algorithms and heuristics exist for the KP than for the MDKP (Cho 2005). A transformation heuristic with relaxation techniques, such as Fisher's Lagrangian relaxation, Glover's surrogate relaxation, or composite relaxation, can reduce the MDKP to a KP approximation of the MDKP. Solutions to the KP approximation give heuristic solutions to the original problem.

Consider the MDKP formulation, Equations (2.4) to (2.6), with m1 additional constraints:

$$\sum_{j=1}^{n} w_{kj} x_j \le c_k, \quad k = m+1, \ldots, m+m_1 \qquad (2.14)$$

Define L(λ) as the Lagrangian relaxation of the MDKP of Equations (2.4) through (2.6) and (2.14). Let λ = (λ1, λ2, ..., λm) be a vector of non-negative Lagrangian multipliers. A Lagrangian formulation moves difficult constraints into the objective function. We assume without loss of generality that constraints (2.5) are difficult. The Lagrangian relaxation of the MDKP, L(λ), becomes:

Maximize,

$$Z(L(\lambda)) = \sum_{j=1}^{n} p_j x_j - \sum_{i=1}^{m} \lambda_i \left( \sum_{j=1}^{n} w_{ij} x_j - c_i \right) \qquad (2.15)$$

subject to,

$$\sum_{j=1}^{n} w_{kj} x_j \le c_k, \quad k = m+1, \ldots, m+m_1 \qquad (2.16)$$

$$x_j \in \{0, 1\}, \quad j = 1, \ldots, n \qquad (2.17)$$

$$\lambda_i \ge 0, \quad i = 1, \ldots, m \qquad (2.18)$$

Note constraint (2.5) appears in the objective function as the penalty term:

$$\sum_{i=1}^{m} \lambda_i \left( \sum_{j=1}^{n} w_{ij} x_j - c_i \right) \qquad (2.19)$$

Violation of constraint (2.5) results in the penalty term (2.19) reducing the objective function. The objective function value of the original MDKP is never larger than the objective function value of the relaxed problem (Fisher 1981). Thus, for any vector of non-negative Lagrangian multipliers, the optimal solution to the relaxed problem is an upper bound on the original problem (Fisher 1981).
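When all m knapsack constraints are relaxed and no side constraints (2.14) remain, problem (2.15) decomposes by variable: each xj is set to one exactly when its reduced profit pj − Σi λi wij is positive. A minimal sketch of the resulting upper bound (function name invented):

```python
def lagrangian_bound(p, w, c, lam):
    """Upper bound Z(L(lambda)) from (2.15) with every knapsack constraint
    relaxed. w[i][j] = w_ij, c[i] = c_i, lam[i] = lambda_i >= 0."""
    m, n = len(w), len(p)
    # Reduced profit of each item: p_j - sum_i lambda_i * w_ij.
    reduced = [p[j] - sum(lam[i] * w[i][j] for i in range(m)) for j in range(n)]
    # Keep only positive reduced profits; add the constant sum_i lambda_i c_i.
    return sum(r for r in reduced if r > 0) + sum(lam[i] * c[i] for i in range(m))
```

With p = [10, 7], a single constraint with weights [4, 3], c = 6, and λ = (1), the reduced profits are 6 and 4, giving the bound 16; this indeed exceeds the optimal value 10 of that tiny instance.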

A surrogate relaxation replaces some set of constraints with a linear combination of those constraints. Glover (1968) discusses surrogate bounds and Glover (1977) uses a surrogate-based heuristic to solve the MDKP.

Let S(µ) denote the surrogate relaxation of the MDKP represented by Equations (2.4) through (2.6). Assume the m capacity constraints are difficult to solve. These constraints can be merged into a single constraint instead of being removed from the problem. Let µ = (µ1, µ2, ..., µm) represent a vector of non-negative multipliers. The surrogate relaxed problem is given as:

Maximize,

$$Z(S(\mu)) = \sum_{j=1}^{n} p_j x_j \qquad (2.20)$$

subject to,

$$\sum_{i=1}^{m} \mu_i \left( \sum_{j=1}^{n} w_{ij} x_j \right) \le \sum_{i=1}^{m} \mu_i c_i, \qquad (2.21)$$

$$x_j \in \{0, 1\}, \quad j = 1, \ldots, n \qquad (2.22)$$

$$\mu_i \ge 0, \quad i = 1, \ldots, m \qquad (2.23)$$

All feasible solutions to the original MDKP are also feasible for the surrogate relaxed problem. For any non-negative multipliers µ, the optimal solution of the surrogate relaxed problem is an upper bound on the original problem (Glover 1977).
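A sketch of the surrogate bound: merge the m constraints using the multipliers µ into one knapsack constraint, then bound the resulting 0-1 KP by its LP relaxation (greedy fill plus one fractional item). Using the LP bound for the merged KP is an added simplification here, and the sketch assumes every merged weight is positive.

```python
def surrogate_bound(p, w, c, mu):
    """Upper bound from the surrogate relaxation (2.20)-(2.23).
    w[i][j] = w_ij, c[i] = c_i, mu[i] = mu_i >= 0."""
    m, n = len(w), len(p)
    # Merged weights sum_i mu_i w_ij and merged capacity sum_i mu_i c_i.
    sw = [sum(mu[i] * w[i][j] for i in range(m)) for j in range(n)]
    cap = sum(mu[i] * c[i] for i in range(m))
    # LP bound of the merged 0-1 KP: greedy fill by profit/weight ratio,
    # then a fraction of the first item that no longer fits.
    bound = 0.0
    for j in sorted(range(n), key=lambda j: p[j] / sw[j], reverse=True):
        if sw[j] <= cap:
            cap -= sw[j]
            bound += p[j]
        else:
            bound += p[j] * cap / sw[j]
            break
    return bound
```

With µ = (1) and a single constraint, the bound reduces to the usual LP bound of the KP, e.g. 240 on the instance p = [60, 100, 120], w = [10, 20, 30], c = 50.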

Composite relaxation combines Lagrangian relaxation and surrogate relaxation. The goal is to make computations easier while the Lagrangian relaxation penalty keeps the solution close to feasibility (Kellerer et al. 2004). Consider the MDKP of Equations (2.4) through (2.6). Let Co(λ, µ) denote the composite relaxation, formulated as:

Maximize,

$$Z(Co(\lambda, \mu)) = \sum_{j=1}^{n} \left( p_j - \sum_{i=1}^{m} \lambda_i w_{ij} \right) x_j + \sum_{i=1}^{m} \lambda_i c_i \qquad (2.24)$$

subject to,

$$\sum_{i=1}^{m} \mu_i \left( \sum_{j=1}^{n} w_{ij} x_j \right) \le \sum_{i=1}^{m} \mu_i c_i, \qquad (2.25)$$

$$x_j \in \{0, 1\}, \quad j = 1, \ldots, n \qquad (2.26)$$

Gavish and Pirkul (1985) tested and compared Lagrangian, surrogate, and composite relaxation bounds for the MDKP. Computational tests indicated that composite relaxation provided the best bounds. The authors then suggest rules for reducing the problem size and solve the reduced problem with a modified B & B approach. They compared their algorithm's performance to Shih (1979) and Loulou and Michaelides (1979) based on computing times and solution quality, finding that their algorithm improved solution quality with shorter computing times.

Osorio and Hernandez (2004) generated new MDKP constraints by using constraint pairing and an initial integer solution to combine the dual surrogate constraint with the objective function. These constraints were used to generate different types of logic cuts which were included in the B & B framework to solve the problem to optimality. The approaches were tested on smaller and larger sets of MDKP test problem instances developed by Weingartner and Ness (1967), Senju and Toyoda (1968), Shih (1979), and Chu and Beasley (1998). The comparative analysis indicated that the logic cuts in the B & B model improved solution performance by reducing the number of nodes in the search tree.

2.6 Metaheuristic Approaches

Metaheuristics are approximate algorithms that combine basic heuristic methods into higher-level frameworks to efficiently and effectively explore the search space (Blum and Roli 2003). Metaheuristics, a term coined by Glover and Greenberg (1986), derives from the Greek words heuristics, meaning "to find," and meta, meaning "beyond." Various heuristic approaches, mostly based on analogies with natural phenomena, such as tabu search, genetic algorithms, and simulated annealing, were developed in the late 1970s and have since been used to tackle many difficult combinatorial optimization problems. A good metaheuristic implementation is likely to solve a combinatorial optimization problem in reasonable computation time (Gendreau and Potvin 2005), and empirical evidence suggests the solutions are generally of high quality.

2.6.1 Tabu Search (TS)

Glover and Greenberg (1986) proposed tabu search (TS) as an extension of local search (LS) that allows LS to overcome local optima. TS is a metaheuristic procedure which uses a local search subroutine to find local optima. The basic principle of TS is to guide LS beyond a local optimum by allowing non-improving moves, while explicitly preventing the LS from cycling back to previously visited solutions by using memory structures known as tabu lists. A basic TS combines LS with short-term memory. TS has been applied to a wide range of optimization problems, including classes of integer problems in planning, design, and scheduling.

The search space and the neighborhood structure are two basic elements of a TS heuristic. The search space is the space of all possible solutions that can be visited during the search. TS relies on tabu structures to prevent LS cycling among solutions when moving away from local optima regions via non-improving LS moves. TS prevents the search from retracing itself by tracking attributes of completed moves. If a potential move carries recently used attributes, the move is declared tabu. Tabu moves force the search away from previously visited portions of the search space, providing more extensive exploration of the search space. Tabu lists can be fairly simple or complex, and may use hashing techniques to maintain a complete search history of all solutions visited.
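A minimal TS sketch for the 0-1 KP makes these ideas concrete: the neighborhood is all single-bit flips, the tabu list records the iteration until which each item may not be flipped again, and an aspiration criterion admits a tabu move that produces a new best solution. The parameter values and the restriction to feasible moves are invented simplifications for illustration.

```python
import random

def tabu_search_kp(p, w, c, iters=100, tenure=3, seed=0):
    """Minimal tabu search sketch for the 0-1 KP with flip moves.
    Returns (best profit found, best solution vector)."""
    rng = random.Random(seed)
    n = len(p)
    x = [0] * n                       # current solution: start empty
    best, best_val = x[:], 0
    tabu = {}                         # item index -> expiry iteration
    for it in range(iters):
        cand = None
        for j in rng.sample(range(n), n):      # scan flip moves
            y = x[:]
            y[j] ^= 1
            if sum(w[k] for k in range(n) if y[k]) > c:
                continue                       # infeasible move: skip
            v = sum(p[k] for k in range(n) if y[k])
            if tabu.get(j, -1) >= it and v <= best_val:
                continue                       # tabu and no aspiration
            if cand is None or v > cand[1]:
                cand = (j, v, y)               # best admissible move so far
        if cand is None:
            break                              # no admissible move left
        j, v, x = cand                         # accept move (may worsen)
        tabu[j] = it + tenure                  # forbid re-flipping item j
        if v > best_val:
            best, best_val = x[:], v
    return best_val, best
```

On p = [60, 100, 120], w = [10, 20, 30], c = 50 this sketch reaches the optimum 220: after adding item 2 (profit 120), the tabu status of item 2 prevents immediately undoing the move and the search proceeds to add item 1.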

The TS search process is controlled by the length of the tabu list, known as the tabu tenure. Smaller tabu tenures make the search concentrate on smaller areas of the search space while larger tabu tenures force the search to explore larger areas. Robust algorithms can vary the tabu tenure during the search process. Tabu tenures can be changed dynamically: for greater diversification, the tabu tenure is increased when solutions repeat; for intensification, the tabu tenure is decreased when solutions stop improving (Blum and Roli 2003).

Although a tabu structure can be a powerful search mechanism, it can prohibit attractive move attributes, which may in turn lead to stagnation in the search process. Thus, aspiration criteria allow the TS to override tabu status. The simplest and most commonly used aspiration criterion is to allow a tabu move if it results in a new best known solution (Glover and Kochenberger 2002).

Tabu list structures are called short-term memory structures but are often augmented with other memory structures. Various data pertaining to the search are compiled to create memory. These data are then used to make strategic changes to the TS process.

Intermediate-term memory is used to intensify the tabu search in promising areas. The most common form of intermediate-term memory is the elite list, simply a list of good solutions compiled during the TS process. These may be solutions evaluated during the search but never selected as TS moves. Intermediate-term functions involve restarting the TS from solutions stored in the elite list.

Recency, frequency, quality, and influence data are examples of long-term memories. Recency-based memory records the most recent iteration at which each solution was visited. Frequency-based memory records the number of times a solution has been visited. These data provide information about whether the search is confined to a specific region. Quality-based memory gathers and extracts information from the search history to identify good solution components. Influence-based memory holds information about the choices made during the search process and indicates which choices have been critical (Blum and Roli 2003).

While the short-term and intermediate-term memory components of TS intensify the search, long-term memory structures are used to diversify the search into new areas of the search space. A key element of an effective TS approach is the implementation of rules or triggers that exploit memory to systematically intensify and diversify the search.

Løkketangen and Glover (1998) developed a tabu search approach for solving zero-one mixed integer programming problems that exploits the extreme point property of zero-one solutions. The first part of their research explores a basic "first level" TS. The second part extends the first-level TS mechanisms with diversification schemes for driving the search process into more fruitful regions of the search space. The study involved probabilistic moves and examined the search process to identify new decision rules for diversification that produce good results. The experimental tests were carried out on 57 MDKP benchmark problems; optimal results were found for 54 of the problems.

Hanafi and Freville (1998) devised a TS based on strategic oscillation (SO) and surrogate con- straint information that provides a balance between intensification and diversification strategies.

Strategic oscillation is a TS concept involving moves across boundaries that normally do not admit moves, for example crossing the feasible-infeasible boundary. The SO critical solutions considered by Hanafi and Freville are solutions lying on or anywhere near the boundary of the feasible domain.

These critical solutions form what they call the promising zone. Information deduced from the surrogate constraints and from search memory controls the crossing and intensive exploration of this promising zone. Forward paths from the feasible domain into the infeasible region, and return paths, achieve this crossing exploration. These paths are built by performing constructive and destructive phases on the solution. A constructive phase (ADD) adds items to the knapsacks, driving the solution trajectory into the infeasible region. A destructive phase (DROP) removes items until solution feasibility is restored. An intensification strategy is activated each time a promising zone is reached. The experimental tests used two sets of MDKP instances: 54 instances from Fréville and Plateau (1986) and 24 instances from Glover and Kochenberger (2002). Optimal solutions were obtained for each of the instances, and the previously best known solutions were improved for five of the last seven instances.

Aboudi and Jornsten (1994) applied tabu search to the MDKP using the Pivot and Complement heuristic of Balas and Martin (1980) as implemented in ZOOM/XMP. The approach was tested on 57 standard MDKP problems; optimal solutions were found for 49 of the 57 problems (86%).

2.6.2 Genetic Algorithm (GA)

Genetic algorithms are search procedures inspired by biology and the workings of natural selection. Conceived by John Holland and his colleagues, GAs have been applied in many diverse applications (Chu and Beasley 1998).

The GA name “originates from the analogy between the representation of a complex structure by means of a vector of components, and the idea, familiar to biologists, of the genetic structure of a chromosome” (Reeves 1993). In biology, natural selection reinforces the characteristics most amenable to a species’ survival. Genes within the chromosomes of the stronger members, corresponding to the more desirable characteristics, pass to subsequent generations through a reproduction process.

In a GA generation, genetic operators are applied to selected individuals (solutions) from the current population (of solutions) to create a new population. Three main genetic operators are generally employed: reproduction, crossover, and mutation. Varying the probabilities of applying these operators can control the speed of convergence of the GA. The choice of crossover and mutation operators is most important to the performance of the GA; hence, these operators must be carefully designed.

(i) Reproduction: Reproduction creates a new population by copying individual members, or portions of members, from the present population as a function of their fitness function values (Goldberg 1989). The fitness function may be some measure of profit, utility, or goodness to be maximized. Copying members according to their fitness values means that solutions with higher fitness have a higher probability of contributing one or more offspring to the next generation, giving good solutions a chance of survival and an element of immortality within the population.

(ii) Crossover: New individuals are created as offspring of two parents via the crossover operation. One or more crossover points are selected at random within the chromosome of each parent. The parts delimited by the crossover points are then interchanged between the parents; the individuals formed by this interchange are the offspring (Zalzala and Fleming 1997). Burst crossover applies crossover at every bit position, possibly producing many crossover points depending on the selected probability (Hoff et al. 1996). Common types of crossover are depicted in Figure 2.1 (Zalzala and Fleming 1997; Renner and Ekart 2003).

It is better to keep strong members intact in the later phases of a GA, so an adaptively changing crossover rate is a good idea (higher rates in the early phases and a lower rate near the end). In some cases it is also desirable to use several different types of crossover at different stages of evolution, as shown in Figure 2.1 (Zalzala and Fleming 1997; Renner and Ekart 2003).

(iii) Mutation: A new individual is created by modifying a selected (existing) individual (Zalzala and Fleming 1997; Mitchell 1996). The modification may change one or more values in the representation, or add or delete parts of it. In GAs, mutation is a source of population diversity and is applied in addition to crossover and reproduction. Mutation is considered a background operator: it guarantees that the probability of obtaining any solution is never zero and acts as a means to recover good genetic material that may be lost through crossover. Mutation is randomly applied with low probability, typically in the range of 0.001 to 0.01 per bit, and modifies elements in the chromosomes (Zalzala and Fleming 1997). Different kinds of mutation operators have been developed. Inverted mutation is mutation per bit, wherein each bit receives the opposite value of what it had previously (Hoff et al. 1996); for example, the chromosome 1001101 becomes 0110010 after inverted mutation. Other mutation operations include trade mutation (Lucasius and Kateman 1997), whereby the contribution of individual genes in a chromosome is used to direct mutation towards weaker terms, and reorder mutation (Lucasius and Kateman 1997), which swaps the positions of bits or genes to increase diversity in the decision variable space.
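The operators above can be sketched on list-of-bits chromosomes. The helper names below are our own, and the inverted-mutation example reproduces the 1001101 → 0110010 behavior described in the text.

```python
import random

def one_point_crossover(p1, p2, rng):
    """Exchange the tails of two parent bit-strings at a random cut point."""
    cut = rng.randrange(1, len(p1))          # cut strictly inside the string
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bit_flip_mutation(chrom, rate, rng):
    """Flip each bit independently with a small probability (e.g. 0.001 to 0.01)."""
    return [b ^ 1 if rng.random() < rate else b for b in chrom]

def inverted_mutation(chrom):
    """Per-bit inversion: every bit receives the opposite value."""
    return [b ^ 1 for b in chrom]
```

For example, `inverted_mutation([1, 0, 0, 1, 1, 0, 1])` yields `[0, 1, 1, 0, 0, 1, 0]`, i.e. 1001101 → 0110010.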

Figure 2.1: Different types of crossover operators (Zalzala and Fleming 1997; Renner and Ekart 2003)

A GA begins with a group of solutions called a population. Figure 2.2 (Renner and Ekart 2003) shows a general flowchart of a genetic algorithm.

Figure 2.2: Genetic Algorithm Flowchart (Renner and Ekart 2003)

The GA works as follows:

(i) The initial population comprises individuals that are usually created randomly. The resulting population may vary in the proportion of members that are feasible for the specific problem (some GA approaches guarantee the feasibility of all members).

(ii) A fitness measure is used to evaluate each individual in the current population.

(iii) If the termination criterion is satisfied, the best solution or set of solutions, is returned.

(iv) Based on the computed fitness values, individuals are selected from the current population. A new population is formed by applying the genetic operators (reproduction, crossover, mutation) to these selected individuals. The selected individuals are called parents and the resulting individuals are called offspring. Implementations of GAs differ in how each new population is constructed. Some implementations extend the current population with the new individuals and then downsize to a new population by omitting the least fit individuals; others create a separate population of new individuals by applying the genetic operators.

(v) The actions starting from step (ii) are repeated until a termination criterion is satisfied. Each iteration is called a generation. The GA termination criterion is usually some predefined number of iterations or evidence that future solutions will not improve. The next generation is based on selecting parents and offspring for survival, again according to some predefined strategy. Non-selected solutions “die” and are removed from the population.
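Steps (i)-(v) can be sketched as a generic GA loop. The sketch below is our own minimal illustration (tournament selection, one-point crossover, bit-flip mutation, and generational replacement with elitism), not any specific published implementation; the fitness function is supplied by the caller.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=100,
                      cx_rate=0.9, mut_rate=0.01, seed=0):
    """Generic GA loop following steps (i)-(v): random initial population,
    fitness evaluation, tournament selection, one-point crossover,
    bit-flip mutation, and generational replacement with elitism."""
    rng = random.Random(seed)
    # step (i): random initial population of bit-string individuals
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # step (iv): select the fitter of two randomly drawn individuals
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    best = max(pop, key=fitness)                     # step (ii): evaluate
    for _ in range(generations):                     # step (v): iterate
        nxt = [best[:]]                              # elitism: keep the incumbent
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < cx_rate:               # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                for j in range(n_bits):              # bit-flip mutation
                    if rng.random() < mut_rate:
                        c[j] ^= 1
                nxt.append(c)
        pop = nxt[:pop_size]
        best = max(pop, key=fitness)
    return best, fitness(best)                       # step (iii): return the best
```

For a MDKP, `fitness` would combine the objective value with a feasibility repair or penalty, as in the approaches surveyed below.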

Hoff et al. (1996) showed how proper selection of parameters and search mechanisms yields a genetic algorithm producing high-quality solutions for the MDKP. They examined parameters based on recommendations from the literature and conducted empirical tests to find a best set of parameters. They also used routines, called filter methods, to ensure all solutions were feasible. The experimental tests were carried out on 57 standard MDKP test problems; optimal results were found for 56 of the 57 test cases.

Chu and Beasley (1998) incorporated a heuristic operator using problem-specific knowledge into the standard genetic algorithm approach. A binary representation was used for solutions. Tournament selection chose the parents, and uniform crossover and flip mutation generated offspring. A population size of 100 was used with steady-state replacement and a mutation rate of two bits per child string. An ADD/DROP heuristic was used to improve the fitness of a feasible solution and to obtain a feasible solution from an infeasible solution, respectively; the operator fixes infeasible chromosomes using Toyoda’s concept of aggregate resource for knapsack problems. The objective function value was used as the fitness value. The GA performance was first tested on standard test problems. The first set consisted of 55 problems used in earlier research (Aboudi and Jornsten 1994; Løkketangen and Glover 1998; Hoff et al. 1996; Løkketangen 1995). Optimal solutions were found for all 55 test problems. A second test set of 270 large problems was generated randomly. The results indicated that the GA heuristic was very effective for these large problems and capable of converging to high-quality solutions. These 270 problems have since become de facto standard problems for MDKP analyses.

Thiel and Voss (1993) used four different techniques for solving MDKP problems. The first technique used a penalty function approach. They found that the degree to which the fitness of infeasible solutions is penalized is important: if the penalty function is too restrictive, the GA converges to suboptimal solutions; if it is too loose, infeasible solutions dominate the population. The penalty function they implemented considered three cases and evaluated solutions according to fitness value and distance from feasibility. In the first case, the solution is feasible and the penalty is zero. If an infeasible chromosome’s value was less than the population mean, it was considered a very poor solution and assigned a fitness value of 1.0, effectively removing it. If the fitness value of the chromosome is above the average fitness value, it is penalized by an infeasibility distance measure.

A second technique used the ADD/DROP operator. The ADD/DROP operator starts by dropping an item; in subsequent steps, other items are added while preserving the feasibility of the solution. Items are selected based on a pseudo-utility measure.

Thiel and Voss’s third technique prevented solutions from becoming infeasible. Their filter operator begins by testing whether a solution is feasible; if it is not, random items are removed until a feasible solution is obtained.

Their fourth technique used a tabu operator. The tabu operator is applied either to a randomly determined chromosome string or to the chromosome string with the highest fitness in the population. The tabu operator helps avoid local optimality and provides improved solutions within the GA search. This operator is applied only to feasible strings; hence, the filter operator is processed before the tabu operator to make the chromosome string feasible.

The results obtained with the penalty function were not satisfactory. The ADD/DROP and filter operators improved solution performance; using the filter and ADD/DROP operators improved the solutions by 0.5%. The results obtained using the tabu operator proved the best: optimal solutions were obtained for most of the test problems, at the expense of increased computing time.

Hill and Hiremath (2005) developed and tested a new initial-population generation approach that uses information about problem structure to solve the MDKP with GAs. The approach focuses on generating initial populations that are stronger in solution quality and diversity and that hover near the boundary between feasible and infeasible solutions in the problem solution space. The approach randomly sets more variables to one when the constraints are loose and fewer variables to one when the constraints are tight. Computational tests were performed on Hill and Reilly’s (2000) test instances. Compared to a traditional random initial-population generation approach, the new approach shifted the ‘best so far’ curve to the left, reducing the time to reach good solutions and showing that a better initial population helps GA convergence. The improved performance was most notable for the harder problems with tighter constraints.

2.6.3 Simulated Annealing (SA)

Simulated annealing (SA) is a local search algorithm with a random element. A simple form of local search (a descent algorithm) starts with an initial solution, usually chosen at random. A neighbor of this solution is generated by some suitable mechanism and the change in cost is calculated. If the cost is reduced, the generated neighbor replaces the current solution; the process repeats until no further improvement can be found in the neighborhood of the current solution, and the descent algorithm terminates at a local minimum (Eglese 1990). SA differs in that a cost-increasing neighbor may also replace the current solution, with some probability that changes over the course of the algorithm: a high probability makes the SA behave like a random search, while a low probability makes it behave like a steepest descent algorithm. The probability varies according to a thermodynamic function involving a temperature parameter. The efficiency of SA depends on the neighborhood structure used. The key feature of SA is that it provides a means to escape local optima by allowing hill-climbing moves in hopes of finding a global optimum (Henderson et al. 2003).

Drexl (1988) used a Probabilistic Exchange heuristic (PROEXC) to solve the MDKP. SA is a random local search that allows non-improving moves with a probability governed by a temperature t. After r repetitions at temperature t, the temperature is reduced by a factor ϕ and the number of repetitions is increased by a factor ρ. Drexl initialized t as

t = α × β    (2.27)

where

α = max{cj | ∀j} − min{cj | ∀j}    (2.28)

Drexl found that PROEXC works best at β = 0.5 and ϕ = 0.6, that n is a good value for r, and that 1.2 is a good value for ρ.

The PROEXC was tested on 57 standard problems with known optimal solutions. PROEXC executed very quickly and gave good solutions. PROEXC solution quality was superior to that of a deterministic variant, DETEXC, but DETEXC was faster in execution time. DETEXC is equivalent to PROEXC except that only changes improving the objective function value are performed, making DETEXC effectively a steepest-ascent algorithm.
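A PROEXC-style annealing scheme can be sketched as follows. The sketch is simplified to a single-constraint 0-1 knapsack (Drexl applied the idea to the MDKP); the t = α × β initialization and the ϕ/ρ schedule follow the description above, while the 25 cooling stages, the Boltzmann acceptance rule exp(Δ/t), and the instance are our own illustrative choices.

```python
import math
import random

def proexc_like_sa(values, weights, capacity, beta=0.5, phi=0.6, rho=1.2, seed=0):
    """Sketch of a PROEXC-style simulated annealing for a 0-1 knapsack.
    Initial temperature t = alpha * beta with alpha = max(c_j) - min(c_j);
    after r repetitions, t is cooled by phi and r is grown by rho."""
    rng = random.Random(seed)
    n = len(values)
    alpha = max(values) - min(values)
    t = alpha * beta
    r = n                                   # Drexl found r = n effective
    x = [0] * n
    best, best_val, cur_val = x[:], 0, 0

    def feasible(sol):
        return sum(w for w, xi in zip(weights, sol) if xi) <= capacity

    for _ in range(25):                     # 25 cooling stages (illustrative)
        for _ in range(int(r)):
            j = rng.randrange(n)            # random single-bit exchange
            y = x[:]
            y[j] ^= 1
            if not feasible(y):
                continue                    # reject infeasible neighbors
            new_val = cur_val + (values[j] if y[j] else -values[j])
            delta = new_val - cur_val
            # accept improvements always, deteriorations with probability exp(delta/t)
            if delta >= 0 or rng.random() < math.exp(delta / t):
                x, cur_val = y, new_val
                if cur_val > best_val:
                    best, best_val = x[:], cur_val
        t *= phi                            # cool the temperature
        r *= rho                            # lengthen the stage
    return best, best_val
```

As the temperature falls, the acceptance probability for worsening moves vanishes and the search behaves like a steepest-ascent procedure, mirroring the PROEXC/DETEXC contrast above.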

3. Extensions and Variants of the Knapsack Problem involving the notion of sets

3.1 Introduction

Knapsack problems find applications in cargo loading, capital budgeting, cutting stock, and resource allocation. However, industrial applications must consider additional constraints such as time windows, priority requests, and differing weights and volumes. Such applications lead to many variations and extensions of the KP. Since these extensions of the basic knapsack model address practical optimization problems, these more general variants of the KP have become standard problems (Kellerer et al. 2004). Bounded Knapsack Problems, Multiple Knapsack Problems, Multiple-Choice Knapsack Problems, and Multiple-choice Multi-dimensional Knapsack Problems are the KP variants discussed in the following sections. Each of these KP variants involves the notion of sets or classes of items under consideration.

3.2 Multiple Knapsack Problems (MKP)

3.2.1 MKP Formulation

The Multiple Knapsack Problem generalizes the standard knapsack problem from a single knapsack to m knapsacks, each with its own capacity. Given n items and m knapsacks with capacities ci, i = 1, ..., m, where each item j has a profit pj and weight wj, the objective is to maximize the total profit of the selected items. The MKP is formulated as:

Maximize,

Z = Σ_{i=1}^{m} Σ_{j=1}^{n} p_j x_{ij}    (3.1)

subject to,

Σ_{j=1}^{n} w_j x_{ij} ≤ c_i,    i = 1, ..., m    (3.2)

Σ_{i=1}^{m} x_{ij} ≤ 1,    j = 1, ..., n    (3.3)

x_{ij} ∈ {0, 1},    i = 1, ..., m,    j = 1, ..., n    (3.4)

Equation (3.1) provides the total profit of the assignment, where xij = 1 indicates item j is assigned to knapsack i. Equation (3.2) ensures each knapsack capacity constraint is satisfied, while Equation (3.3) ensures each item is assigned to at most one knapsack. Finally, Equation (3.4) is the binary selection requirement on the decision variables xij, with xij = 1 if item j is placed in knapsack i and xij = 0 otherwise.
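To make the formulation concrete, a minimal greedy construction for the MKP can be sketched as follows. This is an illustrative heuristic of our own (a simple profit-to-weight ordering), not one of the published algorithms reviewed below.

```python
def greedy_mkp(profits, weights, capacities):
    """Greedy construction for the MKP of (3.1)-(3.4): consider items by
    decreasing profit/weight ratio and place each into the first knapsack
    with sufficient residual capacity; unplaced items stay unassigned,
    consistent with constraint (3.3)."""
    residual = list(capacities)
    assignment = {}                       # item index -> knapsack index
    order = sorted(range(len(profits)),
                   key=lambda j: profits[j] / weights[j], reverse=True)
    for j in order:
        for i, cap in enumerate(residual):
            if weights[j] <= cap:         # each item goes to at most one knapsack
                residual[i] -= weights[j]
                assignment[j] = i
                break
    total = sum(profits[j] for j in assignment)
    return assignment, total
```

For instance, with profits [10, 7, 4, 3], weights [5, 4, 3, 2], and two knapsacks of capacity 5, the sketch assigns items 0 and 1 to separate knapsacks for a total profit of 17.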

3.2.2 Heuristic Solution Approaches for MKP

Hung and Fisk (1978) developed and tested a B & B algorithm based on a depth-first search for solving the MKP. The procedure constructs higher levels of the decision tree either by assigning an item to a knapsack or by removing an item from a knapsack. The authors used Lagrangian and surrogate relaxation techniques to reduce the MKP to a single knapsack problem. The item assigned at each level of the tree depends on the type of relaxation: with Lagrangian relaxation, the algorithm selects an item appearing in the most single-knapsack solutions, while with surrogate relaxation it selects an unassigned item with the lowest index (pj/wj). The algorithm considers knapsacks with increasing capacities, and items are assigned to the knapsacks in increasing order of the indices. Testing with both relaxations showed that for smaller numbers of items and boxes, surrogate relaxation was relatively more efficient both in the number of problems solved to optimality and in computation time, while for larger problems Lagrangian relaxation was better. On average, the solution times for Lagrangian relaxation were higher than those for surrogate relaxation.

Martello and Toth (1985) developed and tested an algorithm (MTM) using a branch-and-bound approach. Their algorithm solves the problem exactly using a depth-first tree search and was tested on randomly generated test problems. The computational results showed that the time required to find the exact solution increased more rapidly with the number of knapsacks (m) than with the number of variables (n), and became impractical for m > 10. When the number of backtrackings was limited to 10 or 50, the algorithm yielded solutions close to the optimum with reasonable computational times that increased slowly in both n and m.

Pisinger (1999b) developed an exact algorithm (called MULKNAP) incorporating a branch-and-bound approach to solve the MKP. His recursive branch-and-bound approach applied surrogate relaxation for deriving upper bounds. Lower bounds were obtained by splitting the surrogate solution into m knapsacks and solving a series of subset-sum problems using dynamic programming. The dynamic programming approach is also used to tighten the capacity constraints and obtain better upper bounds. MULKNAP outperformed the MTM algorithm and was the first algorithm to solve instances of large size (up to n = 100000) in less than a second.

Zeisler (2000) developed a greedy heuristic to solve an intratheater airlift problem. Effective execution of an air operational mission requires coordinating air traffic both to a location and within a location. Given a mixture of aircraft sent to the theater, the intratheater airlift problem considers the amount of cargo to be moved to a specific location within some specified period of time. Zeisler formulated the problem as an MKP with the objective of maximizing throughput in a theater, given a vehicle mixture and an assignment scheme, with an additional constraint to even out the distribution of jobs. His model finds a solution by solving the route assignment for each vehicle at each location. The model allows a heterogeneous, user-defined vehicle mix in a theater with five bed-down locations and seven forward operating locations. The model preprocesses the routes by eliminating unattractive routes from the problem and then applies a greedy heuristic to select routes and assign them to the aircraft at the bed-down locations.

3.3 Multiple Choice Knapsack Problems (MCKP)

3.3.1 MCKP Formulation

MCKP is defined as a binary knapsack problem with the addition of disjoint multiple-choice constraints (Sinha and Zoltners 1979). Given m mutually disjoint classes N1, ..., Nm of items, exactly one item from each class must be packed into a single knapsack of capacity c. Each item j ∈ Ni has a profit pij and a weight wij. The objective of the problem is to maximize the profit of a feasible solution. The MCKP is formulated as:

Maximize,

Z = Σ_{i=1}^{m} Σ_{j∈N_i} p_{ij} x_{ij}    (3.5)

subject to,

Σ_{i=1}^{m} Σ_{j∈N_i} w_{ij} x_{ij} ≤ c    (3.6)

Σ_{j∈N_i} x_{ij} = 1,    i ∈ {1, ..., m}    (3.7)

x_{ij} ∈ {0, 1},    i ∈ {1, ..., m},    j ∈ N_i    (3.8)

Equation (3.5) calculates the profit of an assignment of items, the value to be maximized. Equation (3.6) ensures the single knapsack capacity is not exceeded, while Equation (3.7) ensures that a single item is selected from each of the m disjoint classes. Equation (3.8) is the binary selection requirement on the decision variable xij, with xij = 1 if selected and xij = 0 otherwise.
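The multiple-choice structure of (3.5)-(3.8) admits a simple dynamic program over the capacity. The sketch below is a textbook-style illustration of our own (not one of the published algorithms reviewed next): each class contributes exactly one item, and infeasibility is reported when no selection fits.

```python
def mckp_dp(classes, capacity):
    """Dynamic program for the MCKP of (3.5)-(3.8). `classes` is a list of
    classes, each a list of (profit, weight) items with integer weights;
    exactly one item must be taken from every class. dp[c] holds the best
    profit achievable using total weight exactly c, or -inf if unreachable."""
    NEG = float("-inf")
    dp = [0] + [NEG] * capacity            # only zero weight reachable initially
    for items in classes:
        nxt = [NEG] * (capacity + 1)
        for c in range(capacity + 1):
            if dp[c] == NEG:
                continue
            for p, w in items:             # choose exactly one item from this class
                if c + w <= capacity:
                    nxt[c + w] = max(nxt[c + w], dp[c] + p)
        dp = nxt                           # this class is now committed
    best = max(dp)
    return None if best == NEG else best
```

The run time is O(capacity × total number of items), which motivates the core-based and partitioning refinements discussed below.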

3.3.2 Heuristic Solution Approaches for MCKP

Sinha and Zoltners (1979) present a B & B algorithm for the MCKP. The candidate problem at every branch is an MCKP. Linear programming (LP) relaxations of the subproblems are solved for fathoming and for providing branching directions. The strength of the algorithm lies in quickly solving the LP relaxation of the MCKP and efficiently re-optimizing it after branching. The algorithm was tested on randomly generated problems with coefficients drawn from a uniform distribution. Computational results indicated that solution times increased faster with the number of classes than with the number of variables within each class.

Armstrong et al. (1983) developed a B & B algorithm to solve the MCKP. At each stage, their linear relaxation was solved by one of two algorithms depending on the initialization rule: least-cost or most-cost. They developed and investigated various versions of the algorithm, examining differences in data list structures, sorting techniques, and fathoming criteria. Their algorithm eliminated considerable storage space relative to an earlier implementation by Sinha and Zoltners (1979) with no effect on execution time. Armstrong et al. (1983) also determined that using a heap sort reduced computation time when solving larger problems.

Pisinger (1995) presented a partitioning algorithm for deriving an optimal linear programming solution and showed how it can be incorporated into a dynamic programming algorithm so that a minimal number of classes are enumerated, sorted, and reduced. The partitioning algorithm solves the Linear Multiple Choice Knapsack Problem (LMCKP) and yields an initial feasible solution for the MCKP. Starting from this initial solution, dynamic programming solves the MCKP by adding new classes to the core problem as needed, so a minimal number of classes is used to solve the MCKP to optimality. Pisinger varied the problem sizes, test instances, and data ranges when testing his algorithm; it outperformed the previously known algorithms for the MCKP.

Pisinger (2001) has also modeled the budgeting problem with bounded multiple-choice constraints (BBMC) as a generalization of the MCKP. Pisinger discusses transforming the BBMC to an equivalent MCKP using dynamic programming, which increases the number of variables, and hence proposes a Dantzig-Wolfe decomposition. The MCKP is solved using extreme points of each of the transformed classes. A slope is found by determining extreme items for every class using Equations (3.10) and (3.11).

M_i(α) = arg max_{j∈N_i} (p_{ij} − α w_{ij}),    i = 1, ..., m    (3.9)

a_i = arg min_{j∈M_i(α*)} w_{ij},    i = 1, ..., m    (3.10)

b_i = arg max_{j∈M_i(α*)} w_{ij},    i = 1, ..., m    (3.11)

For every class N_i, M_i(α*) is determined, where M_i(α), found using Equation (3.9), is the set of extreme items from class i in the negative direction of the slope and α* is the optimal slope. For a given class i, M_i(α) may contain more than one item. Let a_i and b_i be the items with the smallest and largest weights in the set M_i, given by Equations (3.10) and (3.11), respectively. Select item a_i from every class N_i and compute the residual knapsack capacity c − Σ_{i=1}^{m} w_{i a_i}. Exchange item a_i with b_i in an arbitrarily selected class, and repeat until the new residual capacity c + Σ_{i=1}^{m} w_{i a_i} − Σ_{i=1}^{m} w_{i b_i} < 0 for some class i. Set x_{b_i} = c/(w_{b_i} − w_{a_i}) and x_{a_i} = 1 − x_{b_i}. This is an LP-optimal solution with a_i and b_i being extreme items.

Pisinger’s algorithm is based on dynamic programming, where the search focuses on a small number of classes with a greater probability of finding an optimal solution.

Kozanidis et al. (2005) introduced a variation of the LMCKP incorporating equity constraints (LMCKPE). They present a mathematical formulation of the LMCKPE and develop an optimal two-phase greedy algorithm. Phase I enhances an existing method for the LMCKP to obtain an optimal solution. This solution initializes Phase II, which first violates the equity constraints and then works back toward feasibility while maintaining optimality; the algorithm terminates when the equity constraints are satisfied. The greedy algorithm was tested on randomly generated test problems. Computational results showed that it took less time than a commercial linear programming package, with the time savings increasing as the problem size increased.

3.4 Multiple-choice Multi-dimensional Knapsack Problems (MMKP)

3.4.1 MMKP Formulation

The MMKP combines aspects of the Multidimensional Knapsack Problem (MDKP) and the Multiple-Choice Knapsack Problem (MCKP). As in the MDKP, the resources are multidimensional, i.e., the knapsack has multiple resource constraints that must be satisfied simultaneously. As in the MCKP, one item is selected from each class.

Suppose there are m classes of items, where class i has ni items, and there are l resource constraints. Each item j of class i has a non-negative profit value pij and requires resources given by the weight vector wij = (wij1, wij2, ..., wijl). The amounts of available resources are given by a vector c = (c1, c2, ..., cl). The variable xij equals 1 if item j of class i is picked and 0 otherwise. The MMKP selects exactly one item from every class to maximize the total profit, subject to the resource constraints. The MMKP is formulated as:

Maximize,

Z = Σ_{i=1}^{m} Σ_{j=1}^{n_i} p_{ij} x_{ij}    (3.12)

subject to,

Σ_{i=1}^{m} Σ_{j=1}^{n_i} w_{ijk} x_{ij} ≤ c_k,    k ∈ {1, ..., l}    (3.13)

Σ_{j=1}^{n_i} x_{ij} = 1,    i ∈ {1, ..., m}    (3.14)

x_{ij} ∈ {0, 1},    i ∈ {1, ..., m},    j ∈ {1, ..., n_i}    (3.15)

Equation (3.12) provides the profit of selecting one item from every class, the value to be maximized. Equation (3.13) ensures the capacity of resource k is not exceeded, while Equation (3.14) ensures a single item is selected from each of the m classes. Equation (3.15) is the binary selection requirement on the decision variable xij, with xij = 1 if selected and xij = 0 otherwise.
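For very small instances, the formulation (3.12)-(3.15) can be checked directly by enumerating one item per class. The brute-force reference solver below (our own illustration, usable only for tiny instances) makes the class, weight-vector, and multi-constraint structure explicit.

```python
from itertools import product

def mmkp_exhaustive(classes, capacities):
    """Brute-force MMKP reference solver for (3.12)-(3.15): classes[i] is a
    list of (profit, weight_vector) items; exactly one item is taken per
    class, and every resource k must respect capacities[k]. The enumeration
    is exponential in the number of classes."""
    best_val, best_pick = None, None
    for pick in product(*[range(len(c)) for c in classes]):
        profit = 0
        used = [0] * len(capacities)
        for i, j in enumerate(pick):         # one item j from each class i
            p, w = classes[i][j]
            profit += p
            for k, wk in enumerate(w):       # accumulate each resource k
                used[k] += wk
        if all(u <= c for u, c in zip(used, capacities)) and \
           (best_val is None or profit > best_val):
            best_val, best_pick = profit, pick
    return best_val, best_pick
```

Such an enumerator is useful only for validating heuristics on toy instances; the heuristics surveyed next exist precisely because this enumeration grows exponentially with m.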

Since the MMKP is an NP-hard problem, the computation time to reach the optimal solution increases exponentially with the size of the problem (Khan et al. 2002). Such problems can be solved either to optimality or to near-optimality; near-optimal solutions are often sufficient and take less computation time than exact optimal solutions. For real-time decision-making applications, such as quality adaptation and admission control in interactive multimedia systems (Chen et al. 1999) or service-level agreement management in telecommunication networks (Watson 2001, as referenced in Hifi et al. (2004)), algorithms for finding exact solutions to the MMKP are not suitable due to the complexity of the problem and the need for rapid system response.

3.4.2 Heuristic Solution Approaches for MMKP

Moser et al. (1997) developed a heuristic algorithm based on Lagrange multipliers to solve the MMKP. The algorithm starts with the most valuable item of each class and the Lagrange multipliers initialized to zero. The initial choice of selected items is then adapted to the constraints and repeatedly improved with respect to the most violated constraint. The item to exchange is found by computing the increase of the Lagrange multiplier for all non-selected items in every class relative to the selected item of the class. The item causing the least increase in the multiplier is chosen for exchange, and the multipliers are re-evaluated with this least incremental value. The process is repeated until an item from every class is selected or no more replacements are possible.

Once the process is complete, there may be space in the knapsack to further improve the solution by replacing some of the selected items with more valuable items. In this last phase, every item is checked to determine whether it is more valuable than the selected item and whether it satisfies the resource constraints. Among all exchangeable items, the one causing the largest increase in the knapsack value is exchanged with the currently selected item of its class. This process is repeated until no more exchanges are possible. In tests comparing their heuristic against an exact dynamic programming algorithm, the Moser et al. (1997) approach yielded results in less time than the exact algorithm.

Akbar et al. (2001) present two heuristic algorithms for solving the MMKP. Their initial heuristic, M-HEU, uses both improving and non-improving item selections. The authors then extended M-HEU to an incremental method, I-HEU, for real-time systems in which the number of classes in the MMKP is very large. I-HEU solves the MMKP incrementally, starting from an MMKP solution based on a smaller number of classes. Test results show that I-HEU offers results similar to M-HEU in less time. The scalability of I-HEU makes it a strong candidate for real-time applications such as adaptive multimedia systems.

Khan et al. (2002) solve the Adaptive Multimedia Problem (AMP) with multiple concurrent sessions, where the qualities of the individual sessions are dynamically adapted to the available resources and user preferences. A multimedia session consists of three media: audio, video, and still image. Different combinations of these three media lead to various session quality profiles; a session’s quality profile is equivalent to a stack of items. The m sessions’ quality profiles map to m classes of items, so item j of class i denotes an operating quality of session i. The corresponding session utility is the profit value pij, and the amount of resource k consumed by session i is wijk. The resource constraints of the AMP are equivalent to the resource constraints of the MMKP, and finding an operating quality for each session in the AMP is equivalent to selecting one item from every class. The objective of the AMP is to maximize the system utility subject to the system resource constraints.

The Khan et al. (2002) heuristic approach (HEU) uses Toyoda's (1975) concept of an aggregate resource for selecting items for the knapsack. They improve the solution using an item-exchange approach. HEU was tested and compared to branch-and-bound search using linear programming (BBLP) and Moser's heuristic (MOSER), which is based on Lagrangian relaxation. The results obtained are within 6% of the optimal value obtained by the BBLP approach. HEU outperformed Moser's heuristic in both solution quality and computation time.

Hifi et al. (2004) propose several heuristic algorithms for the MMKP. The first two are a constructive approach and a complementary approach. The constructive procedure (CP) is a greedy procedure involving a DROP and an ADD phase to generate new feasible solutions from current feasible solutions. The complementary CP (CCP) iteratively improves the initial feasible solution. The CCP uses a swapping strategy: pick an item and replace it with another item selected from the same class. The algorithms were tested on Khan's (2002) benchmark problems, with results indicating the CCP performed better than the CP at a slightly longer computational time. The third Hifi et al. (2004) approach is the Guided Local Search (GLS) method.

The GLS uses memory to guide the search toward promising regions of the solution space by augmenting the cost function with a penalty term applied to bad features of previously visited solutions. The authors introduce a new principle based on (a) starting the search with a lower bound obtained by a fast greedy heuristic, (b) improving the quality of the initial solution using the complementary procedure, and (c) searching for the best feasible solution over a set of neighborhoods. The GLS algorithm is a two-stage procedure; the first stage penalizes the current solution while the second stage normalizes and improves the quality of the solution obtained by the first stage.

Lau and Lim (2004) propose and compare two metaheuristic approaches to obtain near-optimal solutions for a logistics scheduling problem, called Available-to-Promise, modeled as a Multi-Period Multidimensional Knapsack Problem (MPMKP). The metaheuristic approaches used to solve the MPMKP are tabu search (TS) and ant colony optimization (ACO). The authors use a greedy heuristic to find a feasible starting solution for each metaheuristic procedure. Their computational results on benchmark problems show that both approaches reached near-optimal solutions quickly and were within 6% deviation of the upper bound. Computationally, the ACO ran faster than the TS.

Lin and Wu (2004) examine the properties of the Multiple-Choice Multi-Period Knapsack Problem (MCMPKP) based on the Armstrong et al. (1982) simple dominance and convex dominance properties for the MCKP. The simple dominance property states that if two items j and k from class N_i satisfy w_{ij} ≤ w_{ik} and p_{ij} ≥ p_{ik}, i = 1, ..., m, then item j dominates item k. The convex dominance property states that if three items j, k, and l from class N_i with w_{ij} < w_{ik} < w_{il} and p_{ij} < p_{ik} < p_{il} satisfy the condition (p_{il} − p_{ik})/(w_{il} − w_{ik}) ≥ (p_{ik} − p_{ij})/(w_{ik} − w_{ij}), then item k is LP-dominated by items j and l.

Lin and Wu (2004) develop a heuristic approach incorporating primal and dual gradient methods to obtain a strong lower bound. Further, two branch-and-bound procedures are developed for locating the optimal solution of the MCMPKP using the lower bound as an initial solution. The first branch-and-bound procedure solves the MCMPKP using a special algorithm incorporating a concept from the nested knapsack problem, while the second procedure uses the generalized upper bounding technique through the concept of multiple-choice programming. Armstrong et al. (1982) define the multiple-choice nested knapsack problem (NK) as a generalized form of the multiple-choice knapsack problem in which several resource constraints are nested across the multiple-choice classes. The NK with l resources is formulated like the MCKP represented by equations (3.5) through (3.8), except that equation (3.6) is modified for several resource constraints as in equation (3.16).

\[
\sum_{i=1}^{m} \sum_{j \in N_i} w_{ijk}\, x_{ij} \le c_k, \quad k \in \{1, ..., l\} \qquad (3.16)
\]

The test results of the Lin and Wu (2004) heuristic indicated that the lower bound for the tightly structured MCMPKP is better than that for the loosely structured MCMPKP. The computational results obtained by incorporating the lower bound in the branch-and-bound procedures showed that the special algorithm performed better than the multiple-choice programming technique for the smaller-sized and loosely structured MCMPKP.

Parra-Hernandez and Dimopoulos (2005) developed the Hmmkp algorithm for solving the MMKP. In Hmmkp, the MMKP is first reduced to an MDKP, and a linear programming relaxation of the resulting MDKP is solved. Pseudo-utility and resource value coefficients are then computed and used to obtain a feasible solution for the original MMKP. Hmmkp was tested against the heuristic techniques developed by Akbar et al. (2001) and found to provide better solutions. The computation time for Hmmkp was higher than for the Akbar et al. (2001) heuristics, as the LP relaxation consumed approximately 95% of the total time.

Akbar et al. (2006) transformed the MMKP by reducing the multidimensional resource constraints to a single constraint, using a penalty vector to replace the constraints with a single representative constraint. They present a heuristic (C-HEU) that constructs convex hulls to reduce the search space when finding a near-optimal solution to the MMKP. They compare the results of solving the MMKP using C-HEU to MOSER (Moser et al. 1997), M-HEU (Akbar et al. 2001), branch-and-bound search using linear programming, and a greedy approach (G-HEU) based on a linear search of all items of each group, picking the highest-valued feasible item from each group. They tested and compared these heuristics on random and correlated problem sets. The results indicated that C-HEU yielded better results for uncorrelated data instances and had a quicker response time than MOSER and M-HEU. The solutions obtained by C-HEU were between 88% and 98% of optimality.
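The penalty-vector reduction can be illustrated as follows. This is a sketch: the function name is invented, and the penalty vector is simply passed in as an argument rather than derived as Akbar et al. (2006) do.

```python
def aggregate_constraint(weights, capacities, penalty):
    """Collapse l resource constraints into a single surrogate constraint
    via a penalty (multiplier) vector. Illustrative sketch only.

    weights[i][j] : resource-consumption vector of item j in class i
    capacities    : the l right-hand-side capacities
    penalty       : one nonnegative multiplier per resource (assumed given)
    """
    agg_w = [[sum(p * w for p, w in zip(penalty, wij)) for wij in cls]
             for cls in weights]
    agg_c = sum(p * c for p, c in zip(penalty, capacities))
    return agg_w, agg_c
```

Because the single constraint is a nonnegative combination of the originals, any solution feasible for the multidimensional constraints remains feasible for the aggregated one; the converse does not hold, so a heuristic working on the aggregate must still verify feasibility against the original constraints.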

Li et al. (2004) model the strike force asset allocation problem as a problem containing multiple knapsack constraints and multiple-choice constraints. This problem groups the strike force assets into packages and assigns these packages to targets and defensive assets while maximizing the total strike force potential. Li et al. (2004) develop two models in which the strike force assets are assigned to targets and to defensive assets, respectively. The first model considers the strike force asset allocation problem without suppression, using asset classes and targets. Each target is assigned at most one attack package to maximize the total damage, subject to the capacity constraints of all asset classes. The second model considers the problem with suppression by dividing the targets into targets and threats. Integer programming formulations were solved using the CPLEX MIP Parallel Solver and tested on randomly generated test problems varying the number of asset classes, targets, and threats. The results indicated that the model without suppression solved most of the problems to optimality.

Li and Curry (2005) solve the multidimensional knapsack problem with generalized upper bounds (GUBMDKP) using a critical event tabu search. This problem is similar to the MMKP except that the equality in the multiple-choice constraints is replaced by an inequality. The critical event tabu search has a flexible memory structure that is updated at critical events. Critical events are generated by the solutions in the constructive phase just before the solution becomes infeasible and at the beginning of the destructive phase when the solution becomes feasible again. Choice rules based on Lagrangian relaxation and surrogate constraint information decide which variables are added, dropped, or swapped. Intensification and diversification are achieved by a strategic oscillation scheme visiting each side of the feasibility boundary. Trial solutions for the critical events are obtained by a local search improvement method. The three test problem instances from Li et al. (2004) were used to generate new test problem instances by incrementing the number of available variables. The computational results of the critical event tabu search on these test problems showed that the surrogate constraint choice rules achieved better solutions than the Lagrangian relaxation choice rules. The intensification in the trial solution approach reduced the average gap of the problems.

Li (2005) extends the heuristic of Li and Curry (2005) by introducing a tight-oscillation process to intensify the search when the critical event tabu search finds trial solutions near the feasibility boundary. The trial solution approach in the critical event tabu search considered only the change in the objective function value, while the tight-oscillation choice rules consider both the change in the objective function value and the change in feasibility. When comparing the two heuristics, the results showed an improvement in solution quality for the tight-oscillation process.

Dawande et al. (2000) studied the multiple knapsack problem with assignment restrictions (MKPAR) and discussed approximate greedy algorithms to solve it. The problem considers a set of items with positive weights, a set of knapsacks with capacities, and, for each item, a set of knapsacks that can hold it; each item is assigned to at most one knapsack subject to the knapsack capacities. The objectives considered were to maximize the total assigned weight and minimize the utilized capacity. The first greedy approach successively solves m single knapsack problems, while the second algorithm is based on rounding the LP relaxation solution.

3.5 Applications and Formulations of the MMKP-type problems

Basnet and Wilson (2005) discuss an extension of the classical knapsack problem in which the objective is to minimize the number of warehouses needed to store items in a supply chain, considering space requirements and item storage compatibilities. The problem has n items to be stored in up to n candidate warehouses. Each item i has a space requirement S_i with S_i ≤ A, where A denotes the space available in each warehouse. Each warehouse j has a decision variable z_j set to 1 when the warehouse holds one or more items and 0 otherwise. x_{ij} is a binary variable indicating that item i is stored in warehouse j, and c_{ik} is a binary parameter for the compatibility of items i and k. The problem is formulated as:

Minimize,
\[
Z = \sum_{j=1}^{n} z_j \qquad (3.17)
\]
subject to,
\[
\sum_{i=1}^{n} S_i x_{ij} \le A z_j, \quad j = 1, ..., n \qquad (3.18)
\]
\[
x_{ij} + x_{kj} \le c_{ik} + 1, \quad i = 1, ..., n, \; k = i+1, ..., n, \; j = 1, ..., n \qquad (3.19)
\]
\[
z_{j+1} \le z_j, \quad j = 1, ..., n-1 \qquad (3.20)
\]
\[
\sum_{j=1}^{n} x_{ij} = 1, \quad i = 1, ..., n \qquad (3.21)
\]
\[
c_{ik} = \begin{cases} 1, & \text{if items } i \text{ and } k \text{ are compatible} \\ 0, & \text{if items } i \text{ and } k \text{ are incompatible} \end{cases} \qquad (3.22)
\]

Equation (3.17) minimizes the number of warehouses. Equation (3.18) ensures warehouses are not overfilled. Equation (3.19) ensures incompatible items are not stored together. Equation (3.20) ensures that lower-indexed warehouses are used before higher-indexed ones. Equation (3.21) ensures that each item is stored in exactly one warehouse, and Equation (3.22) defines the compatibility parameter.
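Constraints (3.18) through (3.21) can be checked programmatically for a candidate assignment. The sketch below uses assumed names; encoding the assignment as one warehouse index per item makes (3.21) hold by construction.

```python
def warehouse_feasible(S, A, assign, compat):
    """Check a candidate assignment against constraints (3.18)-(3.21)
    of the Basnet-Wilson formulation. Illustrative sketch.

    S[i]         : space requirement of item i
    A            : space available in each (identical) warehouse
    assign[i]    : warehouse index holding item i
    compat[i][k] : True if items i and k may share a warehouse
    """
    n = len(S)
    n_wh = max(assign) + 1
    # (3.18): no warehouse overfilled
    for j in range(n_wh):
        if sum(S[i] for i in range(n) if assign[i] == j) > A:
            return False
    # (3.19): incompatible items never share a warehouse
    for i in range(n):
        for k in range(i + 1, n):
            if assign[i] == assign[k] and not compat[i][k]:
                return False
    # (3.20): warehouses are used in index order, with no empty gaps
    if {assign[i] for i in range(n)} != set(range(n_wh)):
        return False
    return True
```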

The authors examined heuristics and tested them on randomly generated problem sets. The test problems generated considered the number of items, the space required for each item, and the potential incompatibility between items.

Lardeux et al. (2003) formulated a sub-problem of 2-layered network design (2-LND) in the form of an MMKP, which they refer to as MCMK. The aim is to install capacities on the links of the two layers of a network at minimal global cost. The first layer needs enough capacity to route all traffic demand, and these capacities are in turn treated as traffic demand for the second layer. The authors modeled the problem as a 0-1 LP that is close to the generalized MCMK problem with additional constraints.

Lardeux et al. (2003) formulated the problem by considering a traffic demand vector d, a capacity vector a on the first-layer edges, and a capacity vector b on the second-layer edges. For each link e, x_e and y_e are the capacities of the link for the first and second layers, respectively. For each edge e = (ij) ∈ E, V_e = {v_e^0, v_e^1, ..., v_e^{p(e)}} is the set of discrete values available for the capacity x_e on link e, and γ_e^0, γ_e^1, ..., γ_e^{p(e)} are the corresponding cost values on link e of the first layer. Similarly, W_e = {w_e^0, w_e^1, ..., w_e^{q(e)}} is the set of values, in increasing order, for y_e with e ∈ Ê, with costs (δ_e^0, δ_e^1, ..., δ_e^{q(e)}). x_e^t and y_e^t are binary variables corresponding to the t-th value of available capacity on link e of layers 1 and 2, respectively. L_1 and L_2 are the sets of indices of valid metric inequalities defining all the feasible flows. Lardeux et al. (2003) formulate the 2-LND as:

Minimize,
\[
Z = \sum_{e \in E} \sum_{t=0}^{p(e)} \gamma_e^t x_e^t + \sum_{e \in \hat{E}} \sum_{t=0}^{q(e)} \delta_e^t y_e^t \qquad (3.23)
\]
subject to,
\[
\sum_{ij \in E} \lambda_{ij}^l \sum_{t=0}^{p(e)} v_e^t x_e^t \ge \sum_{i<j} \lambda_{ij}^l d_{ij}, \quad \forall l \in L_1 \qquad (3.24)
\]
\[
\sum_{e \in \hat{E}} \lambda_e^l \sum_{t=0}^{q(e)} w_e^t y_e^t \ge \sum_{e \in E} \lambda_e^l \sum_{t=0}^{p(e)} v_e^t x_e^t, \quad \forall l \in L_2 \qquad (3.25)
\]
\[
\sum_{t=0}^{p(e)} x_e^t = 1, \quad \forall e \in E \qquad (3.26)
\]
\[
\sum_{t=0}^{q(e)} y_e^t = 1, \quad \forall e \in \hat{E} \qquad (3.27)
\]
\[
x_e^t \in \{0, 1\}, \quad \forall e \in E, \; \forall t \in \{1, ..., p(e)\} \qquad (3.28)
\]
\[
y_e^t \in \{0, 1\}, \quad \forall e \in \hat{E}, \; \forall t \in \{1, ..., q(e)\} \qquad (3.29)
\]

Equation (3.23) calculates the total cost of installing capacities on the links, a value to be minimized. Equations (3.24) and (3.25) ensure that the resource capacity of the two layers is not exceeded. Equations (3.26) and (3.27) ensure the selection of only one variable from every class, and Equations (3.28) and (3.29) are the binary selection requirements on the decision variables. In this formulation, some of the components of the constraint left-hand-side matrix take negative values, which makes it difficult to solve this problem with any standard MCMK algorithm. To overcome this difficulty, the authors suggest generating new capacity constraints.

Lardeux et al. (2003) developed a greedy algorithm combined with a global search process (called ApproachMCMK) to design several quite small but difficult 2-layered networks. The performance of ApproachMCMK was compared with two other methods they developed: OptMIP (an optimal design method obtained by solving the 0-1 LP exclusively with MIP) and OptMCMK (an optimal method substituting MIP for the MCMK heuristic in the last iterations). OptMIP and OptMCMK yield optimal solutions. The number of instances, the number of nodes, the numbers of links in the first and second layers, the number of binary variables, and the number of non-negative demands were the problem parameters they employed in the problems they solved.

Feng (2001) formulated the resource allocation problem in a Ka-band satellite system as a binary integer program related to the multiple-choice multiple knapsack problem (MCMKP), with the objective of maximizing the aggregate priority of packets arriving at all downlink spots. His formulation is:

Maximize,
\[
Z = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{l} p_{ij} x_{ijk} \qquad (3.30)
\]
subject to,
\[
\sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} x_{ijk} \le c, \quad k = 1, 2, ..., l \qquad (3.31)
\]
\[
\sum_{i=1}^{m} \sum_{j=1}^{n} x_{ijk} \le N, \quad k = 1, 2, ..., l \qquad (3.32)
\]
\[
\sum_{k=1}^{l} \sum_{j=1}^{n} x_{ijk} = 1, \quad i = 1, 2, ..., m \qquad (3.33)
\]

\[
x_{ijk} \in \{0, 1\}, \quad i = 1, 2, ..., m, \; j = 1, 2, ..., n, \; k = 1, 2, ..., l \qquad (3.34)
\]

where l, m, n, and N are the numbers of bursts in a round, downlink spots, transmission power levels, and antennas, respectively. w_{ij} is the transmission power of level j for downlink spot i, p_{ij} denotes the corresponding priority, and c is the total available power for each burst. The problem is to choose exactly one item from each class to pack into the l knapsacks such that the profit sum is maximized without exceeding any knapsack's capacity. Equation (3.30) calculates the total profit, a value to be maximized. Equation (3.31) ensures that the system power is sufficient to serve the selected spots with their respective power levels in each burst. The limited number of antennas is represented by Equation (3.32). Equation (3.33) serves every downlink spot exactly once in a round, while Equation (3.34) is the binary selection requirement on the decision variable x_{ijk}. Since the MCMKP could require a long computation time to solve to optimality, Feng decomposed the original problem into a sequence of l MCKPs.

Chocolaad (1998) modeled the airlift loading problem of selecting a set of cargo to pack in a given fleet of aircraft as a geometric knapsack problem. The formulation accounts for the shape of each item and the shape of the containers, along with the weight and volume constraints of the aircraft. Chocolaad's model considered a single knapsack, the C-17 aircraft. Romaine (1999) extends Chocolaad's research in two areas: multiple knapsacks rather than a single knapsack, and heterogeneous rather than homogeneous aircraft. Romaine modeled the aircraft load scheduling problem as a multidimensional multiple knapsack problem (MMDKP) with packing constraints. He models each aircraft as a knapsack with multiple constraints (weight and volume), and packing a fleet of planes as an MMDKP (space conflicts, weight and balance, hazardous cargo, floor loading).

Given are n items to be packed in m aircraft. Every item has an associated profit p_j, weight w_j, and volume v_j. Every aircraft has a maximum weight c_i that it can hold and a volume v_i. The problem is to select the items to be packed into the aircraft to maximize the total profit of the selected items. This part of the problem is formulated as:

Maximize,
\[
Z = \sum_{i=1}^{m} \sum_{j=1}^{n} p_j x_{ij} \qquad (3.35)
\]
subject to,
\[
\sum_{j=1}^{n} w_j x_{ij} \le c_i, \quad i = 1, ..., m \qquad (3.36)
\]
\[
\sum_{j=1}^{n} v_j x_{ij} \le v_i, \quad i = 1, ..., m \qquad (3.37)
\]
\[
\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, ..., n \qquad (3.38)
\]
\[
x_{ij} \in \{0, 1\}, \quad i = 1, ..., m, \; j = 1, ..., n \qquad (3.39)
\]

Equation (3.35) provides the total profit of packing the selected items in the aircraft. Equations (3.36) and (3.37) ensure the weight and volume constraints of each aircraft are not violated, while Equation (3.38) ensures each item is assigned to at most one aircraft. Finally, Equation (3.39) is the binary selection requirement on decision variable x_{ij}, such that x_{ij} = 1 if item j is packed in aircraft i and x_{ij} = 0 otherwise.

The packing constraints considered in Chocolaad’s airlift loading problem are:

Maximize,
\[
Z = \sum_{j=1}^{n} p_j x_j \qquad (3.40)
\]
subject to,
\[
s(x_\alpha) \cap s(x_\beta) \equiv \emptyset, \quad \forall \alpha \ne \beta, \; \alpha, \beta \in \{1, ..., n\} \qquad (3.41)
\]
\[
s(x_j) \subseteq S_{total}, \quad \forall j \in \{1, ..., n\} \qquad (3.42)
\]
\[
x_j \in \{0, 1\}, \quad j = 1, ..., n \qquad (3.43)
\]

S_{total} is the space bounding the container volume, s(x_α) and s(x_β) are the spaces occupied by items α and β, respectively, and p_j is the profit associated with item j. Equation (3.40) calculates the total profit of the selected items. Equation (3.41) reflects the requirement that no two items overlap or occupy the same space. Equation (3.42) enforces the requirement that all items be placed within the allotted area, while Equation (3.43) is the binary selection requirement on whether item j is selected.

Due to the computational time associated with using conventional algorithms, Chocolaad (1998) and Romaine (1999) use tabu search to solve the MMDKP.

Raidl (1999) presented a genetic algorithm (GA) approach for solving the multiple container packing problem (MCPP). The problem considers packing n items into m containers of equal capacity c. Each item has a profit p_j and size w_j associated with it. The problem is to select m disjoint subsets of items such that each subset fits into a container and the total value of the selected items is maximized. The MCPP is formulated as a variant of the multiple knapsack problem as:

Maximize,
\[
Z = \sum_{i=1}^{m} \sum_{j=1}^{n} p_j x_{ij} \qquad (3.44)
\]
subject to,
\[
\sum_{j=1}^{n} w_j x_{ij} \le c, \quad i = 1, ..., m \qquad (3.45)
\]
\[
\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, ..., n \qquad (3.46)
\]
\[
w_j \le c, \quad j = 1, ..., n \qquad (3.47)
\]
\[
\sum_{j=1}^{n} w_j \ge c \qquad (3.48)
\]
\[
x_{ij} \in \{0, 1\}, \quad i = 1, ..., m, \; j = 1, ..., n \qquad (3.49)
\]
\[
w_j > 0, \; p_j > 0, \; c > 0, \quad j = 1, ..., n
\]

Equation (3.44) calculates the total value of the selected items, a value to be maximized. Equation (3.45) ensures that the capacity of the containers is not exceeded, while Equation (3.46) ensures that every item is packed into at most one container. Equations (3.47) and (3.48) avoid trivial cases: Equation (3.47) ensures that each item j fits into a container (otherwise it may be removed from the problem), and Equation (3.48) rules out all items fitting into a single container.

Equation (3.49) represents the binary decision variable x_{ij}, such that x_{ij} = 1 if item j is packed in container i and x_{ij} = 0 otherwise. The MCPP is an MKP with c_i = c for all i = 1, ..., m.

4. Legacy Heuristics and Test Problems Analysis

4.1 Introduction

This chapter discusses in detail the legacy heuristic procedures developed for various KP variants and analyzes the test problems generated for testing solution procedures for those variants. Section 4.2 provides a detailed stepwise outline of heuristic approaches developed for solving the Multiple Knapsack Problem (MKP), the Multiple Choice Knapsack Problem (MCKP), and the Multiple-choice Multidimensional Knapsack Problem (MMKP). Specifically studied are the heuristic methods proposed by Hung and Fisk (HUNGFISK, 1978), Martello and Toth (MTM, 1985), and Pisinger (PISINGERMKP, 1999b) for the MKP; Zemel (ZEMEL, 1984) and Dyer et al. (DYERKAYALWALKER, 1984) for the MCKP; and Moser et al. (MOSER, 1997), Khan et al. (HEU, 2002), Hifi et al. (CPCCP and DerAlgo, 2004), and Akbar et al. (M-HEU and I-HEU, 2001) for the MMKP. Section 4.3 examines characteristics of test problems generated for KP variants. Finally, Section 4.4 examines the structure and the characteristics of the various test problems.

This chapter does not examine in detail the heuristics and test problems for the Multi-Dimensional Knapsack Problem (MDKP). The MDKP heuristics and test problem generation approaches are detailed in Cho (2005).

4.2 Legacy Heuristics

4.2.1 Legacy Heuristics for Multiple Knapsack Problems (MKP)

Hung and Fisk (1978) Heuristic (HUNGFISK)

Hung and Fisk (1978) developed an algorithm to solve the MKP by assigning an item to a knapsack or excluding that item from all knapsacks. Let F be the index set of the items not assigned to any knapsack and S be the index set of the items assigned to knapsacks, so that S ∪ F = N and S ∩ F = ∅, where N is the index set of all items. F = ∅ means a feasible solution has been found, and the corresponding objective function value z can be compared to the incumbent solution value z∗.

An overview of the HUNGFISK approach follows:

Step 1: Initialize z∗ = −∞, S = ∅, F = N, tree-level index k = 1.

Step 2: (Bounding) Solve the Lagrangian or surrogate relaxation of the MKP with respect to F. The Lagrangian relaxation of the MKP is given by equations (4.1) through (4.3); equations (4.4) through (4.7) represent the surrogate relaxation of the MKP.

Maximize,
\[
Z = \sum_{i=1}^{m} \sum_{j=1}^{n} p_j x_{ij} - \sum_{j=1}^{n} \lambda_j \left( \sum_{i=1}^{m} x_{ij} - 1 \right) \qquad (4.1)
\]
subject to,
\[
\sum_{j=1}^{n} w_j x_{ij} \le c_i, \quad i = 1, ..., m \qquad (4.2)
\]
\[
x_{ij} \in \{0, 1\}, \quad i = 1, ..., m, \; j = 1, ..., n \qquad (4.3)
\]

Maximize,
\[
Z = \sum_{i=1}^{m} \sum_{j=1}^{n} p_j x_{ij} \qquad (4.4)
\]
subject to,
\[
\sum_{i=1}^{m} \sum_{j=1}^{n} \pi_i w_j x_{ij} \le \sum_{i=1}^{m} \pi_i c_i \qquad (4.5)
\]
\[
\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, ..., n \qquad (4.6)
\]
\[
x_{ij} \in \{0, 1\}, \quad i = 1, ..., m, \; j = 1, ..., n \qquad (4.7)
\]

The Lagrange (λ_j) and surrogate (π_i) multipliers for constraints (3.2) and (3.3) of the MKP are defined by Equations (4.8) and (4.9), respectively.
\[
\lambda_j = \begin{cases} p_j - w_j (p_t / w_t), & \text{if } j < t \\ 0, & \text{if } j \ge t \end{cases} \qquad (4.8)
\]
and
\[
\pi_i = p_t / w_t, \quad i = 1, ..., m \qquad (4.9)
\]
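Equations (4.8) and (4.9) are simple to compute once the items are ordered. The sketch below assumes (as is standard for these multipliers, though not stated here) that items are sorted by non-increasing profit-to-weight ratio and that t is the index of the critical (break) item; the function name is invented.

```python
def hung_fisk_multipliers(profits, weights, t):
    """Lagrange and surrogate multipliers of Equations (4.8)-(4.9).
    Assumes items sorted by non-increasing p_j / w_j and that t is the
    index of the critical item. Illustrative sketch.
    """
    ratio = profits[t] / weights[t]          # p_t / w_t
    lam = [profits[j] - weights[j] * ratio if j < t else 0.0
           for j in range(len(profits))]     # Equation (4.8)
    pi = ratio                               # Equation (4.9), same for every i
    return lam, pi
```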

Compute the objective value z̄_k using the objective function of the original MKP. If z̄_k ≤ z∗, go to step 5. If the solution is feasible to the original MKP, go to step 4; otherwise go to step 3.

Step 3: (Branching) Select item j and assign it to one of the knapsacks. The branching item selected depends on the type of relaxation employed in Step 2. For the Lagrangian relaxation, select the item in F appearing in the most single-knapsack solutions used in calculating Equation (4.1); in case of ties, the lowest-indexed item is selected. For the surrogate relaxation, the lowest-indexed item in F is selected. Record the value of the items assigned to the knapsacks. If all items are assigned, go to step 4; otherwise update S, F, and k and repeat step 3.

Step 4: Update z∗, its corresponding solution, and go to step 5.

Step 5: (Backtracking) Find the smallest tree-level k0 such that z̄_{k0} > z∗. Denote the level preceding k0 as k−1 = k0 − 1 and the corresponding item index in S as j_{k−1}. If k−1 ≤ 0, terminate the procedure; otherwise set k = k−1 and free all item indices in S following j_{k−1}. Assign item j_{k−1} to a knapsack different from the knapsacks to which it has previously been assigned, and go to step 2. If item j_{k−1} has previously been excluded, set z̄_{k−1} = −∞ and repeat step 5.

Martello and Toth (1985) Heuristic (MTM)

Martello and Toth (1985) developed an algorithm for the MKP using an enumerative scheme in which each node of the decision tree generates two branches: one assigning an item to a knapsack and one excluding the item from that knapsack. They denote by S_i (i = 1, ..., m) the stack of items currently assigned to knapsack i, with S = {S_1, S_2, ..., S_m}. At each iteration, the MTM algorithm inserts an item j selected for knapsack i. At the end of each iteration, knapsacks 1, ..., i − 1 are completely loaded, knapsack i is partially loaded, and knapsacks i + 1, ..., m are empty.

Upper bounds are computed using surrogate relaxation, and lower bounds are computed via a heuristic approach. The heuristic procedure finds an optimal solution for the first knapsack, then, excluding the items placed in the first knapsack, finds an optimal solution for the second knapsack. The procedure continues until a constrained optimal solution for all the knapsacks is found. x̄_{ij} (i = 1, ..., m; j = 1, ..., n) denotes the current solution.
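The sequential knapsack-by-knapsack lower bound can be sketched as below. This is an illustration of the idea only (not Martello and Toth's implementation); a plain 0/1-knapsack dynamic program stands in for whatever exact single-knapsack routine is used.

```python
def mtm_lower_bound(profits, weights, capacities):
    """Lower bound in the MTM spirit: optimally fill knapsack 1, remove
    the chosen items, fill knapsack 2, and so on. Illustrative sketch.
    """
    def knapsack(idx, cap):
        # DP keyed by used weight; each value is (profit, chosen indices)
        best = {0: (0, [])}
        for j in idx:
            new = dict(best)
            for w, (p, chosen) in best.items():
                w2 = w + weights[j]
                if w2 <= cap and (w2 not in new or new[w2][0] < p + profits[j]):
                    new[w2] = (p + profits[j], chosen + [j])
            best = new
        return max(best.values())           # highest profit over all weights

    remaining = list(range(len(profits)))
    total, assignment = 0, []
    for cap in capacities:
        p, chosen = knapsack(remaining, cap)
        total += p
        assignment.append(chosen)
        remaining = [j for j in remaining if j not in chosen]
    return total, assignment
```

Because each knapsack is filled greedily with respect to the items left over, the result is a feasible solution and hence a valid lower bound, generally below the true optimum.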

An overview of the MTM algorithm follows:

Step 1: Initialize z∗ = 0, S_i = ∅ for i = 1, ..., m, and i = 1. Compute the upper bound U.

Step 2: Compute the lower bound L and the heuristic solution x̄. If L ≤ z∗, go to step 3; otherwise set z∗ = L and x∗ = x̄. If L = U, go to step 4.

Step 3: (Updating) Let j = min{u : u ∉ S_i and x̄_{iu} = 1}. If no such j exists, set i = i + 1; if i < m, repeat step 3, otherwise set i = m − 1 and go to step 4. Insert j in S_i, set x_{ij} = 1, and compute the upper bound U.

Step 4: (Backtracking) Let j be the last item inserted in S_i such that x_{ij} = 1. If no such j exists, set S_i = ∅ and i = i − 1; if i = 0, stop, otherwise repeat step 4. Set x_{ij} = 0 and remove from S_i all items inserted after j. Compute the upper bound U. If U ≤ z∗, repeat step 4; otherwise go to step 2.

The MTM algorithm checks for the trivial cases defined in Equations (4.10) through (4.12). Equation (4.10) ensures each item j fits into at least one knapsack; otherwise it may be removed from the problem. If Equation (4.11) is violated, the smallest knapsack may be discarded since no item fits into it. Equation (4.12) avoids the trivial case in which all items fit into the largest knapsack. The algorithm assumes that the items are arranged in decreasing order of profit per unit weight and the knapsacks in increasing order of capacity.

\[
\max_{j=1,...,n} w_j \le \max_{i=1,...,m} c_i \qquad (4.10)
\]
\[
\min_{i=1,...,m} c_i \ge \min_{j=1,...,n} w_j \qquad (4.11)
\]
\[
\sum_{j=1}^{n} w_j \ge \max_{i=1,...,m} c_i \qquad (4.12)
\]
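The three preprocessing checks can be implemented directly; the sketch below uses an invented function name and returns the reduced data rather than modifying the instance in place.

```python
def mtm_preprocess(weights, capacities):
    """Trivial-case checks of Equations (4.10)-(4.12). Illustrative sketch:
    returns the reduced item and knapsack lists plus a flag indicating
    whether the instance collapses to a single knapsack problem.
    """
    # (4.10): drop any item that fits into no knapsack
    cmax = max(capacities)
    items = [w for w in weights if w <= cmax]
    # (4.11): drop any knapsack too small to hold even the lightest item
    wmin = min(items)
    caps = [c for c in capacities if c >= wmin]
    # (4.12): if everything fits into the largest knapsack, the instance
    # reduces to a single 0/1 knapsack problem
    single_knapsack = sum(items) <= cmax
    return items, caps, single_knapsack
```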

Pisinger (1999b) Heuristic (PISINGERMKP)

The pseudocode for the recursive branch-and-bound algorithm developed by Pisinger (1999b) is as follows:

P and W are the profit and weight sums of the items currently assigned to knapsacks. x̄ is the solution obtained by solving the surrogate relaxation of the MKP, which is split into m knapsacks by solving a series of subset-sum problems. y_{ij} = 1 indicates that item j is assigned to knapsack i, and y_{ij} = 0 otherwise.

Step 1: For i = 1, ..., m and j = 1, ..., n, initialize d_j = 1, x_{ij} = 0, y_{ij} = 0, and z∗ = 0. Order the knapsacks by decreasing capacities c_1, c_2, ..., c_m. Go to step 2.

Step 2: Tighten the knapsack capacities c_i by solving m subset-sum problems on items h + 1, ..., n. Solve the surrogate relaxation of the MKP, with x̄ as the solution and u as the objective value. Go to step 3.

Step 3: If P + u > z∗, sort the items given by x̄ according to non-increasing weights. Split the solution x̄ into m knapsacks by solving a series of subset-sum problems on the items j with x̄_j = 1. Let y_{ij} be the optimal filling of c_i with corresponding profit sum z_i. Improve the heuristic solution for the knapsacks not completely filled. Go to step 4.

Step 4: If \(P + \sum_{i=1}^{m} z_i > z^*\), copy y to x∗ and set \(z^* = P + \sum_{i=1}^{m} z_i\). Go to step 5.

Step 5: If P + u > z∗, reduce the items by upper bound tests and swap the reduced items to the first positions, increasing h. Let i be the smallest knapsack with c_i > 0. Solve a KP with c = c_i defined on the free variables, and let x̄ be its solution vector. Choose the item with the largest profit-to-weight ratio among those with x̄_j = 1; denote this item l. Swap item l to position h + 1 and set j = h + 1. Assign item j to knapsack i (y_{ij} = 1). Recursively repeat from step 2, then go to step 6.

Step 6: Exclude item j from knapsack i (y_{ij} = 0). Set d̄ = d_j and d_j = i + 1. Recursively repeat from step 2. Find item j again and restore d_j = d̄.
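The subset-sum splitting used in Step 3 can be sketched as follows. This is an illustration of the splitting idea only, not Pisinger's implementation: the surrogate solution's items are distributed one knapsack at a time, each time choosing a subset whose weight comes as close to the capacity as possible.

```python
def split_by_subset_sum(weights, capacities):
    """Distribute items across knapsacks by solving one subset-sum
    problem per knapsack, mimicking the splitting in Step 3 of
    PISINGERMKP. Illustrative sketch.
    """
    def subset_sum(idx, cap):
        # best[w] holds one subset of idx with total weight w <= cap
        best = {0: []}
        for j in idx:
            for w, chosen in list(best.items()):
                w2 = w + weights[j]
                if w2 <= cap and w2 not in best:
                    best[w2] = chosen + [j]
        return best[max(best)]      # subset with the largest feasible weight

    remaining = list(range(len(weights)))
    fillings = []
    for cap in capacities:
        chosen = subset_sum(remaining, cap)
        fillings.append(chosen)
        remaining = [j for j in remaining if j not in chosen]
    return fillings
```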

4.2.2 Legacy Heuristics for Multiple Choice Knapsack Problems (MCKP)

The concept of dominance plays an important role in solving the MCKP by deleting those items from the classes that will never be chosen in an optimal solution. Two criteria of dominance as referenced by Sinha and Zoltners (1979) are:

Dominance Criteria 1

If two items j and k from class N_i satisfy conditions (4.13) and (4.14), then item j dominates item k.

wij ≤ wik, i = 1, ..., m (4.13)

pij ≥ pik (4.14)

This criterion applies when item j is at least as valuable as item k in class N_i while consuming no more resource.

Dominance Criteria 2

If three items j, k, and l from class N_i with w_{ij} < w_{ik} < w_{il} and p_{ij} < p_{ik} < p_{il} satisfy condition (4.15), then item k is LP-dominated by items j and l.

(pil − pik)/(wil − wik) ≥ (pik − pij)/(wik − wij) (4.15)
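Both criteria can be applied to a class in one pass, as in the sketch below. The function name and (profit, weight)-pair representation are assumptions; the convex-hull sweep is the standard way of applying condition (4.15) after simple dominance has been removed.

```python
def reduce_class(items):
    """Apply dominance criteria (4.13)-(4.15) to one class.
    items: list of (profit, weight) pairs. Returns the items surviving
    both reductions, sorted by weight. Illustrative sketch.
    """
    # Simple dominance (4.13)-(4.14): after sorting by weight (heaviest
    # profit first on ties), keep only items that strictly raise profit.
    items = sorted(items, key=lambda pw: (pw[1], -pw[0]))
    kept, best_p = [], float("-inf")
    for p, w in items:
        if p > best_p:
            kept.append((p, w))
            best_p = p
    # LP-dominance (4.15): discard items below the upper convex hull
    # of the (weight, profit) points.
    hull = []
    for p, w in kept:
        while len(hull) >= 2:
            (p1, w1), (p2, w2) = hull[-2], hull[-1]
            # middle item (p2, w2) is LP-dominated when the slope past it
            # is at least the slope up to it, i.e. condition (4.15)
            if (p2 - p1) * (w - w2) <= (p - p2) * (w2 - w1):
                hull.pop()
            else:
                break
        hull.append((p, w))
    return hull
```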

Zemel (1984) Heuristic (ZEMEL)

Zemel (1984) developed a partitioning algorithm that relies on the property that, to solve the MCKP, it is sufficient to find the optimal slope, defined as the incremental efficiency of the last item added by the algorithm. The concept of the optimal slope, as explained by Kellerer et al. (2004), is applied in Steps 3, 4 and 5 below. A slope is evaluated by determining the extreme items of every class using Equations (4.17) and (4.18). The ZEMEL algorithm finds the optimal slope α∗, from which an LP-optimal solution of the MCKP is constructed as explained below.

\[
M_i(\alpha) = \arg\max_{j \in N_i} \left( p_{ij} - \alpha w_{ij} \right), \quad i = 1, ..., m \qquad (4.16)
\]
\[
a_i = \arg\min_{j \in M_i(\alpha^*)} w_{ij}, \quad i = 1, ..., m \qquad (4.17)
\]
\[
b_i = \arg\max_{j \in M_i(\alpha^*)} w_{ij}, \quad i = 1, ..., m \qquad (4.18)
\]

For every class N_i, the set M_i(α∗) is determined, where M_i(α), found using Equation (4.16), is the set of extreme items of class i for slope α. For a given class i, M_i(α) may contain more than one item; let a_i and b_i be the items with smallest and largest weight in M_i, given by Equations (4.17) and (4.18), respectively. Select item a_i from every class N_i and compute the residual knapsack capacity \( \bar{c} = c - \sum_{i=1}^{m} w_{i a_i} \). Arbitrarily select a class and exchange its item a_i with b_i, updating the residual capacity, and repeat until the next exchange would make the residual capacity negative for some class i. For that class, set \( x_{i b_i} = \bar{c} / (w_{i b_i} - w_{i a_i}) \) and \( x_{i a_i} = 1 - x_{i b_i} \). This is an LP-optimal solution with a_i and b_i being extreme items.
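Computing M_i(α), a_i, and b_i for a candidate slope is straightforward; the sketch below uses invented names and a small floating-point tolerance to collect all maximizers of p − αw.

```python
def extreme_items(classes, alpha, eps=1e-9):
    """For each class, compute M_i(alpha), a_i and b_i of Equations
    (4.16)-(4.18). classes[i] is a list of (p, w) pairs. Sketch only.
    """
    out = []
    for items in classes:
        best = max(p - alpha * w for p, w in items)
        # M_i(alpha): every item attaining the maximum of p - alpha * w
        M = [(p, w) for p, w in items if abs(p - alpha * w - best) < eps]
        a = min(M, key=lambda pw: pw[1])    # a_i: smallest weight in M_i
        b = max(M, key=lambda pw: pw[1])    # b_i: largest weight in M_i
        out.append((M, a, b))
    return out
```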

56 An outline of the ZEMEL algorithm is as follows:

Step 1: For all classes N_i, pair the items as (ij, ik), ordering each pair so that w_{ij} ≤ w_{ik} and breaking ties using p_{ij} ≥ p_{ik}. If item j dominates item k in class N_i, delete item k from class N_i and pair item j with another item from class N_i. Continue until all the items in class N_i have been paired. For an odd number of items, one item remains unpaired.

Step 2: For all classes N_i, if the class has only one item j remaining, then reduce the capacity to c = c − w_ij and fathom class N_i.

Step 3: For all pairs of items (ij, ik), compute the slope α_ijk = (p_ik − p_ij)/(w_ik − w_ij). Let α denote the median of all the slopes α_ijk.

Step 4: For i = 1, ..., m, derive M_i(α), a_i, and b_i using Equations (4.16), (4.17), and (4.18).

Step 5: If Σ_{i=1}^{m} w_{i a_i} ≤ c ≤ Σ_{i=1}^{m} w_{i b_i}, then α is the optimal slope α*. Stop.

Step 6: If Σ_{i=1}^{m} w_{i a_i} ≥ c, then for the pairs (ij, ik) with α_ijk ≤ α, delete item k. Go to Step 1.

Step 7: If Σ_{i=1}^{m} w_{i b_i} < c, then for the pairs (ij, ik) with α_ijk ≥ α, delete item j. Go to Step 1.

Dyer et al. (1984) Heuristic (DYERKAYALWALKER)

Dyer et al. (1984) developed a branch-and-bound algorithm to solve the MCKP. The algorithm solves the LP-relaxation and reduces the classes before solving the problem through branch-and- bound. An overview of the DYERKAYALWALKER heuristic is as follows:

Step 1: Remove LP-dominated items by ordering the items in each class N_i by increasing weights and testing for conditions (4.13) and (4.14). Let R_i contain the remaining items, ordered according to w_{i1} < w_{i2} < ... < w_{i r_i}, where r_i is the size of R_i.

Step 2: Solve the LP-relaxation to derive an upper bound.

• Develop a KP instance by setting p̄_ij = p_ij − p_{i,j−1} and w̄_ij = w_ij − w_{i,j−1} for each class R_i and j = 2, ..., r_i. Here p̄_ij is the incremental profit, a measure of the gain if item j is selected instead of item j − 1 in class R_i, and w̄_ij is the incremental weight, which measures the increase in weight if item j is selected instead of item j − 1 in class R_i. Compute the residual capacity c̄ = c − Σ_{i=1}^{m} w_{i1}. The indices used here are the indices from R_i.

• Sort the items according to decreasing incremental efficiencies e¯ij =p ¯ij/w¯ij. The indices

associated here are the original indices.

• Fill the knapsack with capacity c̄. Initialize z = Σ_{i=1}^{m} p_{i1}. Every time an item is inserted, set z = z + p̄_ij, c̄ = c̄ − w̄_ij, x_ij = 1, and x_{i,j−1} = 0.

• Let t ∈ N_s be the item exceeding the knapsack capacity. Set x_st = c̄/w̄_st, x_{s,t−1} = 1 − x_st, and z = z + p̄_st x_st. Return the LP solution x and z.
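The greedy LP fill in Step 2 can be sketched compactly. The code below is an illustrative implementation, not the authors' own: it assumes each class has already been reduced to LP-undominated items sorted by increasing weight, which is what guarantees that the globally sorted increments are taken in a valid order.

```python
# Hedged sketch of Step 2: the MCKP LP-relaxation bound via incremental
# efficiencies. `classes` holds LP-undominated (profit, weight) items per
# class, sorted by increasing weight; c is the knapsack capacity.

def mckp_lp_bound(classes, c):
    # start from the lightest item of every class
    z = sum(items[0][0] for items in classes)
    c_bar = c - sum(items[0][1] for items in classes)
    # build increments (efficiency, p_bar, w_bar) over all classes
    incs = []
    for items in classes:
        for j in range(1, len(items)):
            p_bar = items[j][0] - items[j - 1][0]
            w_bar = items[j][1] - items[j - 1][1]
            incs.append((p_bar / w_bar, p_bar, w_bar))
    incs.sort(reverse=True)  # decreasing incremental efficiency
    for eff, p_bar, w_bar in incs:
        if w_bar <= c_bar:
            c_bar -= w_bar
            z += p_bar
        else:
            z += eff * c_bar  # fractional break increment (last bullet)
            break
    return z
```

With two classes [(2,1),(5,3)] and [(1,1),(4,2)] and c = 5, the bound equals 9, which is also the integral optimum here.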

Step 3: Reduce the classes by relaxing constraint (3.8). The upper bound U¹_ij on the MCKP with the additional constraint x_ij = 1 is calculated using Equation (4.19).

U¹_ij = p̄ − p_{i t_i} + p_ij + α*(c − w̄ + w_{i t_i} − w_ij)   (4.19)

If U¹_ij ≤ z, then x_ij is fixed to 0. Similarly, test the upper bound U⁰_ij obtained with the constraint x_ij = 0; in this case x_ij is fixed to 1 and the other decision variables in the class are zero. If the reduced set has only one item j left, the class is fathomed by fixing x_ij = 1.

Step 4: Solve the rest of the problem using branch-and-bound.

4.2.3 Legacy Heuristics for Multiple-choice Multi-dimensional Knapsack Problems (MMKP)

Moser et al. (1997) Heuristic (MOSER)

Moser et al. (1997) solved the MMKP using Lagrange multipliers. An outline of the MOSER heuristic is as follows:

Input elements: profit values of the items p_ij, i = 1, ..., m, j = 1, ..., n; item weights w_ijk, i = 1, ..., m, j = 1, ..., n, k = 1, ..., l; knapsack capacities c_k, k = 1, ..., l.

Output elements: selected elements w_{iρ_i k}, i = 1, ..., m, k = 1, ..., l, with ρ_i the selected item from class i.

Step 1: Initialization and normalization of weights

• Reset the Lagrange multipliers µ_k = 0, k = 1, ..., l.

• Find the index of the most valuable item in each class, ρ_i = arg max_{j=1,...,n} p_ij, i = 1, ..., m, and select this element.

• Normalize the weights: w_ijk = w_ijk/c_k, i = 1, ..., m, j = 1, ..., n, k = 1, ..., l.

• Compute the constraint violations y_k = Σ_{i=1}^{m} w_{iρ_i k}, k = 1, ..., l.

Step 2: Repeat this step, exchanging elements, until the constraint violations are resolved, i.e. no more elements can be exchanged and y_k ≤ 1, k = 1, ..., l.

• Determine the most violated constraint by finding the index k′ of the largest y_k > 1, k = 1, ..., l.

• Find the item to be exchanged by computing the increase δ_ij of the Lagrange multiplier µ_{k′} for all the non-selected items in every class, relative to the selected item of the class, using Equation (4.20):

δ_ij = (p_{iρ_i} − p_ij − Σ_{k=1}^{l} µ_k (w_{iρ_i k} − w_ijk)) / (w_{iρ_i k′} − w_{ij k′}), i = 1, ..., m, j = 1, ..., n   (4.20)

Find the class i′ and index j′ of the item to which the smallest δ_ij belongs.

• Reevaluate Lagrange multipliers and constraint violations using (4.21) and (4.22).

µ_{k′} = µ_{k′} + δ_{i′j′}   (4.21)

y_k = y_k − w_{i′ρ_{i′}k} + w_{i′j′k}, k = 1, ..., l   (4.22)

• Remove the selected item with index ρ_{i′} from class i′ and set the item with index j′ as the new selected item of class i′: ρ_{i′} = j′.

Step 3: Improve the solution by repeating this step until no more elements can be exchanged.

• Compute the knapsack value increase δ_ij using (4.23) for all non-selected items in every class, relative to the profit value p_{iρ_i} of the selected element.

δ_ij = p_ij − p_{iρ_i}, if p_ij − p_{iρ_i} > 0 and y_k − w_{iρ_i k} + w_ijk ≤ 1 for k = 1, ..., l; δ_ij = 0 otherwise.   (4.23)

• Find the best exchangeable item by finding the class i′ and index j′ with the largest knapsack value increase δ_ij.

• Remove the selected item with index ρ_{i′} from class i′ and select the item with index j′ of class i′: ρ_{i′} = j′.

Step 4: Compute the result y_k = Σ_{i=1}^{m} w_{iρ_i k}, k = 1, ..., l. If y_k ≤ 1 for all k = 1, ..., l, the problem is solved and the solution is w_{iρ_i k}, i = 1, ..., m, k = 1, ..., l. Otherwise the problem cannot be solved.
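The improvement step of Equation (4.23) amounts to a feasibility-preserving swap search, which can be sketched as follows. This is an illustrative fragment under the assumption that the weights have already been normalized by the capacities (so feasibility is y_k ≤ 1); the function name and nested-list layout are mine.

```python
# Hedged sketch of MOSER's improvement step (Equation 4.23): with weights
# already normalized by the capacities, find the profit-improving swap that
# keeps every constraint usage y_k <= 1. All names are illustrative.

def best_improving_swap(p, w, rho, y):
    best = (0, None, None)  # (gain, class index, item index)
    for i in range(len(p)):
        for j in range(len(p[i])):
            gain = p[i][j] - p[i][rho[i]]
            if gain <= 0:
                continue
            # the swap must keep every normalized usage within capacity
            feasible = all(y[k] - w[i][rho[i]][k] + w[i][j][k] <= 1
                           for k in range(len(y)))
            if feasible and gain > best[0]:
                best = (gain, i, j)
    return best
```

Step 3 of the outline would call this repeatedly, applying the returned swap until no positive gain remains.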

Khan et al. (2002) Heuristic (HEU)

Khan et al. (2002) used Toyoda’s concept of aggregate resource (Toyoda 1975) for selecting items. Their heuristic has three steps as described below. Let c be the total resource usage vector, y be the current knapsack usage vector, ρ be the current solution vector, ρ/Z be the solution vector after exchange Z from ρ, ∆a be the aggregate resource savings, ∆t be the total-per-unit-resource savings, ∆p be the value gain per unit of extra resource, and U be the total value of the current pick.

The vectors c and y contain values for knapsack utilization.

Step 1: Find a feasible solution

1.1 Pick the lowest-valued item from every class: let ρ_i = 1, i = 1, ..., m, denote the selected item in every class i. Compute y_k = Σ_{i=1}^{m} w_{iρ_i k}, k = 1, ..., l.

1.2 Find α where y_α/c_α = max_{k=1,...,l} y_k/c_k. If the initial pick is feasible, i.e. y_α/c_α ≤ 1, go to Step 2; otherwise proceed to Step 1.3.

1.3 Consider an exchange X = (i, j) from ρ, picking item j from class i instead of item ρ_i. Define Δa(ρ, i, j) = Σ_{k=1}^{l} (w_{iρ_i k} − w_ijk) · y_k / |ȳ|, i = 1, ..., m, j = 1, ..., n, and let ρ′ = ρ|X. Find i′ and j′ such that Δa(ρ, i′, j′) = max_{ij} Δa(ρ, i, j) and y_α(ρ′) < y_α(ρ), and such that

if k ≠ α and y_k(ρ) ≤ c_k(ρ) then y_k(ρ′) ≤ c_k(ρ′), and

if k ≠ α and y_k(ρ) > c_k(ρ) then y_k(ρ′) ≤ c_k(ρ′).

If i′ and j′ are found, set ρ = ρ|(i′, j′) and go to Step 1.2; otherwise exit the procedure with no solution found.

Step 2: Improve the initial feasible solution iteratively using feasible upgrades

2.1 Consider an upgrade X = (i, j) where item j is picked from class i instead of item ρ_i. Find a feasible upgrade X′ = (i′, j′) that maximizes Δa(ρ, i, j). If X′ is found and Δa(ρ, i′, j′) > 0, set ρ = (ρ|X′) and repeat Step 2.1.

2.2 Find a feasible upgrade X′ = (i′, j′) that maximizes Δp(ρ, i, j), where Δp(ρ, i, j) = (p_{iρ_i} − p_ij)/Δa(ρ, i, j). If X′ is found and Δp(ρ, i′, j′) > 0, set ρ = (ρ|X′) and go to Step 2.1.

Repeat until no more feasible upgrades are possible and then go to Step 3.

Step 3: Iterative improvement of the upgraded solution using upgrades followed by one or more downgrade(s)

3.1 Find an upgrade Y = (i′, j′) that maximizes Δp′(ρ, i, j), where Δp′(ρ, i, j) = (p_{iρ_i} − p_ij)/Δt′(ρ, i, j) and Δt′(ρ, i, j) = Σ_{k=1}^{l} (w_{iρ_i k} − w_ijk)/(c_k − y_k). Set ρ′ = ρ|Y.

3.2 Find a downgrade Y′ = (i″, j″) that minimizes Δp″(ρ′, i, j) such that U(ρ′|Y′) > U(ρ), where Δp″(ρ, i, j) = (p_{iρ_i} − p_ij)/Δt″(ρ, i, j) and Δt″(ρ, i, j) = Σ_{k=1}^{l} (w_{iρ_i k} − w_ijk)/(c_k − y_k).

If Y′ is found and (ρ′|Y′) is feasible, set ρ = (ρ′|Y′) and go to Step 2.

If Y′ is found and (ρ′|Y′) is not feasible, set ρ = (ρ′|Y′) and go to Step 3.2.

3.3 Exit procedure with the solution vector ρ.
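The aggregate-resource measure Δa defined in Step 1.3, which HEU reuses in its later steps, can be sketched directly. The function below is illustrative: it assumes |ȳ| denotes the Euclidean norm of the usage vector y, which the text does not state explicitly.

```python
# Illustrative computation of the aggregate resource saving Delta_a from
# Step 1.3: each constraint's weight change is scaled by the current usage
# y_k, then normalized by |y-bar| (Euclidean norm assumed here).
import math

def delta_a(w, rho, y, i, j):
    norm = math.sqrt(sum(v * v for v in y))  # |y-bar|, an assumption
    return sum((w[i][rho[i]][k] - w[i][j][k]) * y[k]
               for k in range(len(y))) / norm
```

A positive Δa means the exchange (i, j) relieves the currently loaded constraints, which is why HEU ranks candidate upgrades by it.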

Hifi et al. (2004) Heuristic (CPCCP)

Hifi et al. (2004) developed a constructive and complementary search approach for solving the MMKP. The initial feasible solution is obtained using a constructive procedure (CP), a greedy approach with DROP and ADD phases. The CP is detailed as follows. Let S be the solution vector, ρ_i the selected item j from class i, x_{iρ_i} an indicator of whether or not the item from a class is selected, c_k the capacity of knapsack k, and y_k the total resource used in knapsack k.

Step 1: Initialization of the solution procedure

1.1 Define the pseudo-utility ratio for every item j of class i as u_ij = p_ij / Σ_{k=1}^{l} y_k w_ijk.

For every class i = 1, ..., m

Find the item j′ with max{u_ij, j = 1, ..., n}

Set S_i ← j′

Set ρ_i = j′ and x_{iρ_i} = 1

Set y_k = Σ_{i=1}^{m} w_{iρ_i k}

EndFor

The solution vector is S = (S_1, ..., S_m). Go to Step 1.2.

1.2 Main Loop

DROP Phase

While (y_k > c_k for some k = 1, ..., l)

Set k′ ← arg max_{k=1,...,l} {y_k}

Set i′ ← arg max_{i=1,...,m} {w_{iρ_i k′}}

Set j′ = ρ_{i′} and x_{i′j′} = 0

Compute y_k = y_k − w_{i′ρ_{i′}k}, k = 1, ..., l

ADD Phase

For every item j = 1, ..., n

If (j ≠ j′ and y_k + w_{i′jk} < c_k, k = 1, ..., l) then

x_{i′j} = 1

j′ = j

ρ_{i′} = j′

y_k = y_k + w_{i′ρ_{i′}k}, k = 1, ..., l

S = (ρ_{i′}; ρ_i, i ≠ i′, i = 1, ..., m) is a feasible solution

Exit with solution vector S

EndIf

EndFor

If the obtained solution is not feasible, set j_{i′} ← arg min_{j=1,...,n} {w_{i′jk′}}, ρ_{i′} = j_{i′}, and x_{i′ρ_{i′}} = 1

EndWhile

Return solution vector S with value Z(S)

The Complementary Constructive Procedure (CCP) tries to iteratively improve the initial feasible solution obtained using CP. It applies a local swap strategy to picked and non-picked items while maintaining feasibility. The CCP is described below. Let S* be an improved solution vector with a solution value of Z(S*).

Step 1: Set S = (S_1, ..., S_m) from CP and set S* ← S.

Step 2: An iterative procedure calls the local swap strategy to try to improve the initial feasible solution from CP.

While not StoppingCondition() do

For every class i = 1, ..., m

j′_i ← LocalSwapSearch(j_i, i), where j_i and j′_i are the old and new items of class i, respectively

S_i ← j′_i

S ← (S_1, ..., j′_i, ..., S_m)

If Z(S) > Z(S*) then

S* ← (S_1, ..., j′_i, ..., S_m)

EndIf

EndFor

EndWhile

Return solution vector S* with value Z(S*)

The LocalSwapSearch() procedure initializes the best item to swap by setting value ← p_{iS_i} and s_i ← S_i, where p_{iS_i} is the profit of the old item from class i and s_i is a candidate item in class i to be swapped. The LocalSwapSearch() procedure is:

For every item j = 1, ..., n with j ≠ S_i do

If (p_ij > value and y_k − w_{iS_i k} + w_ijk < c_k, k = 1, ..., l) then

Set value ← p_ij

Set s_i ← j

EndIf

EndFor

Return s_i as the best item for a local swap.
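The procedure above translates almost line for line into code. The sketch below is a near-literal transcription; the nested-list data layout is an assumption on my part.

```python
# Near-literal transcription of LocalSwapSearch() for class i: scan the
# non-selected items and keep the most profitable one whose swap stays
# within every capacity. Data layout (nested lists) is an assumption.

def local_swap_search(p, w, c, y, S, i):
    value, s_i = p[i][S[i]], S[i]
    for j in range(len(p[i])):
        if j == S[i]:
            continue
        fits = all(y[k] - w[i][S[i]][k] + w[i][j][k] < c[k]
                   for k in range(len(c)))
        if p[i][j] > value and fits:
            value, s_i = p[i][j], j
    return s_i  # best candidate item for a local swap
```

CCP would then assign S_i ← local_swap_search(...) for each class in turn, keeping the change only when it improves Z(S).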

Hifi et al. (2004) Heuristic (DerAlgo)

DerAlgo, developed by Hifi et al. (2004), is a two-phase procedure using a penalty and normalization strategy based on a guided local search approach. The starting point of the algorithm is the feasible solution obtained by applying the CP discussed above. This solution is considered the best feasible solution obtained before the start of the algorithm, without any penalty applied. The algorithm uses the LocalSwapSearch() from the CCP heuristic to improve the initial solution.

A penalty phase is applied if the solution does not improve after a certain number of iterations. A penalty parameter Π ∈ [0, 1] is used to transform the profit values of the items and to find a good neighborhood for improving the solution. The algorithm randomly picks a class and determines the index of that class's item in the solution vector. The profit value of this item is penalized and the solution is updated. The procedure then starts with a new feasible solution for the original problem and seeks a good neighborhood in which to improve the current solution. This helps the search diversify by releasing the current solution from a local optimum and modifying the search trajectory. Normalization transforms the penalized solution back to a normal feasible solution by resetting the profit values of the current solution to the original profit values. The penalty and normalization phases are summarized below.

Let 0 < Π < 1 be the penalty coefficient, ρ the current solution vector in the penalized phase, ∆ the depth parameter for penalization, and D the diameter parameter for exploration. The depth parameter sets the number of items to penalize, while the diameter parameter controls the exploration around the best solutions obtained so far. An outline of DerAlgo is as follows:

Step 1: Initialization of the solution procedure

Set S* ← S := CP() and V(ρ) ← Z(S*). Initialize the depth, diameter, and phase parameters ∆, D, and phase, respectively, as ∆ ← 0, D ← 0, and phase ← Normal_Phase.

Step 2: Main Step using the two-phase procedure of penalization and normalization of solution

While not StoppingCondition() do

S:=CCP(S)

If V (ρ) ≤ Z(S) then

If (phase = Normal_Phase) then

Set S∗ ← S and V (ρ) ← Z(S∗)

Else

Set S ← Normalize(S, ρ, Π)

Set S∗ ← S and V (ρ) ← Z(S∗)

EndIf

Else

If (phase = Normal_Phase) then

S ← P enalize(S, V (ρ), Π, ∆)

Else

Set S ← Normalize(S, ρ, Π) and V (ρ) ← Z(S)

S ← P enalize(S, V (ρ), Π, ∆)

EndIf

EndIf

Increment(D)

Set ∆ ← Get Depth(∆, D, n)

EndWhile

Return S∗ with value Z(S∗)

The Penalize() procedure is as follows:

Step 1: Initialization of Penalize() procedure

Set the initial solution S∗ ← S and V (ρ) ← Z(S). Set Counter = 0, where Counter is the variable used to control the depth parameter ∆(0 ≤ Counter ≤ ∆).

Step 2: Main loop for Penalize() procedure

While (Counter ≤ ∆) do

Randomly select a class: i ← GetClass()

ji ← ρi

Z(S) ← Z(S) − p_{iρ_i} + Π × p_{iρ_i}

p_{iρ_i} ← Π × p_{iρ_i}

Increment(Counter)

EndWhile

Return S with value Z(S) as the penalized current solution

The Normalize() procedure normalizes the improved penalized solutions. This procedure is as follows:

Step 1: Initialization of the Normalize() procedure

Set the initial solution S* ← S and V(ρ) ← O(S′).

Step 2: Main loop for the Normalize() procedure

For i = 1, ..., m do

ji ← ρi

O(S′) ← O(S′) − p_{iρ_i} + (1/Π) × p_{iρ_i}

p_{iρ_i} = (1/Π) × p_{iρ_i}

EndFor

Return S with value O(S) as the normalized current solution
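The penalty/normalization pair is simple enough to sketch as two mutually inverse operations. The code below is a hedged illustration, not the authors' implementation: it assumes profits are stored in a mutable nested list and that Normalize() must undo exactly the classes Penalize() touched.

```python
# Hedged sketch of the Penalize()/Normalize() pair in DerAlgo: Penalize()
# scales the profit of the selected item of Delta randomly chosen classes
# by Pi in (0, 1); Normalize() restores the original profits by multiplying
# the same entries by 1/Pi. `p` is mutated in place; names are illustrative.
import random

def penalize(p, rho, pi, depth, rng=random):
    touched = []
    for _ in range(depth):
        i = rng.randrange(len(p))   # GetClass(): pick a class at random
        p[i][rho[i]] *= pi          # penalize its selected item's profit
        touched.append(i)
    return touched

def normalize(p, rho, pi, touched):
    for i in touched:
        p[i][rho[i]] *= 1.0 / pi    # undo the penalty exactly
```

Because the two operations are exact inverses, a normalized solution is evaluated against the original profit values, which is what lets the main loop compare V(ρ) and Z(S) consistently across phases.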

Akbar et al. (2001) Heuristic (M-HEU and I-HEU)

The heuristic developed by Akbar et al. (2001) sorts the items of each class in increasing order of profit value, so the lower-valued items are at the top of every class and the higher-valued items at the bottom. An infeasibility ratio is defined as f_k = y_k/c_k, where y_k is the amount of the kth resource consumed (the sum of resources required by the picked items) and c_k is the amount of the available resource. The kth resource is feasible if f_k ≤ 1.

An outline of M-HEU heuristic follows:

Step 1: Find a feasible solution

1.1 Select the lowest valued item from every class.

1.2 If the resource constraints are satisfied, go to step 2, otherwise the current solution is infeasible.

1.3 Find the resource k_l = arg max_k f_k. Select a higher-valued item from any class which decreases f_{k_l}, does not increase the infeasibility factor of the other resources, maintains feasibility, and yields the highest aggregate resource consumption given by Equation (4.24). With w_ijk the amount of the kth resource consumed by the jth item of the ith class, ρ_i the index of the selected item of the ith class, and y_k the amount of the kth resource consumption, define

Δa_ij = Σ_{k=1}^{l} (w_{iρ_i k} − w_ijk) × y_k / |ȳ|   (4.24)

Δp_ij = p_{iρ_i} − p_ij   (4.25)

1.4 If an item is found in step 1.3 then go to step 1.2, otherwise no solution is found.

Step 2: Upgrading the selected feasible solution

2.1 Find a higher-valued item from a class, other than the selected item of that class, subject to the resource constraints, with the highest positive value of Δa_ij. If no such item is found, then the item with the highest Δp_ij/Δa_ij is chosen.

2.2 If no such item is found in Step 2.1, then go to Step 3; otherwise check for further items via Step 2.1.

Step 3: Find one upgrade followed by at least one downgrade

3.1 If there are higher-valued items than the selected item in any class, then find such a higher-valued item with the highest value of Δp_ij/Δa′_ij, where Δa′_ij is the ratio of increased resource requirement to available resource and is given by Equation (4.26).

Δa′_ij = Σ_{k=1}^{l} (w_{iρ_i k} − w_ijk)/(c_k − y_k)   (4.26)

3.2 Find a lower-valued item than the selected item of a class such that the downgrade gives a higher total value than the total value obtained from Step 2 and has the highest value of Δa″_ij/Δp_ij, where Δa″_ij is the ratio of decreased resource requirement to over-consumed resource and is given by Equation (4.27).

Δa″_ij = Σ_{k=1}^{l} (w_{iρ_i k} − w_ijk)/(y_k − c_k)   (4.27)

3.3 If an item is found in step 3.2 and if the item satisfies the resource constraint then check for a

better solution with step 2.

3.4 If an item is found in step 3.2 and if the item does not satisfy the resource constraint then

downgrade with step 3.2.

3.5 If no such item is found in step 3.2 then store the solution found using step 2 and terminate.

As the number of classes increases, M-HEU becomes inefficient. The authors propose an incremental heuristic, I-HEU, to solve the MMKP incrementally from an existing solution of an MMKP with a smaller number of classes. I-HEU consists of steps similar to M-HEU, with modifications in Step 1.

The modifications in step 1 are as follows:

1.1 Select the lowest valued item from every new class.

1.3 This step is similar to step 1.3 of M-HEU except that any item can be found instead of a higher

valued item.

4.3 Test Problem Analysis

A fundamental contribution of this research is the systematic use of test problems to gain insight into heuristic performance. In this section, existing test problems and test generation approaches are examined to characterize the problems and highlight shortcomings in existing problems. Test problem structure is examined in the next subsection.

4.3.1 Test Problem Analysis for Multiple Knapsack Problems (MKP)

Hung and Fisk (1978) generated test problems for the MKP consisting of up to 200 items and up to 6 knapsacks. The profit values and the weights of the items were independently generated from a discrete uniform distribution (10, 100). The knapsack capacities c_i were generated from the interval c_l ≤ c_i ≤ c_u based on Equations (4.28) and (4.29).

c_l = [0.4 (Σ_{j=1}^{n} w_j / m)]   (4.28)

c_u = [0.6 (Σ_{j=1}^{n} w_j / m)]   (4.29)

Hung and Fisk (1978) generated the final knapsack capacity c_m such that the occupancy ratio Σ_{i=1}^{m} c_i / Σ_{j=1}^{n} w_j = 0.5. The generated capacities are discarded and a new set of capacities generated if c_i < min_j w_j for some i or max_i c_i < max_j w_j. The authors used 0.5 as the occupancy ratio for all the problems generated.
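The capacity generation just described can be sketched as a short rejection-sampling routine. This is an illustrative reading of the scheme: the bracketed expressions in (4.28)-(4.29) are interpreted as integer truncation, and the function name is hypothetical.

```python
# Illustrative sketch of Hung and Fisk's capacity generation: draw the first
# m-1 capacities from [c_l, c_u] per Equations (4.28)-(4.29), then fix c_m so
# the occupancy ratio (sum of capacities over sum of weights) equals 0.5.
import random

def generate_capacities(weights, m, rng=random):
    total = sum(weights)
    c_l, c_u = int(0.4 * total / m), int(0.6 * total / m)
    while True:
        caps = [rng.randint(c_l, c_u) for _ in range(m - 1)]
        caps.append(int(0.5 * total) - sum(caps))  # force occupancy 0.5
        # rejection rule from the text: every knapsack must be usable
        if min(caps) >= min(weights) and max(caps) >= max(weights):
            return caps
```

The final capacity is whatever remainder makes the occupancy ratio exactly 0.5, which is why the rejection test is needed at all.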

Martello and Toth (1985) generated test problems by independently generating the profits p_j and weights w_j from a uniform distribution on the interval (10, 100), and the capacity values c_i from a uniform distribution satisfying conditions (4.30) and (4.31).

0 ≤ c_i ≤ 0.5 Σ_{j=1}^{n} w_j − Σ_{u=1}^{i−1} c_u, i = 1, ..., m − 1   (4.30)

c_m = 0.5 Σ_{j=1}^{n} w_j − Σ_{u=1}^{m−1} c_u   (4.31)

The authors varied the number of items n (n = 50, 100, 200, 500, 1000) and the number of knapsacks m (m = 2, 5, 10), and generated 10 problems for each setting of n and m.

Pisinger (1999b) randomly generated four different types of test problem instances using range limits R = 100, 1000, and 10000. These four test problem instances were then classified into:

• Uncorrelated test problem instances: the profit p_j and weight w_j of item j are randomly distributed in the interval (10, R).

• Weakly correlated test problem instances: Weight of item j (wj) is randomly distributed

in the interval (10, R), and the profit of item j (p_j) is randomly distributed in the interval (w_j − R/10, w_j + R/10) such that p_j ≥ 1.

• Strongly correlated test problem instances: the weight of item j (w_j) is randomly distributed in the interval (10, R), and the profit of item j (p_j) is set to w_j + 10.

• Subset-sum test problem instances: the weight of item j (w_j) is randomly distributed in the interval (10, R), and the profit of item j (p_j) is set equal to w_j.
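The four instance types above differ only in how a profit is derived from a weight, so they can be sketched as four small item generators. These helper names are mine; only the distributions follow the text.

```python
# Hedged sketch of Pisinger's four MKP instance types for range limit R:
# each function returns one (profit, weight) pair.
import random

def uncorrelated(R, rng):
    return rng.randint(10, R), rng.randint(10, R)

def weakly_correlated(R, rng):
    w = rng.randint(10, R)
    p = max(1, rng.randint(w - R // 10, w + R // 10))  # enforce p >= 1
    return p, w

def strongly_correlated(R, rng):
    w = rng.randint(10, R)
    return w + 10, w

def subset_sum(R, rng):
    w = rng.randint(10, R)
    return w, w
```

Moving from uncorrelated to subset-sum tightens the profit-weight correlation from roughly zero to exactly one, which is the structural knob these test sets vary.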

Pisinger (1999b) considered two different classes of capacities:

• Similar capacities, having the first m − 1 capacities c_i randomly distributed in the interval given by Equations (4.28) and (4.29).

• Dissimilar capacities, having the capacities c_i distributed in the interval given by Equation (4.32).

0 ≤ c_i ≤ 0.5 (Σ_{j=1}^{n} w_j − Σ_{u=1}^{i−1} c_u), i = 1, ..., m − 1   (4.32)

For both classes of capacities, the capacity of the last knapsack c_m is chosen per Equation (4.31), ensuring that the sum of the capacities is half the total sum of the item weights. A new test problem instance is generated if Equations (4.10) through (4.12) are violated.

Table 4.1 summarizes the design in various studies of MKP heuristics involving test problem generation. The studies focus on computational time, all but ignoring solution quality and the non-computer-specific computational work measured by the number of iterations.

4.3.2 Test Problems for Multiple Choice Knapsack Problems (MCKP)

Table 4.1: Factors and Measures used in Empirical Analysis of the MKP Heuristics

Author | Factors: m, n, D, S, Σ | Measures: Tm, Acc/Err, OpS, Iter

Hung and Fisk (1978): x x x x x x

Martello and Toth (1985): x x x x

Pisinger (1999b): x x x x x

m = number of knapsacks
n = number of items
D = distribution of constraint coefficients
S = slackness of constraints
Σ = correlation induced between problem coefficients
Tm = CPU time taken to solve the problem
Acc/Err = accuracy or error between heuristic solution value and optimal solution value
OpS = number of problems solved to optimality
Iter = number of iterations

Sinha and Zoltners (1979) randomly generated test problems for the MCKP. They generated the profit (p_ij) and weight (w_ij) values using a uniform distribution, ensuring that profit values and weight values were not repeated within a class. They ordered the profits and weights in increasing order, ensuring that the test sets had no dominated items. The knapsack capacity was generated using Equation (4.33).

Dyer et al. (1984) randomly generated test problems using Sinha and Zoltners' (1979) MCKP problem generation method. The objective and constraint coefficients were independently generated from a uniform distribution on an interval with a lower bound of 100 and a fixed width. The generator prevented the repetition of coefficients within a multiple-choice set. The right-hand side of Equation (4.33) was calculated as:

c = 0.5 Σ_{i=1}^{m} (min_{j∈N_i} a_j + max_{j∈N_i} a_j)   (4.33)

Pisinger (1995) generated test problem instances with data in the range R = 1000 or 10000, varying the number of classes m and the number of items N_i to be packed in a knapsack. He developed five test instances by calculating the capacity using Equation (4.33).

• Uncorrelated test problem instances: N_i items in each class are generated by choosing weights w_ij and profits p_ij randomly from the interval (1, R).

• Weakly correlated test problem instances: for every class i, the weight of item j (w_ij) is randomly distributed in the interval (1, R), and the profit of item j (p_ij) is randomly distributed in the interval (w_ij − 10, w_ij + 10) such that p_ij ≥ 1.

• Strongly correlated test problem instances: for every class N_i, items are generated with weights and profits (w_ij, p_ij), where w_ij is randomly distributed in the interval (1, R) and p_ij = w_ij + 10. These items are ordered by increasing weights. The weights and profit values of the MCKP test problem instances are then generated by Equations (4.34) and (4.35), ensuring no dominated items.

w′_ij = Σ_{h=1}^{j} w_{ih}, j ∈ N_i   (4.34)

p′_ij = Σ_{h=1}^{j} p_{ih}, j ∈ N_i   (4.35)

• Subset-sum test problem instances: these test problem instances have w_ij randomly distributed in the interval (1, R) and have p_ij = w_ij.

• Sinha and Zoltners: Pisinger (1995) developed test instances using Sinha and Zoltners' (1979) method by selecting the profit and weight values from the randomly distributed interval (1, R) and arranging them in increasing order, ensuring no dominated items in the test problem instances.

Kozanidis et al. (2005) generated their test problems using the Sinha and Zoltners (1979) MCKP problem generation method. The profit and weight values were generated from a uniformly distributed interval (0, R), where R was varied from 100 to 500. Their test problems contained dominated and non-dominated items within a multiple-choice set.

Table 4.2 summarizes the design of various studies of the MCKP heuristics in which test prob- lems were generated. As in the MKP instances, the measures used in the empirical analyses are not particularly beneficial.

Table 4.2: Factors and Measures used in Empirical Analysis of the MCKP Heuristics

Author Factors Measures

Factors: m, n_i, D, S, Σ | Measures: Tm, Acc/Err, OpS, Iter

Sinha and Zoltners (1979): x x x x

Armstrong et al. (1983): x x x x

Pisinger (1995): x x x x x

Kozanidis et al. (2005): x x x

m = number of classes
n_i = number of items in each class
D = distribution of constraint coefficients
S = slackness of constraints
Σ = correlation induced between problem coefficients
Tm = CPU time taken to solve the problem
Acc/Err = accuracy or error between heuristic solution value and optimal solution value
OpS = number of problems solved to optimality
Iter = number of iterations

4.3.3 Test Problems for Multiple-choice Multi-dimensional Knapsack Problems (MMKP)

Moser et al. (1997) developed MMKP test problems by first considering an MDKP of n items with weights and profits randomly selected from the interval between 1 and 20. They then transformed the MDKP into an MMKP by dividing the items into m classes, each class containing at least one item. Condition (4.36) was checked, and the test problem generation was restarted if the condition was not satisfied. They varied the number of classes from 2 to 9, the number of knapsacks from 1 to 5, and the number of items in every class from 1 to 100.

w_ij ≤ c_i ≤ Σ_{j=1}^{n_i} w_ij, i ∈ {1, ..., m}   (4.36)

Akbar et al. (2001) randomly generated MMKP test instances. The pseudo-random quantities were initialized as:

• kth weight of the jth item of the ith class: w_ijk = random(0, Rc − 1)

• Value per unit of resource k: p_k = random(0, Pc − 1)

• Value of every item: v_ij = Σ_k w_ijk p_k + random(0, Vc − 1)

• Resource capacity of the kth knapsack: c_k = Rc × m × 0.5, where m is the number of classes

• Selected item of the ith class: ρ_i = random(0, C_i − 1), where C_i is the number of items in the ith class

• Resource capacity of the kth knapsack set exactly equal to the sum of the resources of the selected items: c_k = Σ_i w_{iρ_i k}

• Profit values of the selected items: p_{iρ_i} = Σ_k w_{iρ_i k} p_k + Vc

Rc, Pc, and Vc are upper bounds on resource requirement, unit price of any resource, and extra profit value of an item after its consumed resource price, respectively. The authors randomly generated 10 data sets for each of the parameters l, m, and n by setting Rc = 10, Pc = 10, and Vc = 20.
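The generator above can be sketched end to end. The code below is an illustrative reading of the recipe: it follows the second capacity rule in the list (capacity set exactly to a planted selection's resource usage, which supersedes the earlier Rc × m × 0.5 initialization), and the function name and data layout are assumptions.

```python
# Illustrative generator following Akbar et al.'s recipe with Rc=10, Pc=10,
# Vc=20. A random item is planted in each class; capacities are set exactly
# to that selection's usage and its profit is boosted by Vc, so a known
# feasible, high-value solution exists. Names and layout are assumptions.
import random

def akbar_instance(l, m, n, Rc=10, Pc=10, Vc=20, rng=random):
    w = [[[rng.randrange(Rc) for _ in range(l)] for _ in range(n)]
         for _ in range(m)]
    price = [rng.randrange(Pc) for _ in range(l)]             # p_k
    p = [[sum(w[i][j][k] * price[k] for k in range(l)) + rng.randrange(Vc)
          for j in range(n)] for i in range(m)]               # v_ij
    rho = [rng.randrange(n) for _ in range(m)]                # planted picks
    c = [sum(w[i][rho[i]][k] for i in range(m)) for k in range(l)]
    for i in range(m):                                        # boost picks
        p[i][rho[i]] = sum(w[i][rho[i]][k] * price[k] for k in range(l)) + Vc
    return p, w, c, rho
```

Because item values are built from resource prices plus bounded noise, profits and weights are positively correlated by construction, which is part of what makes these instances distinctive.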

Khan et al. (2002) generated random uncorrelated and correlated MMKP test problem instances. For uncorrelated instances, the profit values of the jth item of the ith class were generated by v_ij = random(0, (m × Rc/2 × Pc/2)) × (j + 1)/l. For correlated instances, these values were generated by v_ij = Σ_k w_ijk × p_k × random(0, (m × 3 × Rc/10 × Pc/10)), where Rc and Pc are the maximum amount of a resource consumed by an item and the maximum cost of any unit resource, respectively. The interval inside the random() function denotes the range used to generate random numbers from a uniform distribution. These instances were generated by varying the number of classes from 5-30 and 40-400; the number of items in each class and the number of knapsacks considered were 5 and 10, respectively.

Table 4.3 summarizes the design of the various studies of MMKP heuristics in which problem instances were generated. In these studies, the authors did relate heuristic quality to its nearness to an optimal solution.

4.4 Problem Structure Analysis of Test Problems

This section examines in some detail the structure of the various test problems employed in past studies of KP-variant heuristic performance.

4.4.1 Structure of MDKP Test Problems

Beasley (2006) provides a set of 48 test problems for the Multi-Dimensional Knapsack Problem from the literature. Hill (1998) first analyzed the structure of these 48 test problems. The available data include the number of knapsacks, number of items, knapsack capacities, and the optimum solution. Correlation values between the objective function and the constraint coefficients and the interconstraint correlations are calculated and plotted in Figures 4.1 and 4.2.

Figure 4.1 plots the range of objective function to constraint coefficient correlation values for the 48 test problems referenced across the X-axis. There are more problem instances with

Table 4.3: Factors and Measures used in Empirical Analysis of the MMKP Heuristics

Author | Factors: l, m, n, D, S, Σ | Measures: Tm, Acc/Err, OpS, Iter

Moser et al. (1997): x x x x x x

Akbar et al. (2001): x x x x x x x x

Khan et al. (2002): x x x x x x

l = number of knapsacks
m = number of classes
n = number of items in each class
D = distribution of constraint coefficients
S = slackness of constraints
Σ = correlation induced between problem coefficients
Tm = CPU time taken to solve the problem
Acc/Err = accuracy or error between heuristic solution value and optimal solution value
OpS = number of problems solved to optimality
Iter = number of iterations

negative correlation, and the average correlation values are centered around zero. These problems have 5 knapsacks, between 30 and 90 items, and constraint coefficient correlations ranging from -0.2 to 0.1. The coefficient correlation ranges of the problems are narrow and show little variability given the entire range of possible correlation values. This is insufficient to gain insight into heuristic performance as influenced by correlation structure, since the ranges are limited and many arise simply from random sampling.

Figure 4.2 shows the interconstraint coefficient correlation ranges for the same 48 test problem instances. As observed for the correlation between the objective function and the constraint coefficients, the interconstraint coefficient correlation varies around zero. Once again, the range of interconstraint correlation structures is limited and due mostly to sampling variation rather than systematic control in the problem generation procedure.

Figure 4.1: Range of Correlation Values Between Objective Function and Constraint Coefficients for MDKP Standard Test Problems

Figure 4.2: Range of Correlation Values Between Constraint Coefficients for MDKP Standard Test Problems

4.4.2 Structure of MKP Test Problems

Hung and Fisk's (1978) MKP test problem instances are unavailable, so a sample was generated using their problem generation method. The number of items was varied as n = 20, 30, 40, 60, 80, 100 and the number of knapsacks as m = 2, 3, 4; 18 problem files, each containing 10 problems, were generated for the combinations of n and m. Correlation values between the objective and the constraint coefficients were calculated and are plotted in Figure 4.3, which shows the range of objective function to constraint coefficient correlation values for each test problem file generated. The correlation coefficient ranges of these problems are narrow, running from -0.6 to 0.3. The average correlation values are centered around zero and do not provide a very large range. The range observed is due to sampling error given the relatively small number of items considered.

Figure 4.3: Range of Correlation Values Between Objective Function and Constraint Coefficients for Hung and Fisk MKP Test Problems

A sample of Martello and Toth (1985) test problems was generated by varying the number of items as n = 50, 100, 200 and the number of knapsacks as m = 2, 5, 10, producing 9 problem files each containing 10 problems. The correlation coefficient values between the objective and the constraint coefficients are plotted in Figure 4.4. The correlation values are distributed from -0.25 to 0.25, a very narrow range, and the average correlation values are nearly zero for all the problem files.

Pisinger (1999b) MKP test problem instances were generated for uncorrelated, weakly correlated, strongly correlated, and subset-sum problems. The test problems were generated by varying the range limit R = 100, 1000, 10000 and the number of items n = 25, 50, 75, with the number of knapsacks fixed at m = 5, yielding 9 problem files with 100 problems each. Pisinger (1999b) MKP problems were also analyzed by calculating the correlation values from the generated test problems and via theoretical properties using the population correlation quantification technique of Reilly (2006).

Figure 4.4: Range of Correlation Values Between Objective Function and Constraint Coefficients for Martello and Toth MKP Test Problems

The MKP is a generalization of the KP in which a subset of items is selected for every knapsack such that each subset satisfies that knapsack's capacity constraint. The expected population correlation for the 0-1 KP can be quantified as in Equation (4.37).

Corr(wj, pj) = sqrt[ (α² − 1) / (α² + 4δ(δ + 1) − 1) ]    (4.37)

where α is the range of the distribution and δ and γ are nonnegative integers: γ = 0 for weakly correlated coefficients, δ = 0 for strongly correlated coefficients, and δ = γ = 0 for subset-sum problems.

The population correlation was calculated using Equation (4.37) by varying α = 100, 1000, 10000 with δ and γ set according to problem type. The correlation values calculated by the Reilly (2006) method and by computation from the generated problems are tabulated in Table 4.4.
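Equation (4.37) is straightforward to evaluate. The sketch below is an illustration written for this discussion, not Reilly's original code; it reproduces the theoretical correlations for the weakly correlated settings (δ = α/10) and confirms that δ = 0 gives a perfect correlation.

```python
import math

def theoretical_corr(alpha, delta):
    """Population correlation between weights and profits of a 0-1 KP
    per Equation (4.37): sqrt((a^2 - 1) / (a^2 + 4*d*(d + 1) - 1))."""
    return math.sqrt((alpha ** 2 - 1) / (alpha ** 2 + 4 * delta * (delta + 1) - 1))

# Weakly correlated instances use delta = alpha / 10.
for alpha in (100, 1000, 10000):
    r = theoretical_corr(alpha, alpha // 10)
    print(alpha, round(r, 4))   # 0.9787, 0.9804, 0.9806

# Strongly correlated and subset-sum instances (delta = 0) give r = 1.
assert theoretical_corr(100, 0) == 1.0
```

All three weakly correlated settings land near 0.98 regardless of the range limit, which is why these generators cannot produce structurally diverse test problems.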

The correlation coefficient between the objective and the constraint coefficients for weakly correlated problems is approximately 0.98 for all the range limits. The correlation is perfect for strongly correlated and subset-sum problems. These MKP test problem instances fail to provide any diversity in problem structure.

Table 4.4: Theoretical and Practical Objective and Constraint Coefficient Correlation for Pisinger's MKP Test Problems

Problem Type          α      δ     γ   Theoretical Corr(wj,pj)   Observed Corr(wj,pj)
Weakly Correlated     100    10    0   0.9786                    0.9825
                      1000   100   0   0.9804                    0.9749
                      10000  1000  0   0.9806                    0.9756
Strongly Correlated   100    -     0   1                         1
                      1000   -     0   1                         1
                      10000  -     0   1                         1
Subset-Sum            100    0     0   1                         1
                      1000   0     0   1                         1
                      10000  0     0   1                         1

4.4.3 Structure of MCKP Test Problems

Sinha and Zoltners (1979) MCKP test problems were generated varying the range limit R = 1000, 10000; the number of classes m = 5, 10, 20; and the number of items in each class Ni = 5, 10, with 20 problem instances generated for each combination. The correlation coefficient values between the objective and constraint coefficients were calculated and are plotted in Figure 4.5. The correlation values varied between 0.5 and 0.8, a narrow band with respect to the entire range of possible correlation values. The problems generated with this method lack correlation structure diversity.

Since the MCKP is defined as a 0-1 KP with the addition of disjoint multiple-choice constraints (Pisinger 1995), the problem structure of the Pisinger (1995) MCKP test problems was analyzed via theoretical properties using the Reilly (2006) population correlation quantification technique for the 0-1 KP. The correlation values are tabulated in Table 4.5 for different range limits. Once again, this problem generation technique is fairly limited.

Figure 4.5: Range of Correlation Values Between Objective Function and Constraint Coefficients for Sinha and Zoltners MCKP Test Problems

The correlation values between the objective and the constraint coefficients for weakly correlated problems are approximately 0.98 for all the range limits. The correlation is perfect for strongly correlated and subset-sum problems.

4.4.4 Structure of MMKP Test Problems

Moser et al. (1997) MMKP test problems were generated by varying the number of classes as m = 5, 10, 20; the number of items in each class as n = 5, 10; and the number of knapsacks as l = 5, 10, with 10 problems generated for each combination. Figure 4.6 depicts the correlation coefficient ranges between the objective and constraint coefficients. The correlation values are distributed in the range of -0.5 to 0.5 and the average correlation value is centered around zero. While seemingly a large range, these values can be attributed to sampling variation.

Khan et al. (2002) standard test problems are available at Hifi (2006). The data available for each test problem instance include the number of resources, number of classes, number of items in each class, number of constraints, resource capacities, exact solution values, and solutions obtained by the Moser et al. (1997) and Khan et al. (2002) heuristic approaches. Correlation values between the objective and the constraint coefficients and correlation values between the constraint coefficients were calculated and are summarized in Tables 4.6 and 4.7, respectively. Figures 4.7 and 4.8 plot the analysis of the test problems.

Table 4.5: Theoretical and Practical Objective and Constraint Coefficient Correlation for Pisinger's MCKP Test Problems

Problem Type          α      δ     γ   Theoretical Corr(wj,pj)   Observed Corr(wj,pj)
Weakly Correlated     100    10    0   0.9786                    0.9801
                      1000   100   0   0.9804                    0.98
                      10000  1000  0   0.9806                    0.9802
Strongly Correlated   100    -     0   1                         1
                      1000   -     0   1                         1
                      10000  -     0   1                         1
Subset-Sum            100    0     0   1                         1
                      1000   0     0   1                         1
                      10000  0     0   1                         1

There are 13 test problems provided by Khan et al. (2002), in files labeled I01 to I13. The Figure 4.7 ranges for the smaller problems, I01 to I04, are fairly wide but strictly positive. The larger problems, I07 to I13, have smaller correlation ranges. As the problems get larger and involve more variables, the sample correlation structures converge toward zero, giving tighter ranges and typically easier problems as problem size increases.

Figure 4.8 shows the range of interconstraint coefficient values for the 13 test problems. The correlation values are evenly distributed on both the positive and the negative side of the correlation axis, are centered around the zero correlation level, and have no effective range.

Correlated test problem instances were generated using the Khan et al. (2002) test problem generator. Twenty-four test problems were generated by varying the number of classes from 5 to 40, the number of items as 5, 7, and 9, with the number of knapsacks set to 5. The correlation structure between the objective function and the constraints and the interconstraint correlation of these problems are plotted in Figures 4.9 and 4.10, respectively. Near-zero objective function to constraint correlation is observed; in some cases the uncorrelated approach generates a larger range of correlation values. Interconstraint correlation hovers around zero, meaning the problems effectively involve independent sampling.

Figure 4.6: Range of Correlation Values Between Objective Function and Constraint Coefficients for Moser's MMKP Test Problems

4.5 Summary

The knapsack problem and its variants form a difficult class of combinatorial optimization problems. Various heuristic and exact approaches have been developed to solve these problems. Greedy heuristic approaches often provide a good bound for exact approaches, can find good initial solutions, and are used to improve solution quality. Many researchers begin solving a KP by finding an initial solution, propose a main method that uses various greedy approaches as a base heuristic, and then apply an improvement phase to obtain a final solution. Although such heuristics have shown competence in published computational tests, heuristic performance is test problem dependent and problem specific. The generality of empirical test results is only as valid as the test problem suite is representative of actual problems.

Table 4.6: Correlation Between Objective and Constraint Coefficients Analysis of Khan's MMKP Test Problems

Problem File   min       max      (m,n,l)
I01            -0.1428   0.9310   (5,5,5)
I02            -0.1314   0.8533   (10,5,5)
I03             0.0149   0.7392   (15,10,10)
I04            -0.0151   0.5843   (20,10,10)
I05            -0.0084   0.1268   (25,10,10)
I06             0.0211   0.0832   (30,10,10)
I07            -0.0062   0.5267   (100,10,10)
I08            -0.0031   0.5286   (150,10,10)
I09            -0.0158   0.5310   (200,10,10)
I10            -0.0188   0.5175   (250,10,10)
I11            -0.0060   0.5098   (300,10,10)
I12            -0.0004   0.5200   (350,10,10)
I13            -0.0022   0.5187   (400,10,10)
(m,n,l) represents (classes, items, resources) in problems

This chapter detailed legacy heuristics for KP variants and conducted analyses of test problem structure on available test sets, or on the documented problem generation method. Such analyses have not been previously accomplished. The results of the test problem analyses should raise concerns. In general, test problems are not particularly varied in terms of structure and have characteristics that are achieved randomly. This raises concerns regarding the generality of past solution results to actual problem instances.

To overcome these limitations, this research next develops a test problem set varying the above-mentioned problem attributes, with particular emphasis on controlling correlation structure and slackness settings, or at least obtaining structures over a reasonable range of values. The remaining research focuses on the MMKP. The knowledge gained by conducting an empirical analysis of various heuristic methods on these test problems is used to develop new greedy heuristics and tabu searches. Legacy methods and new methods are tested and compared using existing test problem sets as well as new test problem sets.

Figure 4.7: Range of Correlation Values Between Objective Function and Constraint Coefficients for Khan's MMKP Test Problems

Table 4.7: Interconstraint Correlation Coefficients Analysis of Khan's MMKP Test Problems

Problem File   min       max      (m,n,l)
I01            -0.2725   0.4832   (5,5,5)
I02            -0.2339   0.2510   (10,5,5)
I03            -0.1842   0.1810   (15,10,10)
I04            -0.1947   0.2266   (20,10,10)
I05            -0.1010   0.1140   (25,10,10)
I06            -0.1071   0.1280   (30,10,10)
I07            -0.1127   0.0650   (100,10,10)
I08            -0.0583   0.0362   (150,10,10)
I09            -0.0618   0.0296   (200,10,10)
I10            -0.0443   0.0337   (250,10,10)
I11            -0.0344   0.0264   (300,10,10)
I12            -0.0341   0.0303   (350,10,10)
I13            -0.0305   0.0261   (400,10,10)
(m,n,l) represents (classes, items, resources) in problems

Figure 4.8: Range of Correlation Values Between Constraint Coefficients for Khan’s MMKP Test Problems

Figure 4.9: Range of Correlation Values Between Objective Function and Constraint Coefficients for the correlated MMKP Test Problems generated from Khan’s MMKP Test Problem Generator

Figure 4.10: Range of Correlation Values Between Constraint Coefficients for the correlated MMKP Test Problems generated from Khan’s MMKP Test Problem Generator

5. Empirical Analyses of Legacy MMKP Heuristics and Test Problem Generation

5.1 Introduction

Heuristics are approximate solution techniques and have long been used to solve difficult combinatorial problems. Heuristic optimization algorithms seek good feasible solutions to optimization problems when the complexity of the problem, or the limited time available for its solution, does not allow obtaining an exact solution. Unlike exact algorithms, for which time efficiency and resource usage are the main measures of success, two important issues arise when evaluating heuristics: how fast solutions are obtained and how close they come to optimal (Rardin and Uzsoy 2001).

According to Hill (1998), heuristic solution procedures allow a modeler to work with more complex but often more interesting problems. Heuristic algorithms are developed and tested for three reasons: to find a feasible solution for a problem that was not previously solved, to improve the performance over existing algorithms, and to compare performance to understand how heuristics perform on different classes of problems (too often research ignores the study of this aspect of heuristic design) (Hill 1998). Characteristics of the problem being solved can influence the performance of a heuristic algorithm. These performance influences are recognized but not fully understood. This research seeks performance knowledge useful for solving the MMKP, explicitly using empirical analyses as a means to gain heuristic performance knowledge.

As mentioned in Chapters 3 and 4, there are different formulations of the variants of the knapsack problems. For the standard form of the KP variants, the problem characteristics examined are the number of classes, the number of items in each class, and the total number of items in each instance.

For the MDKP, there are three important indicators of problem difficulty: the number of variables in the problem, the number of constraints in the problem, and the tightness of the constraints (Cho et al. 2003b). Hill and Reilly (2000) found that coefficient correlation structure and constraint slackness settings affected solution procedure performance on a two-dimensional knapsack problem. They found that population correlation structure, particularly interconstraint correlation, is a significant factor influencing the performance of solution methods. Additionally, the interaction between constraint slackness and population correlation structure also influences heuristic performance. No comparable problem characteristic insight exists for the other KP variants.

This chapter discusses problem generation and develops a test problem generation method for the MMKP. It also provides an empirical analysis of the heuristic solution procedures for the MMKP. The legacy heuristic approaches proposed by Moser et al. (MOSER, 1997), Khan et al. (HEU, 2002), and Hifi et al. (CPCCP and DerAlgo, 2004) for the MMKP are compared on Khan's 13 available test problem instances, on a small set generated per an approach proposed by Khan, and on newly generated MMKP test sets.

5.2 Problem Generation and Problem Characteristics for MMKP

The previous chapter examined the lack of diversity in the problem characteristics of existing MMKP test problem instances. Empirical studies compare heuristic performance using test problems. Researchers can use available test problems (such as those available via the internet (Beasley 2006)) or randomly generated test problem instances to verify their algorithm's performance. Since the existing standard MMKP test problem instances are not particularly diverse in the range of problem characteristics, the experimental information gained using these sets to examine heuristic performance is necessarily restricted. This section discusses the current MMKP test problem generation approach, a new test problem generation methodology for the MMKP using the Cho (2005) competitive MDKP test sets, and an extended set of new MMKP test sets.

The new MMKP test sets generated from the problem generation schemes discussed below can be used to study the performance of the legacy heuristics and to draw performance conclusions based on a wider range of test problems.

5.2.1 Standard MMKP Test Problem Generation

The thirteen test problems generated by Khan et al. (2002), available at Hifi (2006), have become the de facto standard MMKP test problems. The problem generation scheme for these problems is summarized below.

Khan et al. (2002) generated random and correlated MMKP test problem instances. For uncorrelated data instances, the profit value of the jth item of the ith class, vij, was generated by the function random(m × Rc/2 × Pc/2) × (j + 1)/l. For correlated data instances, these values were generated by vij = Σk wijk × Pk × random(m × 3 × Rc/10 × Pc/10), where Rc and Pc are the maximum amount of resource consumption by an item and the maximum cost of any unit resource, respectively. These instances were generated varying the number of classes from 5-30 and 40-400. The number of items in each class and the number of knapsacks considered were 5 and 10.

Khan’s problem generation approach was mimicked to generate 24 test problems varying the number of classes from 5 to 40, the number of items as 5, 7, and 9, and the number of knapsacks set to 5. The resulting problem correlation structure for these MMKP test problems was studied in Subsection 4.4.4. That analysis indicated that the range of the correlation values between the objective and constraint coefficients is very narrow, and the interconstraint correlation values averaged zero. This correlation range is far too narrow to generalize any performance conclusions to the potentially wide range of actual problems. All the test problems also have the same constraint capacity values; it is doubtful actual problems have identical constraints. Thus, these test problems are inadequate both in quantity and in diversity of problem structure. What is needed is a more diverse and adequate MMKP test set. The following subsections discuss the new MMKP test problem set generation methods, developed by varying the problem attributes with particular emphasis on problem correlation structure and constraint capacity settings.

The first method generates problems useful for empirical analysis aimed at understanding heuristic performance. These are called “analytical” problems; their parameter levels are systematically varied. The second method generates problems useful for computational testing of heuristics. These are called “competitive” problems; their parameter levels are selected to randomly cover a desired range. Each method starts with a base set of MMKP problems whose primary characteristic is a wide range of correlation structures but whose right-hand side values are not set; these values are set systematically according to defined experimental settings to produce the analytical set.

5.2.2 Analytical MMKP Test Problem Generation

The base MMKP test set was generated using the Cho (2005) competitive test sets. These test sets were generated by Cho (2005) with a wide range of correlation structures (−0.9, 0.9) and varied constraint right-hand side values. This test set overcomes the test problem limitations associated with the MDKP. The problem generation method for the competitive MDKP test set is provided in Appendix B.

The base MMKP test sets were generated by varying the number of classes as 5, 10, 25; the number of knapsack constraints as 5, 10, 25; and fixing the number of items in every class at 10. Nine test files containing 30 problems each were generated, giving a total of 270 test problem instances. Each file corresponds to a different combination of the number of classes and the number of knapsack constraints. The files are named 5GP10IT5KP, 10GP10IT5KP, 25GP10IT5KP, 5GP10IT10KP, 10GP10IT10KP, 25GP10IT10KP, 5GP10IT25KP, 10GP10IT25KP, and 25GP10IT25KP, where GP represents the number of classes, IT the number of items in every class, and KP the number of knapsack constraints.

The 3870 analytical test problems for the MMKP were generated by setting the right-hand side values of the knapsack constraints in the base problem sets. Two constraint tightness levels were considered, loose and tight, coded as 0 and 1, where 0 indicates the loose setting and 1 indicates the tight setting. Of the 2^5 = 32 combinations for 5 knapsacks, 6 combination settings were considered. Combination 1 represents all knapsack constraints loose while Combination 6 represents all knapsack constraints tight. Similarly, 11 and 26 combinations were considered for 10 and 25 knapsacks, respectively. This reduction in design settings follows the practice established in Cho (2005). Tables 5.1, 5.2, and 5.3 show the different combinations of the constraint tightness settings.

Table 5.1: Coded Combinations of Slackness Settings for 5 Knapsacks

Combination   S1  S2  S3  S4  S5
1             0   0   0   0   0
2             0   0   0   0   1
3             0   0   0   1   1
4             0   0   1   1   1
5             0   1   1   1   1
6             1   1   1   1   1

For each of the three problem sets with 5 knapsacks, 180 problems were generated. For the three problem sets with 10 knapsacks, 330 problems were generated. For the three problem sets with 25 knapsacks, 780 problems were generated. Thus, a total of 3870 MMKP problem instances were generated.
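The arithmetic behind these counts is easy to verify: each knapsack count uses a staircase design of k + 1 combinations (6, 11, and 26 for 5, 10, and 25 knapsacks), applied to three base files of 30 problems each. A quick sketch, assuming that staircase structure:

```python
def slackness_combinations(n_knapsacks):
    """Staircase design from Tables 5.1-5.3: combination i sets the
    last i constraints tight (1) and the rest loose (0)."""
    return [[0] * (n_knapsacks - i) + [1] * i for i in range(n_knapsacks + 1)]

problems_per_file = 30
files_per_knapsack_count = 3   # classes in {5, 10, 25}

total = 0
for k in (5, 10, 25):
    combos = slackness_combinations(k)
    assert len(combos) == k + 1        # 6, 11, 26 combinations
    total += files_per_knapsack_count * problems_per_file * len(combos)

print(total)  # 3870
```

For example, `slackness_combinations(5)[1]` is `[0, 0, 0, 0, 1]`, matching Combination 2 of Table 5.1.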

The tightness levels for a constraint, Sk, were set to 0.6 for tight and 0.9 for loose. The right-hand side value for knapsack k was set using Equation (5.1):

ck = Sk × [max{constraint coefficient of knapsack k} + (m − 1) × range]    (5.1)

where m is the number of classes and range is the range of the values of the constraint coefficients.
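Equation (5.1) translates directly into code. In this sketch the coefficient data are invented for illustration, and "range" is interpreted as max minus min of the knapsack's constraint coefficients:

```python
def rhs_for_knapsack(coeffs, tight, m):
    """Right-hand side per Equation (5.1). `coeffs` are the constraint
    coefficients of this knapsack, `tight` selects S_k (0.6 tight,
    0.9 loose), and m is the number of classes."""
    s_k = 0.6 if tight else 0.9
    rng = max(coeffs) - min(coeffs)   # assumed reading of "range"
    return s_k * (max(coeffs) + (m - 1) * rng)

# Hypothetical knapsack with coefficients between 10 and 50, m = 5 classes.
c_loose = rhs_for_knapsack([10, 25, 50], tight=False, m=5)   # 0.9 * 210 = 189.0
c_tight = rhs_for_knapsack([10, 25, 50], tight=True, m=5)    # 0.6 * 210 = 126.0
assert c_tight < c_loose   # the tight setting yields a smaller capacity
```

Applying one coded combination from Tables 5.1-5.3 simply means calling this function once per knapsack with the corresponding 0/1 tightness flag.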

Table 5.2: Coded Combinations of Slackness Settings for 10 Knapsacks

Combination   S1  S2  S3  S4  S5  S6  S7  S8  S9  S10
1             0   0   0   0   0   0   0   0   0   0
2             0   0   0   0   0   0   0   0   0   1
3             0   0   0   0   0   0   0   0   1   1
4             0   0   0   0   0   0   0   1   1   1
5             0   0   0   0   0   0   1   1   1   1
6             0   0   0   0   0   1   1   1   1   1
7             0   0   0   0   1   1   1   1   1   1
8             0   0   0   1   1   1   1   1   1   1
9             0   0   1   1   1   1   1   1   1   1
10            0   1   1   1   1   1   1   1   1   1
11            1   1   1   1   1   1   1   1   1   1

The new MMKP test set generated above yields a diverse correlation range, as shown in Figures 5.2 and 5.3. The problems have correlations varying over (−0.9, 0.9). The analytical MMKP problem set is used to study the effect of problem structure characteristics on the performance of various legacy heuristics; the set provides the ability to examine the effects of correlation structure, varied right-hand side levels, and the interaction of the two characteristics.

5.2.3 Competitive MMKP Test Problem Generation

The base MDKP problem sets were used to generate competitive problem test sets. Rather than systematically varying constraint slackness settings, the slackness ratios are generated uniformly on (0.6, 0.9). With a tighter distribution, (0.3, 0.7), the constraints were so tight that the problems had no feasible solution.
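In code, the competitive setting replaces the coded 0/1 tightness levels with a uniform draw per knapsack; a minimal sketch (function name and seed are illustrative):

```python
import random

def competitive_slackness(n_knapsacks, lo=0.6, hi=0.9, seed=7):
    """Slackness ratio for each knapsack drawn uniformly from (lo, hi),
    as in the competitive MMKP sets. The tighter interval (0.3, 0.7)
    was found to yield infeasible problems."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n_knapsacks)]

ratios = competitive_slackness(5)
assert all(0.6 <= r <= 0.9 for r in ratios)
```

Each ratio then plays the role of S_k in Equation (5.1) when setting that knapsack's right-hand side.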

Table 5.3: Coded Combinations of Slackness Settings for 25 Knapsacks

Combination   S1-S25
1             0000000000000000000000000
2             0000000000000000000000001
3             0000000000000000000000011
4             0000000000000000000000111
5             0000000000000000000001111
6             0000000000000000000011111
7             0000000000000000000111111
8             0000000000000000001111111
9             0000000000000000011111111
10            0000000000000000111111111
11            0000000000000001111111111
12            0000000000000011111111111
13            0000000000000111111111111
14            0000000000001111111111111
15            0000000000011111111111111
16            0000000000111111111111111
17            0000000001111111111111111
18            0000000011111111111111111
19            0000000111111111111111111
20            0000001111111111111111111
21            0000011111111111111111111
22            0000111111111111111111111
23            0001111111111111111111111
24            0011111111111111111111111
25            0111111111111111111111111
26            1111111111111111111111111

Table 5.4: Correlation Between Objective and Constraint Coefficients Analysis of Available MMKP Test Problems

Problem File   min       max      (m,n,l)
I01            -0.1428   0.9310   (5,5,5)
I02            -0.1314   0.8533   (10,5,5)
I03             0.0149   0.7392   (15,10,10)
I04            -0.0151   0.5843   (20,10,10)
I05            -0.0084   0.1268   (25,10,10)
I06             0.0211   0.0832   (30,10,10)
I07            -0.0062   0.5267   (100,10,10)
I08            -0.0031   0.5286   (150,10,10)
I09            -0.0158   0.5310   (200,10,10)
I10            -0.0188   0.5175   (250,10,10)
I11            -0.0060   0.5098   (300,10,10)
I12            -0.0004   0.5200   (350,10,10)
I13            -0.0022   0.5187   (400,10,10)
(m,n,l) represents (classes, items, resources) in problems

5.2.4 Analytical MMKP Test Sets Versus Available MMKP Test Set

This section compares the problem structure of the available MMKP test set with the new MMKP test set. Khan's (Khan et al. 2002) test problems, available at Hifi (2006), comprise 13 single test instances labeled I01 to I13, while the new MMKP test set has 9 files of 30 problems each.

Correlation structure in Khan’s 13 available test problems does not show much variation while the base test sets were generated by varying the correlation between (−0.9, 0.9). Tables 5.4 and 5.5, respectively, show the variation in the correlation structure of the available MMKP test sets and the base MMKP test sets.

Table 5.5: Correlation Between Objective and Constraint Coefficients Analysis of New MMKP Test Sets

Problem File   min      max     (m,n,l)
MMKP01         -0.895   0.885   (5,10,5)
MMKP02         -0.883   0.892   (10,10,5)
MMKP03         -0.875   0.871   (25,10,5)
MMKP04         -0.884   0.899   (5,10,10)
MMKP05         -0.881   0.882   (10,10,10)
MMKP06         -0.887   0.883   (25,10,10)
MMKP07         -0.898   0.905   (5,10,25)
MMKP08         -0.902   0.904   (10,10,25)
MMKP09         -0.889   0.890   (25,10,25)
(m,n,l) represents (classes, items, resources) in problems

Figure 5.1 graphically shows the correlation range in Khan's 13 test problem instances. Figure 5.2 shows the variation of the correlation values for the base MMKP problem set of 30 problem instances generated with 5 classes, 10 items, and 5 knapsacks. The correlation structures for the other base MMKP problems are in Appendix A. Figure 5.3 plots the correlation structure for the entire set of MMKP test problem instances.

There is no variation in the right-hand side values of the problem constraints in Khan’s 13 available test problems; the right-hand side values are the same for all the knapsack constraints in every problem. The new MMKP test problem instances are generated using Equation (5.1). Each of the knapsack constraints has varied capacity. Tables 5.6 and 5.7, respectively, show the right-hand side variation of the available MMKP test sets and the analytical MMKP test sets.

Figures 5.4 and 5.5 graphically show the right-hand side range of Khan’s 13 test problem instances and the newly generated MMKP test problem instances, respectively.

An adequate test set should provide a wide range of test problem instances. Figures 5.1 and

Figure 5.1: Range of Correlation Values Between Objective Function and Constraint Coefficients for Available MMKP Test Problems

Figure 5.2: Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 5 classes, 10 items, 5 knapsacks

Figure 5.3: Range of Correlation Values Between Objective Function and Constraint Coefficients for New MMKP Test Sets

Table 5.6: Right-Hand Side Analysis of the Knapsack Constraints in Available MMKP Test Problems

Problem File   min    max    (m,n,l)
I01            25     25     (5,5,5)
I02            50     50     (10,5,5)
I03            75     75     (15,10,10)
I04            100    100    (20,10,10)
I05            125    125    (25,10,10)
I06            150    150    (30,10,10)
I07            500    500    (100,10,10)
I08            750    750    (150,10,10)
I09            1000   1000   (200,10,10)
I10            1250   1250   (250,10,10)
I11            1500   1500   (300,10,10)
I12            1750   1750   (350,10,10)
I13            2000   2000   (400,10,10)
(m,n,l) represents (classes, items, resources) in problems

Table 5.7: Right-Hand Side Analysis of the Knapsack Constraints of Analytical MMKP Test Sets

Problem File   min   max    (m,n,l)
MMKP01         113   368    (5,10,5)
MMKP02         265   755    (10,10,5)
MMKP03         649   1993   (25,10,5)
MMKP04         124   367    (5,10,10)
MMKP05         238   772    (10,10,10)
MMKP06         618   1970   (25,10,10)
MMKP07         112   387    (5,10,25)
MMKP08         238   783    (10,10,25)
MMKP09         624   1921   (25,10,25)
(m,n,l) represents (classes, items, resources) in problems

Figure 5.4: Range of the Right-Hand Side Values of the Knapsack Constraints of Available MMKP Test Problems

Figure 5.5: Range of the Right-Hand Side Values of the Knapsack Constraints of Analytical MMKP Test Sets

5.3 indicate that the available test problems have too narrow a range of correlation structure while the new test set has a wide range of correlation values. Figures 5.4 and 5.5 show that the available test problems have similar values of the right-hand sides for the knapsack constraints while the new test set varies the range of the right-hand side values. When test set instances do not provide enough diversity, empirically-based insights are not likely to generalize to real-world problem instances.

5.3 Empirical Analyses of MMKP Heuristics on Available Test Problems

The legacy heuristic procedures proposed by Moser et al. (MOSER, 1997), Khan et al. (HEU, 2002), and Hifi et al. (CPCCP and DerAlgo, 2004) for the MMKP were programmed in Java and tested on the 13 test problems developed by Khan et al. (2002). The reported results, by the corresponding authors, are summarized in Table 5.8.

Despite diligent efforts, the implemented versions of the legacy heuristics sometimes failed to match reported results. The implemented versions faithfully capture the algorithms as published (via pseudocode); attempts to obtain the authors' versions of the codes were unsuccessful. Subsequent comparative tests involve the implemented versions since new test problems are employed. The results obtained by programming and testing the respective heuristics on the 13 available test problems, and the corresponding elapsed times in milliseconds, are summarized in Table 5.9.

The results indicate that exact optimal solutions could not be found for larger test problems I07 through I13. CPCCP and DerAlgo found solutions for all the 13 available MMKP test problem instances. The solution quality of CPCCP was close to that of DerAlgo while the computational time for CPCCP was the least amongst all the heuristics. MOSER and HEU often failed to find a solution.

Tables 5.10 and 5.11 summarize competitive performance and the percentage relative error of the legacy heuristics on Khan's 13 test problems. DerAlgo performs best most often, and CPCCP and DerAlgo outperform the other heuristics, while MOSER and HEU often fail to find a feasible solution. As problem size increases, the percentage relative error of CPCCP and DerAlgo decreases. Considering just the problems it solved, HEU has the lowest average percentage relative error.
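The percentage relative error reported in these tables appears to be computed against the exact solution value (or the LP bound for the larger instances); a sketch:

```python
def percent_relative_error(best_known, heuristic_value):
    """Gap measure used in the relative-error tables:
    100 * (best_known - heuristic_value) / best_known."""
    return 100.0 * (best_known - heuristic_value) / best_known

# Example from Table 5.9 / Table 5.11: on I01, CPCCP found 159
# against the exact value 173.
gap = percent_relative_error(173, 159)
print(round(gap, 2))  # 8.09
```

The computed 8.09% matches the CPCCP entry for I01 in Table 5.11, confirming the gap definition.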

Twenty-four test problem instances were generated using Khan's test problem generator, varying the number of classes from 5 to 40, the number of items as 5, 7, and 9, and the number of knapsacks set to 5. The legacy MMKP heuristic methods were tested on these generated test problems. The heuristic performance results on these test problem instances are tabulated in Table 5.12.

Tables 5.13 and 5.14 summarize competitive performance and the percentage relative error of the legacy heuristics on these test problems. These results indicate that HEU compares favorably with the other heuristics. CPCCP and DerAlgo perform better than MOSER, while CPCCP is better than DerAlgo. CPCCP is the fastest heuristic while HEU and MOSER are slower. The average percentage relative error for MOSER is higher than that of the other heuristics, and the MOSER and HEU

Table 5.8: Reported Results of the Legacy MMKP Heuristics on the 13 Available Test Problems

Problem File   Exact     MOSER    HEU      CPCCP     DerAlgo
I01            173       151      167      161       173
I02            364       291      354      341       356
I03            1602      1464     1533     1511      1553
I04            3597      3375     3437     3397      3502
I05            3905.7    3905.7   3899.1   3591.59   3943.22 (1)
I06            4799.3    4115.2   4799     4567.9    4799.3
I07            23983*    23556    23912    23753     23983
I08            36007*    35373    35979    35485     36007
I09            48048*    47205    47901    47685     48048
I10            60176*    58648    59811    59492     60176
I11            72003*    70532    71760    71378     72003
I12            84160*    82377    84141    83293     84160
I13            96103*    94166    96003    95141     96103
* LP Bound
(1) Hifi et al. (2004) report 3943.22; 3905.7 is the solution found

Table 5.9: Obtained Results of the Legacy MMKP Heuristics on the 13 Available Test Problems

Problem File   Exact     MOSER    Time    HEU      Time   CPCCP    Time    DerAlgo   Time
I01            173       -        -       154      62     159      47      159       172
I02            364       294      125     354      109    312      78      312       266
I03            1602      1127     297     1518     453    1407     172     1432      500
I04            3597      2906     625     3297     562    3322     234     3322      688
I05            3905.7    1068.3   1062    3894.5   1265   3889.9   234     3905.7    781
I06            4799.3    1999.5   1657    4788.2   1219   4723.1   360     4723.1    797
I07            23983*    20833    20594   -        -      23237    1516    23480     2219
I08            36007*    31643    59907   34338    8453   35403    3266    35525     3875
I09            48048*    -        -       -        -      47154    5562    47471     6094
I10            60176*    -        -       -        -      58990    7547    59039     8391
I11            72003*    -        -       -        -      70685    9359    71018     9922
I12            84160*    -        -       -        -      82754    12141   83154     14234
I13            96103*    -        -       -        -      94465    22031   94628     19844
- Solution not found   * LP Bound   Time in ms

Table 5.10: Performance Summary of the Legacy Heuristics on Khan’s 13 Available Test Problems

Times Times Number of times Heuristic better than other approaches Heuristic Best Optimal MOSER HEU CPCCP DerAlgo

106 MOSER 0 0 - 1 0 0 HEU 3 0 7 - 4 3 CPCCP 0 0 13 9 - 0 DerAlgo 8 1 13 10 9 - 13 total problems Table 5.11: Percentage Relative Error of Legacy Heuristics on Khan’s 13 Available Test Problems

Problem File MOSER HEU CPCCP DerAlgo I01 - 10.98 8.09 8.09 I02 19.23 2.75 14.29 14.29 I03 29.65 5.24 12.17 10.61 I04 19.21 8.34 7.65 7.65 I05 72.65 0.29 0.40 0.00 I06 58.34 0.23 1.59 1.59 I07 15.27 - 5.49 4.50 I08 14.19 6.89 4.00 3.67 I09 - - 4.09 3.45 I10 - - 3.98 3.90 I11 - - 4.19 3.73 I12 - - 3.85 3.39 I13 - - 4.03 3.86 Total Average 32.65 4.96 5.68 5.29 - Relative Error cannot be calculated since no solution found

107 Table 5.12: Results of the Legacy MMKP Heuristics on the Correlated Test Problems Generated using Khan’s Test Problem Generator

Prob No (m,n,l) Exact MOSER Time HEU Time CPCCP Time DerAlgo Time 1 (5,5,5) 1832 1832 203 - - 1832 16 1832 63 2 (5,7,5) 2147 - - 2027 219 2147 16 2147 500 3 (5,9,5) 2020 1877 203 - - 1863 16 1901 47 4 (10,5,5) 3993 3913 103 - - 3932 47 3932 578 5 (10,7,5) 4326 4282 235 4241 579 4282 62 4282 78 6 (10,9,5) 4259 4256 313 4044 704 4259 515 4259 219 7 (15,5,5) 6335 6275 750 6275 390 6275 46 6275 188 8 (15,7,5) 6606 6295 671 6420 797 6250 468 6377 672 9 (15,9,5) 6662 6536 703 6437 759 6495 125 6495 188 10 (20,5,5) 8360 8034 360 8267 766 8253 110 8253 656 11 (20,7,5) 8799 8625 531 8774 781 8673 109 8742 531

108 12 (20,9,5) 8748 8467 391 8642 969 8475 172 8697 203 13 (25,5,5) 9746 9745 688 9655 875 9723 250 9723 563 14 (25,7,5) 10781 10647 406 10731 984 10276 219 10296 609 15 (25,9,5) 11011 9939 687 10868 891 10787 266 10787 281 16 (30,5,5) 12509 12509 62 12403 812 12509 578 12509 718 17 (30,7,5) 12963 12756 516 12883 1047 12700 609 12700 281 18 (30,9,5) 13310 13074 1547 13278 766 12971 640 13014 625 19 (35,5,5) 14508 14431 890 - - 14298 234 14298 750 20 (35,7,5) 15438 - - 15427 1156 15051 672 15051 828 21 (35,9,5) 15539 15439 1484 15513 1437 15454 781 15454 812 22 (40,5,5) 16636 16476 750 16556 641 16439 234 16439 781 23 (40,7,5) 17353 17228 1406 17276 875 16793 797 16793 812 24 (40,9,5) 18010 - - 17907 1609 17737 719 17824 781 - Solution not found (m,n,l) represents (classes, items, resources) in problems Time in ms heuristics often fail to find feasible solutions. The results are not comparable to results obtained on

Khan’s available test problems.

5.4 Empirical Analyses of MMKP Heuristics on New MMKP Test Problem Set

The legacy heuristic procedures proposed by Moser et al. (MOSER, 1997), Khan et al. (HEU, 2002), and Hifi et al. (CPCCP, 2004) for the MMKP were then tested on the new MMKP test problem set.

The purpose of this research is to gain insight into the effects of problem characteristics on heuristic performance. The heuristics are compared based on the number of problems solved to optimal by each heuristic and the number of times each heuristic is the best performer. Ties between heuristics are excluded. The DerAlgo heuristic is no longer considered, based on concerns regarding the correctness of the implementation created from the published literature.

5.4.1 Analyses based on Constraint Right-Hand Side Setting

MMKP with 5 knapsacks

The legacy heuristic performance on the MMKP test sets with 5 classes, 10 items, and 5 knapsacks is summarized in Table 5.15. The results show that the number of problems solved to optimal by all the heuristics is high for the problems with loose constraints because these problems are easy to solve. For loose constraints, the number of problems solved to optimal is the same for MOSER and CPCCP. When multiple constraints are tight, the number of problems solved to optimal by MOSER is higher than for HEU and CPCCP. When all the constraints are tight, CPCCP has the higher number of problems solved to optimal. The number of problems solved to optimal by HEU is low for all the constraint combinations, and it decreases for all heuristics as the problem constraints get tighter. In terms of the number of times each heuristic performs best, HEU is outperformed, with MOSER and CPCCP preferred as more of the constraints become tight.

Table 5.13: Performance Summary of the Legacy Heuristics on Extra Generated Test Problems

           Times  Times    Number of times Heuristic better than other approaches
Heuristic  Best   Optimal  MOSER  HEU  CPCCP  DerAlgo
MOSER      3      2        -      9    10     8
HEU        12     0        14     -    13     12
CPCCP      0      4        12     10   -      10
DerAlgo    2      4        12     11   7      -
24 total problems

Table 5.14: Percentage Relative Error of Legacy Heuristics on Extra Generated Test Problems

Prob No        MOSER  HEU   CPCCP  DerAlgo
1              0.00   -     0.00   0.00
2              -      5.59  0.00   0.00
3              7.08   -     7.77   5.89
4              2.00   -     1.53   1.53
5              1.02   1.96  1.02   1.02
6              0.07   5.05  0.00   0.00
7              0.95   0.95  0.95   0.95
8              4.71   2.82  5.39   3.47
9              1.89   3.38  2.51   2.51
10             3.90   1.11  1.28   1.28
11             1.98   0.28  1.43   0.65
12             3.21   1.21  3.12   0.58
13             0.01   0.93  0.24   0.24
14             1.24   0.46  4.68   4.50
15             9.74   1.30  2.03   2.03
16             0.00   0.85  0.00   0.00
17             1.60   0.62  2.03   2.03
18             1.77   0.24  2.55   2.22
19             0.53   -     1.45   1.45
20             -      0.07  2.51   2.51
21             0.64   0.17  0.55   0.55
22             0.96   0.48  1.18   1.18
23             0.72   0.44  3.23   3.23
24             -      0.57  1.52   1.03
Total Average  2.10   1.42  1.96   1.62
- Relative error cannot be calculated since no solution was found

Table 5.16 contains the results for the heuristic performance on the problems with 10 classes, 10 items, and 5 knapsacks. All three heuristics do well on problems with loose constraints. The results in Table 5.16 mirror those in Table 5.15, although the MOSER approach appears to have the advantage in providing the best results among the three heuristics.

The results for the number of problems solved to optimal and number of times best for the problem set with 25 classes, 10 items, and 5 knapsacks are reported in Table 5.17. As before, all heuristics do well on problems with loose constraints while MOSER and CPCCP are equally adept on the more difficult problems.

MMKP with 10 knapsacks

Table 5.18 reports results on problems with 5 classes, 10 items, and 10 knapsacks. Tables 5.19 and 5.20 report results on problems with 10 classes and 25 classes, respectively. All three heuristics do well on problems with loose constraints, although HEU's performance is degraded compared to the 5-knapsack problems. In terms of returning a best solution, MOSER and CPCCP appear quite comparable until the 25-class problem set (Table 5.20), where the MOSER heuristic seems to have a definite advantage. For these larger problems, HEU does not compete well.

MMKP with 25 knapsacks

Table 5.21 reports results on problems with 5 classes, 10 items, and 25 knapsacks. Tables 5.22 and 5.23 report results on problems with 10 classes and 25 classes, respectively. None of the heuristics does particularly well in terms of finding the optimal solution, although each heuristic finds more optimal solutions as the number of classes increases. An interesting aspect of these results is the improved performance of CPCCP relative to MOSER on the problems with fewer classes (5 and 10 classes). For the larger 25-class problems (Table 5.23), results are mixed in terms of whether MOSER or CPCCP is preferred.

Table 5.15: Number of times best heuristic for each of the Right-Hand Side Combinations for 5 classes, 10 items, 5 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            22     16   23                    1      3    2
2            18     4    13                    8      5    1
3            14     4    11                    9      3    6
4            11     1    8                     10     3    5
5            5      1    5                     12     2    12
6            3      0    6                     10     0    18

Table 5.16: Number of times best heuristic for each of the Right-Hand Side Combinations for 10 classes, 10 items, 5 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            28     23   27                    0      1    1
2            18     7    17                    5      4    2
3            13     4    12                    5      4    2
4            8      1    10                    11     2    5
5            6      0    5                     17     2    6
6            4      0    2                     14     3    12

Table 5.17: Number of times best heuristic for each of the Right-Hand Side Combinations for 25 classes, 10 items, 5 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            29     23   30                    0      0    1
2            16     6    17                    2      10   1
3            12     2    12                    7      6    5
4            9      0    10                    10     3    8
5            5      0    6                     11     4    10
6            3      0    4                     12     0    15

Table 5.18: Number of times best heuristic for each of the Right-Hand Side Combinations for 5 classes, 10 items, 10 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            20     10   21                    3      1    4
2            14     3    15                    7      1    9
3            12     0    10                    12     1    5
4            9      0    9                     13     1    9
5            10     0    7                     11     0    13
6            4      0    6                     18     0    10
7            2      0    2                     16     0    12
8            2      0    1                     17     1    11
9            1      0    1                     11     0    19
10           2      0    0                     12     0    15
11           0      0    1                     7      0    21

Table 5.19: Number of times best heuristic for each of the Right-Hand Side Combinations for 10 classes, 10 items, 10 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            22     14   23                    2      3    2
2            12     2    14                    7      5    4
3            9      1    10                    11     3    4
4            7      0    7                     14     3    6
5            5      0    5                     14     3    8
6            2      0    1                     19     4    6
7            1      0    2                     18     1    10
8            2      0    2                     14     1    14
9            0      0    1                     14     0    16
10           1      0    0                     17     0    13
11           0      0    0                     15     0    15

Table 5.20: Number of times best heuristic for each of the Right-Hand Side Combinations for 25 classes, 10 items, 10 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            29     19   28                    0      1    0
2            21     4    21                    3      4    1
3            15     0    12                    9      3    5
4            7      0    6                     15     2    6
5            3      0    3                     19     0    8
6            3      0    3                     22     0    5
7            2      0    2                     23     0    4
8            1      0    1                     22     0    7
9            0      0    0                     21     0    9
10           0      0    0                     20     0    10
11           0      0    0                     14     0    16

Table 5.21: Number of times best heuristic for each of the Right-Hand Side Combinations for 5 classes, 10 items, 25 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            11     2    10                    10     2    9
2            14     0    6                     20     0    7
3            8      0    5                     15     0    12
4            6      0    1                     16     0    12
5            5      0    3                     16     2    11
6            3      0    3                     17     0    13
7            2      0    3                     13     0    14
8            2      0    2                     14     0    14
9            1      0    2                     9      0    17
10           0      0    1                     7      0    18
11           1      0    1                     5      0    18
12           0      0    1                     7      0    17
13           0      0    1                     5      0    16
14           0      0    0                     4      0    15
15           0      0    0                     3      0    15
16           0      0    1                     1      0    12
17           0      0    0                     2      0    10
18           0      0    0                     3      0    7
19           0      0    0                     1      0    5
20           0      0    0                     1      0    5
21           0      0    0                     0      0    4
22           1      0    0                     3      0    3
23           0      0    0                     1      0    1
24           0      0    0                     1      0    1
25           0      0    0                     0      0    1
26           0      0    0                     0      0    1

Table 5.22: Number of times best heuristic for each of the Right-Hand Side Combinations for 10 classes, 10 items, 25 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            14     9    16                    6      0    6
2            13     1    12                    9      1    10
3            10     0    8                     19     0    5
4            5      0    6                     16     1    9
5            8      0    5                     17     0    9
6            4      0    4                     17     0    10
7            2      0    3                     15     0    13
8            2      0    2                     17     0    12
9            0      0    1                     13     0    17
10           1      0    1                     15     0    13
11           0      0    0                     11     0    19
12           0      0    0                     12     0    18
13           0      0    0                     9      0    20
14           0      0    0                     11     0    19
15           0      0    0                     10     0    20
16           0      0    0                     5      0    25
17           0      0    0                     5      0    24
18           0      0    0                     5      0    23
19           0      0    0                     3      0    25
20           0      0    0                     5      0    22
21           0      0    0                     0      0    27
22           0      0    0                     1      0    28
23           0      0    0                     2      0    25
24           0      0    0                     1      0    25
25           0      0    0                     0      0    26
26           0      0    0                     1      0    23

Table 5.23: Number of times best heuristic for each of the Right-Hand Side Combinations for 25 classes, 10 items, 25 knapsacks

             No. problems solved to optimal    No. of times best
Combination  MOSER  HEU  CPCCP                 MOSER  HEU  CPCCP
1            22     15   20                    3      3    2
2            12     1    10                    14     4    2
3            10     0    9                     16     3    3
4            6      0    3                     22     2    3
5            4      0    2                     26     2    0
6            0      0    0                     24     2    4
7            0      0    0                     23     1    6
8            0      0    0                     19     2    9
9            0      0    0                     22     1    7
10           0      0    0                     20     1    9
11           0      0    0                     20     1    9
12           0      0    0                     21     1    8
13           0      0    0                     20     1    9
14           0      0    0                     18     1    11
15           0      0    0                     18     0    12
16           0      0    0                     18     0    12
17           0      0    0                     17     0    13
18           0      0    0                     17     0    13
19           0      0    0                     12     0    18
20           0      0    0                     9      0    21
21           0      0    0                     9      0    21
22           0      0    0                     10     0    20
23           0      0    0                     10     0    20
24           0      0    0                     9      0    21
25           0      0    0                     5      0    25
26           0      0    0                     10     0    20

5.4.2 Analyses based on Correlation Structure

This section studies how the correlation structure of the problems affects heuristic performance. To study this effect, a regression analysis was carried out relating the correlation structure of the generated analytical test problems to the solution obtained by each heuristic. The hypothesis is:

H0: Correlation structure has no effect on the heuristic solution.

H1: Correlation structure has an effect on the heuristic solution.

The test statistic was calculated using Equation (5.2):

F* = MSR / MSE    (5.2)

The decision rules used are in Equations (5.3) and (5.4):

If F* ≤ F(1 − α; p − 1, n − p), conclude H0    (5.3)

If F* > F(1 − α; p − 1, n − p), conclude H1    (5.4)

where the level of significance α used is 0.1, p − 1 is the numerator degrees of freedom, and n − p is the denominator degrees of freedom. The legacy heuristic algorithms were tested on the new MMKP test problems. All significance levels used are for each individual test.
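The decision rule in Equations (5.2) through (5.4) can be sketched for the single-predictor case (p = 2). The data and the tabulated critical value below are illustrative, not taken from the dissertation's regressions.

```python
# Sketch of the overall F-test (Eqs. 5.2-5.4) for simple linear regression
# with one predictor (p = 2). Data and the critical value are illustrative.

def f_statistic(x, y):
    """Return F* = MSR/MSE for the regression of y on x."""
    n = len(y)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx                    # slope
    b0 = ybar - b1 * xbar             # intercept
    fitted = [b0 + b1 * xi for xi in x]
    ssr = sum((fi - ybar) ** 2 for fi in fitted)             # regression SS
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))   # error SS
    msr = ssr / 1                     # p - 1 = 1 numerator df
    mse = sse / (n - 2)               # n - p denominator df
    return msr / mse

# Correlation setting (predictor) vs. heuristic solution value (response).
rho = [0.0, 0.2, 0.4, 0.6, 0.8]
sol = [100, 104, 109, 113, 118]

f_star = f_statistic(rho, sol)
f_crit = 5.54   # F(0.90; 1, 3) from a standard table; illustrative lookup
print("reject H0" if f_star > f_crit else "fail to reject H0")
```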

Overall Regression Analysis

The regression analysis involved all the design combinations for each of the problem files. The regression model consists of the predictor variables and the response variable. The predictor variables for this model were the correlation values between the objective coefficients and the constraint coefficients for each of the constraints, while the response variable was the solution obtained by each of the heuristic approaches. The results of the overall regression analysis of heuristic solution performance versus correlation structure are tabulated in Table 5.24. The results indicate that the correlation structure does affect heuristic performance for all the problems with 5 knapsacks and 10 knapsacks. As the problem size increases to 25 knapsacks, the correlation effect diminishes.

Table 5.24: Hypothesis Test Results of the Influence of Correlation Structure on Heuristic Performance

Problem File  MOSER      HEU        CPCCP
5GP10IT5KP    Reject H0  Reject H0  Reject H0
10GP10IT5KP   Reject H0  Reject H0  Reject H0
25GP10IT5KP   Reject H0  Reject H0  Reject H0
5GP10IT10KP   Reject H0  Reject H0  Reject H0
10GP10IT10KP  Reject H0  Reject H0  Reject H0
25GP10IT10KP  Reject H0  Reject H0  Reject H0
5GP10IT25KP   FTR H0     Reject H0  Reject H0
10GP10IT25KP  FTR H0     FTR H0     Reject H0
25GP10IT25KP  Reject H0  Reject H0  Reject H0
FTR = Fail to Reject

Regression Analysis for each of the problem types

Results using each of the new MMKP test problem sets were also analyzed by the correlation structure combination setting. The hypothesis for each of the problem types is:

H0: Correlation structure has no effect on the heuristic solution.

H1: Correlation structure has an effect on the heuristic solution.

The test statistic was calculated using Equation (5.5):

F* = MSR / MSE    (5.5)

The decision rules used are in Equations (5.6) and (5.7):

If F* ≤ F(1 − α; p − 1, n − p), conclude H0    (5.6)

If F* > F(1 − α; p − 1, n − p), conclude H1    (5.7)

where the level of significance α used is 0.1, p − 1 is the numerator degrees of freedom, and n − p is the denominator degrees of freedom. The predictor variables for this model were the correlation values between the objective and the constraint coefficients for each of the constraints, while the response variable was the solution obtained by each of the heuristic approaches. Table 5.25 contains the results using the new MMKP test sets with 5 knapsacks by right-hand side combination settings.

The tabulated results in Table 5.25 indicate that for Combination 1 with 5 knapsacks, where the constraints are loose and the problem is easy to solve, the correlation structure has no effect on the heuristic performance of MOSER, HEU, and CPCCP for 5 and 25 classes. As multiple constraints get tighter, the correlation structure affects heuristic performance, indicating an interaction effect between constraint settings and correlation.

Table 5.26 summarizes the analyses of the effect of the correlation structure on heuristic performance based on the constraint tightness settings for the problems with 10 knapsacks. The results indicate that for Combination 1, where the problems are easy to solve, correlation structure does not affect heuristic performance. As the constraints get tighter, the performance of MOSER and HEU is less affected by the correlation structure, while CPCCP's performance is affected. These results indicate that problem structure affects heuristic performance.

The regression analysis of the correlation structure's effect on legacy heuristic performance, based on the constraint tightness settings for the problems with 25 knapsacks and varying classes, is tabulated in Table 5.27. The results indicate a reduction in the problem structure effects on heuristic performance for these larger problems with 25 knapsacks.

Table 5.25: Hypothesis Test Results of the Influence of the Correlation Structure on Heuristic Performance based on Constraint Tightness for MMKP problems with 5 knapsacks, 10 items, and varied classes of 5, 10, and 25

             5GP10IT5KP                       10GP10IT5KP                      25GP10IT5KP
Combination  MOSER      HEU        CPCCP      MOSER      HEU        CPCCP      MOSER      HEU        CPCCP
1            FTR H0     FTR H0     FTR H0     FTR H0     Reject H0  Reject H0  FTR H0     FTR H0     FTR H0
2            Reject H0  Reject H0  Reject H0  FTR H0     Reject H0  Reject H0  Reject H0  Reject H0  Reject H0
3            Reject H0  Reject H0  Reject H0  FTR H0     Reject H0  Reject H0  Reject H0  Reject H0  Reject H0
4            Reject H0  FTR H0     Reject H0  Reject H0  FTR H0     FTR H0     Reject H0  Reject H0  Reject H0
5            Reject H0  Reject H0  Reject H0  Reject H0  Reject H0  Reject H0  Reject H0  Reject H0  Reject H0
6            Reject H0  -          Reject H0  Reject H0  Reject H0  Reject H0  Reject H0  FTR H0     Reject H0
FTR = Fail to Reject
- Solution not found

Table 5.26: Hypothesis Test Results of the Influence of the Correlation Structure on Heuristic Performance based on Constraint Tightness for MMKP problems with 10 knapsacks, 10 items, and varied classes of 5, 10, and 25

             5GP10IT10KP                      10GP10IT10KP                     25GP10IT10KP
Combination  MOSER      HEU        CPCCP      MOSER      HEU        CPCCP      MOSER      HEU        CPCCP
1            FTR H0     Reject H0  FTR H0     FTR H0     FTR H0     FTR H0     FTR H0     FTR H0     FTR H0
2            FTR H0     FTR H0     Reject H0  Reject H0  Reject H0  FTR H0     FTR H0     Reject H0  FTR H0
3            FTR H0     FTR H0     Reject H0  FTR H0     Reject H0  FTR H0     FTR H0     Reject H0  FTR H0
4            FTR H0     FTR H0     Reject H0  Reject H0  Reject H0  FTR H0     FTR H0     FTR H0     FTR H0
5            FTR H0     FTR H0     FTR H0     FTR H0     Reject H0  Reject H0  FTR H0     FTR H0     Reject H0
6            FTR H0     FTR H0     Reject H0  FTR H0     Reject H0  Reject H0  FTR H0     FTR H0     Reject H0
7            FTR H0     FTR H0     Reject H0  FTR H0     FTR H0     Reject H0  Reject H0  FTR H0     Reject H0
8            FTR H0     FTR H0     Reject H0  FTR H0     FTR H0     Reject H0  Reject H0  -          Reject H0
9            FTR H0     -          Reject H0  FTR H0     -          Reject H0  FTR H0     -          Reject H0
10           FTR H0     -          Reject H0  FTR H0     -          Reject H0  FTR H0     -          FTR H0
11           FTR H0     -          Reject H0  FTR H0     -          Reject H0  FTR H0     -          Reject H0
FTR = Fail to Reject
- Solution not found

5.5 Summary

This chapter examined the test problem generation methods for the available MMKP test sets and our new MMKP test sets. The new MMKP test sets were generated by varying problem characteristics such as the number of classes, number of items, number of knapsacks, constraint right-hand side setting, and correlation structure. This chapter then examined the performance of each of the three legacy MMKP heuristics (MOSER, HEU, and CPCCP) based on problem structure: constraint right-hand side setting and correlation structure. A total of 180 problems were generated for the problem sets with 5 knapsacks, 330 for the problem sets with 10 knapsacks, and 780 for the problem sets with 25 knapsacks; in all, 3870 MMKP problem instances were generated. The research considered which of the heuristics yielded the best solution under varying conditions. The heuristics were empirically analyzed based on the constraint right-hand side settings and the correlation structure.

The empirical analyses based on the constraint right-hand side settings indicate that the performance of MOSER and CPCCP is comparable when all the problem constraints are loose. As multiple constraints get tight, MOSER generally outperforms CPCCP and HEU. For the problems with all tight constraints, CPCCP is the clear winner, meaning CPCCP is preferred when all constraints are equivalently defined.

Regression analysis was used to study the effect of the correlation structure on heuristic performance. The results indicated that the problem correlation structure does affect heuristic performance. Caveats to this conclusion include little effect while problem constraints are loose and a diminished interaction between problem correlation structure and constraint tightness as the number of knapsacks increases.

Table 5.27: Hypothesis Test Results of the Influence of the Correlation Structure on Heuristic Performance based on Constraint Tightness for MMKP problems with 25 knapsacks, 10 items, and varied classes of 5, 10, and 25

             5GP10IT25KP                      10GP10IT25KP                     25GP10IT25KP
Combination  MOSER      HEU        CPCCP      MOSER      HEU        CPCCP      MOSER      HEU        CPCCP
1            FTR H0     FTR H0     Reject H0  FTR H0     FTR H0     FTR H0     FTR H0     FTR H0     Reject H0
2            FTR H0     FTR H0     FTR H0     FTR H0     FTR H0     Reject H0  Reject H0  FTR H0     Reject H0
3            FTR H0     FTR H0     FTR H0     Reject H0  FTR H0     Reject H0  Reject H0  FTR H0     FTR H0
4            FTR H0     FTR H0     Reject H0  Reject H0  FTR H0     Reject H0  FTR H0     FTR H0     FTR H0
5            FTR H0     Reject H0  FTR H0     FTR H0     -          Reject H0  FTR H0     FTR H0     FTR H0
6            FTR H0     Reject H0  FTR H0     FTR H0     -          Reject H0  Reject H0  FTR H0     FTR H0
7            FTR H0     Reject H0  Reject H0  FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
8            Reject H0  Reject H0  FTR H0     FTR H0     -          FTR H0     Reject H0  FTR H0     FTR H0
9            Reject H0  Reject H0  FTR H0     FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
10           FTR H0     Reject H0  FTR H0     FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
11           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
12           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
13           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
14           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     FTR H0     FTR H0
15           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
16           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
17           FTR H0     -          FTR H0     Reject H0  -          FTR H0     FTR H0     -          FTR H0
18           FTR H0     -          Reject H0  FTR H0     -          FTR H0     FTR H0     -          FTR H0
19           FTR H0     -          Reject H0  FTR H0     -          FTR H0     FTR H0     -          FTR H0
20           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
21           -          -          FTR H0     Reject H0  -          FTR H0     FTR H0     -          FTR H0
22           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
23           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
24           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
25           -          -          FTR H0     -          -          FTR H0     FTR H0     -          FTR H0
26           FTR H0     -          FTR H0     FTR H0     -          FTR H0     FTR H0     -          FTR H0
FTR = Fail to Reject
- Solution not found

6. New Greedy Heuristics for the MMKP

6.1 Introduction

This chapter presents new heuristic approaches for solving the MMKP based on the insights gained from the previous empirical study. The first heuristic introduced is a TYPE-based heuristic: TYPE pre-processes a problem and uses that problem-specific knowledge to obtain computational efficiencies. The second heuristic is developed by solving the relaxed MMKP as a modified MDKP, using a greedy heuristic to obtain a good initial solution, which is then improved by a local improvement phase. The third heuristic extends the second heuristic by adding a more aggressive local improvement phase.

6.2 A TYPE-based Heuristic for the MMKP

A TYPE-based heuristic is a heuristic designed to pick a likely best performer among a suite of heuristics by pre-processing a problem. The pre-processing involves determining problem characteristics such as constraint tightness, problem correlation structure, and problem size, and then selecting the heuristic approach most likely to produce the best solution from a given collection of heuristic approaches. Such an approach was suggested by Loulou and Michaelides (1979) and first realized by Cho et al. (2003b) for the MDKP.

In this research, the three legacy heuristic approaches MOSER, HEU, and CPCCP are used as the suite of MMKP legacy heuristics. Based on the empirical study in Chapter 5, the best performer among these three heuristics was determined for each problem type and constraint right-hand side (slackness) combination. For the problems with 5 knapsacks, MOSER is the best performer when multiple constraints are tight. When all the constraints are tight, CPCCP performs the best. MOSER and CPCCP perform the same for loose constraints. For the problems with 10 knapsacks, MOSER is the best for problems having fewer than eight tight constraints, while CPCCP performs the best when eight or more of the constraints are tight. For problems with loose constraints, MOSER and CPCCP again perform the same. Figure 6.1 flowcharts the TYPE-based heuristic as defined and implemented.

Table 6.1: MMKP Problems with 5 Knapsacks, Different Right-Hand Side Combinations, and Best Heuristic

             Problem File
Combination  5GP10IT5KP   10GP10IT5KP  25GP10IT5KP
1            HEU(CPCCP)   CPCCP        CPCCP
2            MOSER        MOSER        HEU(MOSER)
3            MOSER        MOSER        MOSER
4            MOSER        MOSER        MOSER
5            MOSER/CPCCP  MOSER        MOSER
6            CPCCP        MOSER        CPCCP

Tables 6.1 and 6.2 summarize the best heuristic for each setting of problem type and right-hand side combination for the problems with 5 and 10 knapsacks, respectively, across the varying numbers of classes. The results in these tables are used to specify the TYPE heuristic, a heuristic that uses problem characteristics to pick a specific heuristic. Figure 6.1 details the TYPE heuristic.
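As an illustration, the selection step reduces to a table lookup once the empirical results are encoded. The mapping below encodes Table 6.1 (the 5-knapsack problems, with ties resolved to the first-listed heuristic); the function and table names are stand-ins, not the dissertation's implementation.

```python
# Illustrative sketch of the TYPE-based selection step. The lookup encodes
# Table 6.1 (5-knapsack problems); names are stand-ins for the actual
# MOSER/HEU/CPCCP routines and pre-processor.

# (classes, RHS combination) -> heuristic chosen for 5-knapsack problems
BEST_5KP = {
    (5, 1): "HEU",   (10, 1): "CPCCP", (25, 1): "CPCCP",
    (5, 2): "MOSER", (10, 2): "MOSER", (25, 2): "HEU",
    (5, 3): "MOSER", (10, 3): "MOSER", (25, 3): "MOSER",
    (5, 4): "MOSER", (10, 4): "MOSER", (25, 4): "MOSER",
    (5, 5): "MOSER", (10, 5): "MOSER", (25, 5): "MOSER",
    (5, 6): "CPCCP", (10, 6): "MOSER", (25, 6): "CPCCP",
}

def type_select(num_classes, rhs_combination):
    """Pre-process: map problem characteristics to the likely best heuristic."""
    return BEST_5KP[(num_classes, rhs_combination)]

print(type_select(25, 6))  # CPCCP: best when all constraints are tight here
```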

Analysis of the TYPE-based Heuristic Results for the MMKP

The TYPE-based heuristic was tested on the analytical MMKP problems discussed in Section 5.2.2.

Table 6.2: MMKP Problems with 10 Knapsacks, Different Right-Hand Side Combinations, and Best Heuristic

             Problem File
Combination  5GP10IT10KP  10GP10IT10KP       25GP10IT10KP
1            CPCCP        HEU(MOSER/CPCCP)   HEU(MOSER/CPCCP)
2            CPCCP        MOSER              HEU(MOSER)
3            MOSER        MOSER              MOSER
4            MOSER        MOSER              MOSER
5            CPCCP        MOSER              MOSER
6            MOSER        MOSER              MOSER
7            MOSER        MOSER              MOSER
8            MOSER        MOSER/CPCCP        MOSER
9            CPCCP        CPCCP              MOSER
10           CPCCP        MOSER              MOSER
11           CPCCP        MOSER/CPCCP        CPCCP

Figure 6.1: Flowchart for TYPE-based Heuristic

Table 6.3: Number of Times Equal to Best of the Legacy Heuristic Solutions according to Problem Type

Problem File  MOSER  HEU  CPCCP  TYPE
MMKP01        109    48   113    119
MMKP02        133    55   108    131
MMKP03        115    53   115    119
MMKP04        191    25   197    219
MMKP05        204    41   161    204
MMKP06        246    36   152    227

Table 6.3 summarizes the best heuristics under constraint tightness and problem type, and these results are plotted graphically in Figure 6.2. The data for the TYPE-based heuristic performance include ties, to demonstrate its performance relative to the legacy MMKP heuristics. The TYPE-based MMKP heuristic achieves performance comparable to the best legacy heuristic using some computationally inexpensive pre-processing to select a heuristic, versus running every heuristic and returning the best solution.

6.3 New Greedy Heuristic Version 1 (CH1)

A new greedy heuristic for solving the MMKP was developed. This heuristic first solves a relaxed version of the MMKP to obtain a good initial feasible solution; the greedy heuristic NG V3 (Cho 2005) is used to solve the resulting MDKP. A final solution is obtained by applying a local improvement phase to the initial solution. The subsequent subsections explain the NG V3 heuristic and the implementation of the new greedy heuristic for the MMKP (CH1).

6.3.1 NG V3 Heuristic (Cho 2005)

Figure 6.2: Comparison of TYPE-based Heuristic according to Problem Type

Cho (2005) conducted an empirical study of the MDKP and developed three new improved gradient heuristics that exploit the insights of that study. NG V1 improves the solution trajectory through the feasible region. NG V2 modifies a delayed constraint weighting scheme to suit any constraint slackness setting. NG V3 combines aspects of NG V1 and NG V2: it improves the effective gradient function using a lognormal distribution, so that the heuristic responds better to various combinations of constraint slackness settings and correlation structures. NG V3 analyzes the problem characteristics and pre-weights the dominant constraint so that resources are used more effectively starting at the initial iteration. Computational results comparing these new gradient heuristics with a variety of legacy MDKP heuristics showed that NG V3 outperformed all the legacy heuristics considered (Cho 2005). The flowchart for the NG V3 heuristic (Cho 2005) is shown in Figure 6.3.

Figure 6.3: Flow Chart of NG V3 Heuristic (Cho 2005)

Cho (2005) computes the pre-weighting parameter for each knapsack constraint by Equation (6.1):

PreWeight(i) = exp(ρ_CAi) × exp(1/r_i)    (6.1)

where r_i = Σ_{j=1..n} a_ij − b_i is the surplus resource of constraint i, and ρ_CAi is the correlation between the objective coefficients and the ith constraint coefficients. At each iteration, NG V3 selects the item with the largest effective gradient G_j = c_j / v_j while maintaining feasibility. The penalty cost function, v_j, is given by

v_j = (A_j · W) / |W|    (6.2)

where A_j is the vector of resource costs for item j and W is the m-dimensional weight vector given by Equation (6.3):

W = P_w · exp(σ Φ⁻¹(P_u))    (6.3)

where Φ⁻¹ is the inverse of the standard normal distribution function, the vector P_u is the resource usage of the current solution, and P_w is the vector of pre-weights for each knapsack constraint.
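As a concrete sketch of Equations (6.1) through (6.3): the correlations, resource data, σ value, and candidate items below are made up for illustration, and `statistics.NormalDist` stands in for the inverse normal Φ⁻¹.

```python
import math
from statistics import NormalDist

# Sketch of Eqs. (6.1)-(6.3) on made-up data; the correlations, surpluses,
# usage fractions, and sigma are illustrative, not Cho's (2005) settings.

def pre_weights(rho, surplus):
    """Eq. (6.1): PreWeight(i) = exp(rho_i) * exp(1/r_i)."""
    return [math.exp(p) * math.exp(1.0 / r) for p, r in zip(rho, surplus)]

def weights(pw, usage, sigma=1.0):
    """Eq. (6.3): W = Pw * exp(sigma * Phi^{-1}(Pu)), element-wise."""
    inv = NormalDist().inv_cdf
    return [p * math.exp(sigma * inv(u)) for p, u in zip(pw, usage)]

def penalty(a_j, w):
    """Eq. (6.2): v_j = (A_j . W) / |W|."""
    dot = sum(a * wi for a, wi in zip(a_j, w))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return dot / norm

# Two knapsack constraints: correlation with objective, surplus resource.
pw = pre_weights(rho=[0.3, -0.1], surplus=[40.0, 10.0])
w = weights(pw, usage=[0.5, 0.8])        # fraction of each resource in use
g = [c / penalty(a, w) for c, a in       # effective gradient G_j = c_j/v_j
     zip([60, 45], [[8, 3], [5, 2]])]    # two candidate items
best = max(range(2), key=lambda j: g[j])
```

Note how the tighter, more heavily used second constraint raises the penalty of items that consume it, steering selection away from it.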

6.3.2 CH1 Implementation

CH1, the first version of the new greedy heuristic for the MMKP, uses two phases. Phase I generates an initial solution using NG V3, and Phase II improves the Phase I solution using a local search process.

Phase I

Step 1: Treat the MMKP as the MDKP problem by relaxing the class constraints of the MMKP.

Step 2: Call NG V3 to select an item from the given set of available items.

Step 3: Identify the class to which the selected item belongs. Disable all the other items in this class from selection.

Step 4: Repeat Step 2 until no more items are available for selection or one item has been selected from every class.

The solution obtained at the end of Step 4 is an initial MMKP solution. This solution is the starting solution for the Phase II improvement phase.

Phase II

Step 1: For every class, consider exchanging each item with the currently selected item. Check the feasibility of the potential solution. If the solution is feasible, check whether its objective function value exceeds that of the previous feasible solution.

Step 2: If the new solution is feasible and yields an improvement in the objective function value then swap the new considered item with the old item from the same class (i.e., implement the potential solution).

Step 3: Repeat Steps 1 and 2 for all the items in all the classes until the stopping criterion is satisfied. The stopping criterion used here is a limit on the number of iterations.

The solution obtained at the end of Phase II is the solution returned for the MMKP problem.
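The two phases can be sketched as follows on a tiny made-up instance. A plain profit-to-resource greedy stands in for the NG V3 selection step, so this is an illustration of the phase structure rather than the implemented CH1.

```python
# Sketch of CH1's two phases on a tiny MMKP instance. A profit/weight
# greedy stands in for NG V3; instance data are illustrative.

def feasible(sel, items, cap):
    """Check knapsack constraints for a partial selection {class: item}."""
    use = [0.0] * len(cap)
    for cls, j in sel.items():
        for k, a in enumerate(items[cls][j][1]):
            use[k] += a
    return all(u <= b for u, b in zip(use, cap))

def phase1(items, cap):
    """Relax class constraints, pick greedily, disable the rest of a class."""
    sel, open_classes = {}, set(range(len(items)))
    while open_classes:
        cand = [(c / (1 + sum(a)), cls, j)
                for cls in open_classes
                for j, (c, a) in enumerate(items[cls])
                if feasible({**sel, cls: j}, items, cap)]
        if not cand:
            break
        _, cls, j = max(cand)        # greedy stand-in for NG V3's choice
        sel[cls] = j
        open_classes.remove(cls)     # disable remaining items in this class
    return sel

def phase2(sel, items, cap, iters=100):
    """Swap items within classes whenever a swap is feasible and improves."""
    value = lambda s: sum(items[cls][j][0] for cls, j in s.items())
    for _ in range(iters):           # iteration-count stopping criterion
        improved = False
        for cls in sel:
            for j in range(len(items[cls])):
                trial = {**sel, cls: j}
                if feasible(trial, items, cap) and value(trial) > value(sel):
                    sel, improved = trial, True
        if not improved:
            break
    return sel

# items[class] = [(profit, [resource costs]), ...]; pick one item per class
items = [[(10, [3, 4]), (7, [2, 2])],
         [(8, [4, 1]), (6, [1, 3])]]
cap = [6, 6]
sol = phase2(phase1(items, cap), items, cap)
```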

6.3.3 Empirical Tests for the CH1 Implementation

The CH1 heuristic approach was tested and compared to the legacy heuristics based on Khan’s benchmark instances and the new extended MMKP test sets. The results are reported below.

Computational Results using Khan’s Test Problem Instances

The CH1 heuristic approach was tested on Khan's 13 available test problem instances. The number of iterations for the improvement phase was set to 100. Table 6.4 tabulates the solution quality and computational time in milliseconds for CH1 and the legacy heuristics considered. The solution quality of CH1 is better than that of the legacy heuristics, although the computational time is higher. Table 6.5 summarizes the competitive results on the same test instances. CH1 yields better solutions than the other heuristics. Table 6.6 records the average percentage relative error for each of these heuristics. These results indicate that CH1 yields lower average relative errors than the legacy heuristics across the full range of these available problems.

CH1 performance was compared to legacy heuristic performance on the additional 24 test problem instances generated via Khan et al.'s (2002) procedure. Tables 6.7 and 6.8 summarize these results. Table 6.7 indicates that only CPCCP and CH1 always found feasible solutions, an important characteristic for an effective heuristic. The solution times for CH1 were the lowest among all heuristics, admittedly due to the limit on local improvement iterations. The Table 6.8 results indicate that MOSER, HEU, and CH1 are comparable in terms of yielding a best solution. However, a closer look at the Table 6.7 results indicates that when MOSER or HEU yielded a better solution, the improvement was generally small, required significantly more time, and came from a heuristic that could fail to find a feasible solution. Overall, CH1 fared quite well.

Computational Results using New MMKP Test Sets

The new extended MMKP test sets were also used to test CH1 performance against legacy heuristic performance. There are 9 files with 30 problems in each file, a total of 270 problem instances. The results are tabulated in Tables 6.9 and 6.10.

Figures 6.4 and 6.5 give a pictorial comparison of the heuristic performances based on the number of problems solved to optimal and the number of times each yielded the best solution on the new MMKP test sets, respectively. The results indicate that the CH1 heuristic solves the problems to optimal the most often in all the test sets except MMKP09. As the problem size increases, the number of problems solved to optimal decreases. MOSER performs the best for MMKP01, MMKP02 and MMKP03, while CH1 outperforms all legacy heuristics for MMKP04 to MMKP09. The number of times best by CH1 increases as the size of the problem increases. The average percentage relative error for CH1 is the least among all the heuristics. As the problem size increases, the average relative percentage error for CH1 decreases, which is an important behavior of a heuristic, particularly

Table 6.4: Solution Quality for CH1 on Khan’s 13 Available Test Problems

Problem File Exact MOSER Time HEU Time CP CCP Time CH1 (Initial) CH1 (Final) Time
I01 173 - - 154 62 133 159 47 149 167 15
I02 364 294 125 354 109 305 312 78 311 332 47
I03 1602 1127 297 1518 453 1317 1407 172 1331 1509 79
I04 3597 2906 625 3297 562 3209 3322 234 3163 3369 172
I05 3905.7 1068.3 1062 3894.5 1265 3695.7 3889.9 234 3699 3905 219
I06 4799.3 1999.5 1657 4788.2 1219 4527.7 4723.1 360 4426 4689 281
I07 24587* 20833 20594 - - 21166 23237 1516 19002 23529 2531
I08 36877* 31643 59907 34338 8453 31651 35403 3266 28784 35691 5953
I09 49167* - - - - 41960 47154 5562 38431 47687 9968
I10 61437* - - - - 52737 58990 7547 48613 59703 15484
I11 73773* - - - - 63511 70685 9359 58574 71761 22922
I12 86071* - - - - 74220 82754 12141 67889 83701 30781
I13 98429* - - - - 84665 94465 22031 77214 95432 41765
- Solution not found
* LP Bound
Time in ms

Table 6.5: Performance Summary of the CH1 with Legacy Heuristics on Khan’s 13 Available Test Problems

Heuristic Times Best Times Optimal | Number of times better than: MOSER HEU CPCCP CH1
MOSER 0 0 | - 1 0 0
HEU 3 0 | 7 - 4 3
CPCCP 0 0 | 13 9 - 1
CH1 10 0 | 13 10 12 -
13 total problems

Table 6.6: Percentage Relative Error by CH1 and each Legacy Heuristic for the Khan’s 13 Available Test Problems

Problem File MOSER HEU CPCCP CH1
I01 - 10.98 8.09 3.47
I02 19.23 2.75 14.29 8.79
I03 29.65 5.24 12.17 5.81
I04 19.21 8.34 7.65 6.34
I05 72.65 0.29 0.40 0.02
I06 58.34 0.23 1.59 2.30
I07 15.27 - 5.49 4.30
I08 14.19 6.89 4.00 3.22
I09 - - 4.09 3.01
I10 - - 3.98 2.82
I11 - - 4.19 2.73
I12 - - 3.85 2.75
I13 - - 4.03 3.04
Total Average 32.65 4.96 5.68 3.74
- Relative Error cannot be calculated since no solution found

Table 6.7: Solution Quality for CH1 on Additional Test Problems generated via Khan’s Approach

Prob No Exact MOSER Time HEU Time CPCCP Time CH1 (Initial) CH1 (Final) Time
1 1832 1832 203 - - 1832 16 1723 1832 31
2 2147 - - 2027 219 2147 16 2147 2147 16
3 2020 1877 203 - - 1863 16 1576 1872 47
4 3993 3913 103 - - 3932 47 3475 3727 63
5 4326 4282 235 4241 579 4282 62 3947 4326 62
6 4259 4256 313 4044 704 4259 515 3124 4259 31
7 6335 6275 750 6275 390 6275 46 5291 6275 31
8 6606 6295 671 6420 797 6250 468 5310 6382 141
9 6662 6536 703 6437 759 6495 125 5589 6491 141
10 8360 8034 360 8267 766 8253 110 7215 8253 125
11 8799 8625 531 8774 781 8673 109 7992 8738 188
12 8748 8467 391 8642 969 8475 172 7654 8374 187
13 9746 9745 688 9655 875 9723 250 8796 9723 94
14 10781 10647 406 10731 984 10276 219 8660 10295 125
15 11011 9939 687 10868 891 10787 266 9231 10659 187
16 12509 12509 62 12403 812 12509 578 11166 12509 156
17 12963 12756 516 12883 1047 12700 609 11066 12766 188
18 13310 13074 1547 13278 766 12971 640 11199 12994 234
19 14508 14431 890 - - 14298 234 12572 14298 172
20 15438 - - 15427 1156 15051 672 13035 15163 343
21 15539 15439 1484 15513 1437 15454 781 13362 15454 297
22 16636 16476 750 16556 641 16439 234 14155 16439 234
23 17353 17228 1406 17276 875 16793 797 15003 17091 313
24 18010 - - 17907 1609 17737 719 14121 17824 406
- Solution not found
Time in ms

Table 6.8: Performance Summary of the CH1 with Legacy Heuristics on Additional Test Problems generated via Khan’s Approach

Heuristic Times Best Times Optimal | Number of times better than: MOSER HEU CPCCP CH1
MOSER 4 2 | - 9 10 10
HEU 13 0 | 14 - 13 13
CPCCP 1 4 | 11 10 - 4
CH1 1 5 | 11 10 10 -
24 total problems

Table 6.9: Performance Summary of the CH1 with Legacy Heuristics on New MMKP Test Sets

Problem File | No. problems solved to optimal: MOSER HEU CPCCP CH1 | No. of times best: MOSER HEU CPCCP CH1
MMKP01 | 12 2 11 15 | 6 2 1 6
MMKP02 | 7 2 8 8 | 12 3 4 7
MMKP03 | 10 3 8 10 | 7 4 4 3
MMKP04 | 5 0 4 4 | 5 0 4 12
MMKP05 | 5 0 3 4 | 11 1 1 13
MMKP06 | 5 0 3 3 | 10 0 5 12
MMKP07 | 1 0 2 4 | 6 0 8 14
MMKP08 | 1 0 1 1 | 1 0 5 22
MMKP09 | 0 0 0 0 | 3 0 1 26
30 problems in each file

Table 6.10: Percentage Relative Error by CH1 and each Legacy Heuristic for the New MMKP Test Sets

Problem File MOSER HEU CPCCP CH1
MMKP01 3.61 4.59 (25) 3.49 1.95
MMKP02 2.97 4.31 (20) 3.73 3.12
MMKP03 2.71 2.58 (21) 3.30 2.74
MMKP04 13.41 30.28 (29) 8.77 5.01
MMKP05 4.42 6.33 (28) 6.20 3.77
MMKP06 3.84 10.44 (29) 4.94 3.19
MMKP07 10.18 (12) - 11.98 8.04
MMKP08 11.68 - 10.31 3.16
MMKP09 6.91 10.16 (29) 8.57 2.89
Total Average 6.64 9.84 6.81 3.76
- Relative Error cannot be calculated since no solution found
(n) Heuristic fails to find solutions for n of the problems

Figure 6.4: Comparison of CH1 based on Number of Problems Solved to Optimal on New MMKP Test Sets

in generalizing the heuristic to solve actual instances, since these instances are typically the large problems.

Table 6.11 gives the computational time in milliseconds for each of the heuristics. As the problem size increases, the average computational time increases. The computational time for CH1 is somewhat higher in comparison to HEU and CPCCP, but CH1 gives improved performance.

6.4 New Greedy Heuristic Version 2 (CH2)

CH2 extends the CH1 heuristic, providing a more rigorous search in the improvement phase. The implementation of CH2 is discussed below.

6.4.1 CH2 Implementation

The first phase of CH2 is the same as that of CH1. The improvement phase starts with a single swap: exchanging items within a class so long as the new solution is feasible and gives an improved solution. If there is no improvement in the solution quality for a certain number of iterations, the

Table 6.11: Computational Time in milliseconds by CH1 and each Legacy Heuristic for the New MMKP Test Sets

Problem File MOSER HEU CPCCP CH1
MMKP01 11.97 9.60 5.20 19.27
MMKP02 38.57 29.60 7.33 43.17
MMKP03 230.67 123.00 15.10 178.60
MMKP04 17.89 15.00 7.83 22.53
MMKP05 45.87 39.50 7.73 53.10
MMKP06 332.90 109.00 17.73 228.27
MMKP07 27.31 - 7.83 28.73
MMKP08 77.00 - 8.87 77.10
MMKP09 575.53 140.00 24.53 373.00
Total Average 150.86 66.53 11.35 113.75
- Computational Time cannot be calculated since no solution found

Figure 6.5: Comparison of CH1 based on Number of Times Equal to Best on New MMKP Test Sets

search is diversified by considering exchanges involving two classes simultaneously. This dual exchange continues until all class combinations are considered.

Phase II

Step 1: For every class, consider every item for an exchange. Check the feasibility of the potential solution. If the solution is feasible, check whether the objective function value of the new solution is greater than that of the previous feasible solution.

Step 2: If the new solution is feasible and yields an improvement in the objective function value, then swap the new considered item with the old item from the same class (i.e., implement the potential solution).

Step 3: If the solution does not improve for a certain fixed number of iterations piter, then call DoubleSwap().

Step 4: Repeat Steps 1 and 2 for all the items in all the classes until the stopping criterion is satisfied.

The stopping criterion used here is the number of iterations.

DoubleSwap()

Step 1: Consider two classes at a time and consider exchanging a pair of items, one from each of the selected classes.

Step 2: If the dual exchange yields a feasible and a better solution than the current solution, perform the swap between the items of the selected classes. Update this solution as the current solution.

Step 3: Repeat Steps 1 and 2 until all the combinations of classes and all the items in these selected classes have been considered, then stop.

The solution obtained at the end of Phase II is the final solution for the MMKP problem.
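The Phase II single-swap loop and the DoubleSwap() diversification can be sketched as follows. This is a minimal sketch rather than the dissertation's implementation: a solution is a list holding the selected item index for each class, profit[c][i] is the profit of item i of class c, weights[c][i][k] its use of resource k, and cap[k] the knapsack capacities; all names are illustrative.

```python
import itertools

def feasible(sel, weights, cap):
    """One selected item index per class; check every knapsack constraint."""
    return all(sum(weights[c][i][k] for c, i in enumerate(sel)) <= cap[k]
               for k in range(len(cap)))

def value(sel, profit):
    """Total profit of the selected items."""
    return sum(profit[c][i] for c, i in enumerate(sel))

def single_swap(sel, profit, weights, cap):
    """Phase II Steps 1-2: feasible, improving exchanges within each class."""
    improved = False
    for c in range(len(sel)):
        for i in range(len(profit[c])):
            cand = sel[:]
            cand[c] = i
            if feasible(cand, weights, cap) and value(cand, profit) > value(sel, profit):
                sel[:] = cand          # implement the improving exchange
                improved = True
    return improved

def double_swap(sel, profit, weights, cap):
    """DoubleSwap(): exchange a pair of items from two classes simultaneously,
    over all combinations of classes (Steps 1-3 of DoubleSwap())."""
    for c1, c2 in itertools.combinations(range(len(sel)), 2):
        for i in range(len(profit[c1])):
            for j in range(len(profit[c2])):
                cand = sel[:]
                cand[c1], cand[c2] = i, j
                if feasible(cand, weights, cap) and value(cand, profit) > value(sel, profit):
                    sel[:] = cand      # update the current solution
```

Calling double_swap once single_swap stalls for piter iterations mirrors Step 3 of Phase II; the dual exchange can improve selections a single within-class exchange cannot reach.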

6.4.2 Empirical Tests for the CH2 Implementation

The CH2 heuristic approach was tested and compared to the legacy heuristic approaches based on Khan’s available instances, the 24 additional problems generated, and the new MMKP test sets. The parameter piter was set to 5 and the number of iterations for the stopping condition of the Phase II improvement phase was set to 10.

Computational Results using Khan’s Test Problem Instances

The computational results comparing CH2 with CH1 and the legacy heuristics on Khan’s 13 available test problem instances are reported in Tables 6.12 and 6.13. The results indicate that CH2 has a higher computational time but outperforms the other heuristics. Table 6.14 indicates that the average relative error of CH2 is smaller than that of the legacy heuristics and CH1. Tables 6.15 and 6.16 summarize the CH2 results on the additional 24 test problem instances. Again, the computational time for CH2 is higher, but CH2 outperforms all the heuristics under study. This is important since these first two sets of problems represent the benchmark currently in use and CH2 performs the best.

Computational Results using New MMKP Test Sets

The computational results of all heuristics on the new MMKP test sets are reported in Tables 6.17 and 6.18. Figures 6.6 and 6.7 plot the number of problems solved to optimal and number of times

Table 6.12: Solution Quality for CH2 on Khan’s 13 Available Test Problems

Problem File Exact MOSER Time HEU Time CPCCP Time CH1 Time CH2 Time
I01 173 - - 154 62 159 47 167 15 173 31
I02 364 294 125 354 109 312 78 332 47 364 109
I03 1602 1127 297 1518 453 1407 172 1509 79 1572 797
I04 3597 2906 625 3297 562 3322 234 3369 172 3461 1609
I05 3905.7 1068.3 1062 3894.5 1265 3889.9 234 3905 219 3905 3360
I06 4799.3 1999.5 1657 4788.2 1219 4723.1 360 4689 281 4799 5954
I07 24587* 20833 20594 - - 23237 1516 23529 2531 23711 30703
I08 36877* 31643 59907 34338 8453 35403 3266 35691 5953 35816 109328
I09 49167* - - - - 47154 5562 47687 9968 47647 277782
I10 61437* - - - - 58990 7547 59703 15484 59351 741187
I11 73773* - - - - 70685 9359 71761 22922 71405 749297
I12 86071* - - - - 82754 12141 83701 30781 83174 1254937
I13 98429* - - - - 94465 22031 95432 41765 94934 2144750
- Solution not found
* LP Bound
Time in ms

Table 6.13: Performance Summary of All Heuristics on Khan’s 13 Available Test Problems

Heuristic Times Best Times Optimal | Number of times better than: MOSER HEU CPCCP CH1 CH2
MOSER 0 0 | - 1 0 0 0
HEU 0 0 | 7 - 4 3 0
CPCCP 0 0 | 13 9 - 1 0
CH1 5 0 | 13 10 12 - 5
CH2 7 2 | 13 13 13 7 -
13 total problems

Table 6.14: Percentage Relative Error of All Heuristics on Khan’s 13 Available Test Problems

Problem File MOSER HEU CPCCP CH1 CH2
I01 - 10.98 8.09 3.47 0.00
I02 19.23 2.75 14.29 8.79 0.00
I03 29.65 5.24 12.17 5.81 1.87
I04 19.21 8.34 7.65 6.34 3.78
I05 72.65 0.29 0.40 0.02 0.02
I06 58.34 0.23 1.59 2.30 0.01
I07 15.27 - 5.49 4.30 3.56
I08 14.19 6.89 4.00 3.22 2.88
I09 - - 4.09 3.01 3.09
I10 - - 3.98 2.82 3.40
I11 - - 4.19 2.73 3.21
I12 - - 3.85 2.75 3.37
I13 - - 4.03 3.04 3.55
Total Average 32.65 4.96 5.68 3.74 2.21
- Relative Error cannot be calculated since no solution found

Table 6.15: Solution Quality of All Heuristics on Additional Test Problems

Prob No Exact MOSER Time HEU Time CPCCP Time CH1 Time CH2 Time
1 1832 1832 203 - - 1832 16 1832 31 1832 47
2 2147 - - 2027 219 2147 16 2147 16 2147 62
3 2020 1877 203 - - 1863 16 1872 47 1919 78
4 3993 3913 103 - - 3932 47 3727 63 3982 141
5 4326 4282 235 4241 579 4282 62 4326 62 4326 219
6 4259 4256 313 4044 704 4259 515 4259 31 4259 266
7 6335 6275 750 6275 390 6275 46 6275 31 6269 250
8 6606 6295 671 6420 797 6250 468 6382 141 6543 391
9 6662 6536 703 6437 759 6495 125 6491 141 6662 610
10 8360 8034 360 8267 766 8253 110 8253 125 8290 438
11 8799 8625 531 8774 781 8673 109 8738 188 8774 781
12 8748 8467 391 8642 969 8475 172 8374 187 8745 1312
13 9746 9745 688 9655 875 9723 250 9723 94 9742 719
14 10781 10647 406 10731 984 10276 219 10295 125 10763 1406
15 11011 9939 687 10868 891 10787 266 10659 187 10979 2328
16 12509 12509 62 12403 812 12509 578 12509 156 12501 1078
17 12963 12756 516 12883 1047 12700 609 12766 188 12951 2312
18 13310 13074 1547 13278 766 12971 640 12994 234 13310 4203
19 14508 14431 890 - - 14298 234 14298 172 14506 1641
20 15438 - - 15427 1156 15051 672 15163 343 15422 3578
21 15539 15439 1484 15513 1437 15454 781 15454 297 15524 6454
22 16636 16476 750 16556 641 16439 234 16439 234 16633 2203
23 17353 17228 1406 17276 875 16793 797 17091 313 17307 5062
24 18010 - - 17907 1609 17737 719 17824 406 17959 10016
- Solution not found
Time in ms

Table 6.16: Performance Summary of the CH2 against Legacy Heuristics on Additional Test Problems

Heuristic Times Best Times Optimal | Number of times better than: MOSER HEU CPCCP CH1 CH2
MOSER 1 2 | - 9 10 10 3
HEU 1 0 | 14 - 13 13 2
CPCCP 0 4 | 11 10 - 4 2
CH1 0 5 | 11 10 10 - 2
CH2 15 6 | 20 21 19 18 -
24 total problems

Figure 6.6: Comparison of All Heuristics based on Number of Problems Solved to Optimal on New MMKP Test Sets

equal to best using the new MMKP test sets, respectively. These results clearly depict the superiority of CH2; it returns an optimal solution the most often and yields significantly better results than all other heuristics. Table 6.19 summarizes computational time. The aggressive local search of CH2 equates to larger computational times, but given the times are in milliseconds, they remain reasonable. Tables 6.20 and 6.21 summarize the percentage of the optimal solution value attained by each heuristic for all the test problem instances employed; CH2 is a clear winner based on these results. These results are particularly significant. The new MMKP test set better generalizes to cover actual problems. The dominant performance of CH2 means its performance on actual problems will be much better than the performance of currently available greedy heuristics.

6.5 Summary

The legacy heuristic approaches MOSER, HEU, and CPCCP are greedy approaches developed to solve the MMKP. These approaches obtain an initial solution and then use an improvement phase to try to improve that solution. Poor initial solutions and poorly developed improvement processes limit these legacy approaches.

Table 6.17: Performance Summary of the CH2 with Legacy Heuristics on New MMKP Test Sets

Problem File | No. problems solved to optimal: MOSER HEU CPCCP CH1 CH2 | No. of times best: MOSER HEU CPCCP CH1 CH2
MMKP01 | 12 2 11 15 17 | 2 1 0 0 8
MMKP02 | 7 2 8 8 17 | 1 1 1 1 16
MMKP03 | 10 3 8 10 13 | 0 0 0 0 19
MMKP04 | 5 0 4 4 18 | 1 0 0 1 18
MMKP05 | 5 0 3 4 15 | 1 0 0 0 23
MMKP06 | 5 0 3 3 8 | 1 0 0 0 25
MMKP07 | 1 0 2 4 11 | 1 0 2 0 22
MMKP08 | 1 0 1 1 5 | 0 0 0 3 23
MMKP09 | 0 0 0 0 2 | 0 0 1 1 28
30 problems in each file

Table 6.18: Percentage Relative Error of All Heuristics for the New MMKP Test Sets

Problem File MOSER HEU CPCCP CH1 CH2
MMKP01 3.61 4.59 (25) 3.49 1.95 0.49
MMKP02 2.97 4.31 (20) 3.73 3.12 0.26
MMKP03 2.71 2.58 (21) 3.30 2.74 0.26
MMKP04 13.41 30.28 (29) 8.77 5.01 0.79
MMKP05 4.42 6.33 (28) 6.20 3.77 0.38
MMKP06 3.84 10.44 (29) 4.94 3.19 0.45
MMKP07 10.18 (12) - 11.98 8.04 1.78
MMKP08 11.68 - 10.31 3.16 1.33
MMKP09 6.91 10.16 (29) 8.57 2.89 0.88
Total Average 6.64 9.84 6.81 3.76 0.73
- Relative Error cannot be calculated since no solution found
(n) Heuristic fails to find solutions for n of the problems

Figure 6.7: Comparison of All Heuristics based on Number of Times Equal to Best on New MMKP Test Sets

Figure 6.8: Comparison of All Heuristics based on Percentage Relative Error on New MMKP Test Sets

Table 6.19: Computational Time in milliseconds of All Heuristics on New MMKP Test Sets

Problem File MOSER HEU CPCCP CH1 CH2
MMKP01 11.97 9.60 5.20 19.27 48.43
MMKP02 38.57 29.60 7.33 43.17 241.67
MMKP03 230.67 123.00 15.10 178.60 3317.60
MMKP04 17.89 15.00 7.83 22.53 52.60
MMKP05 45.87 39.50 7.73 53.10 306.20
MMKP06 332.90 109.00 17.73 228.27 4217.70
MMKP07 27.31 - 7.83 28.73 68.37
MMKP08 77.00 - 8.87 77.10 464.13
MMKP09 575.53 140.00 24.53 373.00 8871.30
Total Average 150.86 66.53 11.35 113.75 1954.22
- Computational Time cannot be calculated since no solution found

Table 6.20: Comparison of All Heuristics based on Percentage of Optimum on the Khan’s MMKP Test Sets

Heuristic Khan’s 13 Test Problems Additional Test Problems
MOSER 67.35 97.90
HEU 95.04 98.58
CPCCP 94.32 98.04
CH1 96.26 98.06
CH2 97.79 99.59

This chapter developed and examined three new greedy heuristic approaches for solving the MMKP: TYPE, CH1, and CH2. The TYPE heuristic was designed to pick a likely best performer by pre-processing a problem based on insights from the empirical study of the legacy heuristics. The CH1 and CH2 approaches were developed to find good initial solutions using the Cho (2005) NG V3 approach. This better initial solution was used as a starting solution for an improvement phase to find a final solution. The CH1 and CH2 approaches varied in their improvement phases; CH2 was the more aggressive, additionally considering two swaps of items from two different classes at the same time. Testing and comparing these greedy heuristic approaches on the available and new MMKP test problem sets yielded solutions with smaller average percentage relative error (smaller values indicate better performance); each new heuristic outperforms the legacy heuristics. Results from this chapter are published in Hiremath and Hill (2007).

Table 6.21: Comparison of All Heuristics based on Percentage of Optimum on the New MMKP Test Sets

Problem File MOSER HEU CPCCP CH1 CH2
MMKP01 96.39 95.41 96.51 98.05 99.51
MMKP02 97.03 95.69 96.27 96.88 99.74
MMKP03 97.29 97.42 96.70 97.26 99.74
MMKP04 92.78 69.72 91.23 94.99 99.21
MMKP05 95.58 93.67 93.80 96.23 99.62
MMKP06 96.16 89.36 95.06 96.81 99.55
MMKP07 89.82 - 88.02 91.96 98.22
MMKP08 91.36 - 89.69 96.84 98.67
MMKP09 93.09 89.84 91.43 97.11 99.12
Average 94.39 90.16 93.19 96.24 99.27
- Percentage Optimum cannot be calculated since no solution found

7. Metaheuristic Solution Procedure for the MMKP

7.1 Introduction

Metaheuristics are approximate algorithms that combine basic heuristic methods in higher level frameworks to efficiently and effectively explore the search space (Blum and Roli 2003). The term metaheuristic, coined by Glover in 1986, is derived from the Greek words heuristics, meaning “to find,” and meta, meaning “beyond.” Various heuristic approaches mostly based on analogies with natural phenomena, like tabu search, genetic algorithms, and simulated annealing, were developed in the 1970s. Metaheuristics have been used to tackle many difficult combinatorial optimization problems. A good metaheuristic implementation is likely to solve a combinatorial optimization problem in a reasonable computation time (Gendreau and Potvin 2005), and empirical evidence suggests the solutions are generally of high quality.

This chapter presents tabu search (TS) approaches for solving the MMKP. The following sections provide a brief overview of the search neighborhood concept and TS, followed by a First-Level TS (FLTS) for solving the MMKP. This chapter then extends FLTS to a Sequential Fan Candidate List implementation (FanTabu) and the CPCCP Fan Candidate List implementation (CCFT) (Hifi et al. 2004). Subsequent sections present the computational results and empirical analyses of the TS approaches in comparison to the legacy heuristics based on the standard and generated MMKP test problem instances. The last section of this chapter compares the FLTS and its extensions to the Reactive Local Search (RLS) approach, which Hifi et al. (2006) note is an extension of CPCCP.

This work represents the first true use of TS for solving the MMKP.

7.2 Concept of a Search Neighborhood

A neighborhood structure is a function that assigns to every s ∈ S a set of neighbors N(s) ⊆ S. N(s) is called the neighborhood of s. The concept of a neighborhood structure helps define the concept of locally minimal solutions. A locally minimal solution, or a local minimum with respect to a neighborhood structure N, is a solution s∗ such that ∀s ∈ N(s∗): f(s∗) ≤ f(s).

A neighborhood search iteratively obtains a sequence of feasible solutions (x1, x2, ..., xk). At the kth iteration, the search algorithm determines a solution xk+1 with a lower objective function value than xk, if one exists. When the algorithm fails to find an improving solution among the neighbors of the current solution, it terminates; this solution is called a locally optimal solution. Multiple runs are performed using different starting points, and finally the best locally optimal solution is selected. Ahuja et al. (2002) summarize the neighborhood search algorithm, for a minimization application, as follows:

Step 1: Obtain an initial feasible solution x1 to the problem.

Step 2: Initialize the number of iterations k := 1. Repeat Step 3 while there is a neighbor x ∈ N(xk) with c(x) < c(xk) (a lower objective function value).

Step 3: Set k := k + 1 and xk := x.

Step 4: Return the locally optimal solution xk.
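These steps amount to a steepest-descent template; a minimal sketch in which the caller supplies the neighborhood function N and the cost function c (all names are illustrative):

```python
def local_search(x1, neighbors, c):
    """Steepest-descent neighborhood search (minimization template).

    x1: initial feasible solution; neighbors(x): the set N(x);
    c(x): objective function value of solution x."""
    x = x1
    while True:
        improving = [y for y in neighbors(x) if c(y) < c(x)]
        if not improving:
            return x                   # Step 4: locally optimal solution
        x = min(improving, key=c)      # Steps 2-3: move to the best neighbor
```

For example, minimizing c(x) = x² with neighbors x − 1 and x + 1 from x1 = 5 descends to the local (here also global) minimum 0.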

A local improvement algorithm is a heuristic algorithm that starts with a feasible solution and tries to obtain a better solution through neighborhood search iterations (Ahuja et al. 2002). Neighborhood search algorithms, also known as local search (LS) or steepest descent algorithms, are a wide class of improvement algorithms that find an improving solution by searching the neighborhood of the current solution each iteration. The structure of the neighborhood is a critical issue. The size of the neighborhood governs the quality of the local optima: the larger the neighborhood, the better the quality of the local solution and, generally, the better the final solution obtained. As the size of the neighborhood increases, however, so does the time taken for the search, because the many restarts of a neighborhood search algorithm from different starting points each take longer per iteration, leading to fewer runs per unit time. Hence, a large neighborhood can produce a more effective heuristic if one can search the large neighborhood in a very efficient manner.

An important limitation of the local search (LS) algorithm is that it terminates on encountering a local optimum, which can be a solution of globally poor quality. Strategies are required to prevent the search from getting trapped in these local minima and to escape from them (Roli 2005). Changing the neighborhood structure during the search process is one strategy for escaping from local optima regions. Such a change involves a set of neighborhood structures, which gives the possibility to diversify a search into new search regions. TS and Variable Neighborhood Search methods explicitly deal with such dynamic neighborhoods with the goal of efficiently and effectively searching the entire search space using systematic processes that intensify and diversify the search while escaping from regions of local optimality.

7.3 First-Level Tabu Search (FLTS) for the MMKP

7.3.1 FLTS Implementation

The FLTS proposed for solving the MMKP uses a “tabu” structure of the most recent iteration, known as recency-based memory. The terminology used in the search implementation of this FLTS is discussed below:

(i) The starting solution involves picking the smallest profit item from each class. The selected items from each class and the total profit are considered attributes in the search process.

(ii) A move is a complement of a binary variable and denotes either a selection or a removal of an item in a class. If the change is from 0 to 1, an item is selected; if the change is from 1 to 0, an item is deselected.

(iii) A neighborhood of a solution is the set of all possible moves that can be made from that solution. The size of the neighborhood is m(n − 1) for the FLTS implemented.

(iv) The move evaluation depends on the feasibility of the solutions among all possible moves in a single neighborhood and the best move is always chosen. The entire neighborhood is searched before a move is made.

(v) The number of iterations is used as the stopping criterion. A total of 100 iterations were used for smaller problems and 500 iterations for larger problems (those having more than 200 classes).

(vi) FLTS uses information from the previous solution visited; recency-based memory is used in this case. The variable changed in the previous iteration is not considered for the next iteration.

(vii) The tabu list records the attributes of the previously visited solutions to avoid revisiting solutions and getting trapped in a local optimum; it maintains a record of all the moves made.

The FLTS is implemented using the following steps:

Step 1: Initial Solution - The algorithm starts by picking the smallest profit item from each class. This initial solution is either feasible or infeasible. The attributes of this solution are recorded. The number of iterations is set based on the problem size.

Step 2: Find a Neighborhood - The initial solution is used as a starting point of the search process. The iteration count is initialized to 1 at the beginning of this step. The neighborhood of the solution is evaluated considering all moves; only feasible moves are considered. Among all the feasible moves generated, the best move is selected and becomes the new solution used for generating the next neighborhood. If all the moves are infeasible, go to Step 5.

Step 3: Update Solution - Update the current solution with the best move solution. The tabu list is updated. Items selected in this move are not considered in the next neighborhood computation; these items have a tabu tenure of 1. Repeat Step 2 until the stopping criterion is met.

Step 4: Compute Solution - Traverse the tabu list and search for the best solution. This is the returned solution for the MMKP. Stop the algorithm.

Step 5: Random Solution - Generate a random solution by randomly selecting an item from every class, and restart the search procedure. Go to Step 2.
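Steps 1 through 5 can be sketched as follows. This is a minimal sketch, not the dissertation's implementation, assuming a solution is a list of selected item indices (one per class) with illustrative names profit[c][i] for item profits, weights[c][i][k] for resource use, and cap[k] for knapsack capacities:

```python
import random

def flts(profit, weights, cap, iters):
    """First-Level Tabu Search sketch of Steps 1-5 (names illustrative)."""
    def feasible(sel):
        return all(sum(weights[c][i][k] for c, i in enumerate(sel)) <= cap[k]
                   for k in range(len(cap)))

    def value(sel):
        return sum(profit[c][i] for c, i in enumerate(sel))

    # Step 1: start from the smallest-profit item of each class.
    sel = [min(range(len(p)), key=p.__getitem__) for p in profit]
    visited = []          # tabu list: record of the moves made
    last_changed = None   # recency-based memory, tabu tenure of 1
    for _ in range(iters):
        # Step 2: evaluate the whole neighborhood; keep feasible moves only.
        moves = []
        for c in range(len(profit)):
            if c == last_changed:      # skip the class changed last move
                continue
            for i in range(len(profit[c])):
                if i != sel[c]:
                    cand = sel[:]
                    cand[c] = i
                    if feasible(cand):
                        moves.append((value(cand), cand, c))
        if not moves:
            # Step 5: random restart when every move is infeasible.
            sel = [random.randrange(len(p)) for p in profit]
            last_changed = None
            continue
        # Step 3: take the best move and record it on the tabu list.
        v, sel, last_changed = max(moves)
        visited.append((v, sel[:]))
    # Step 4: traverse the tabu list for the best solution found.
    return max(visited)[1] if visited else None
```

Note that, unlike the steepest-descent template, the best available move is taken even when it worsens the objective; the tabu tenure keeps the search from immediately undoing it, and the best solution is recovered from the tabu list at the end.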

7.3.2 Empirical Tests for the FLTS Implementation

Empirical tests for the FLTS implementation involved using Khan’s test problem instances, the additional 24 instances generated using Khan’s approach, and the new MMKP test sets. The computational results and a comparison with the legacy heuristics are summarized below.

Computational Results using Khan’s Test Problem Instances

The FLTS was tested using Khan’s 13 available test problem instances. The problems I01 to I06 are smaller sized problems, while I07 to I13 are the larger problems. The number of iterations was set to 100 for I01 to I09 and 500 for I10 to I13. Table 7.1 compares the solution quality of the FLTS with the legacy heuristics. The results indicate that the computational time for the FLTS is higher, since the number of iterations grows with the problem size. Table 7.2 provides a summary of the competitive performance of these heuristics; the results indicate that FLTS outperforms the legacy heuristics. The percentage relative error for each of these heuristics is indicated in Table 7.3. The average percentage relative error for the FLTS is the lowest, and the approach always returned a feasible solution.

The FLTS and the legacy heuristics were also compared using the additional 24 test problem instances. The number of iterations was set to 100. The results are tabulated in Tables 7.4 and 7.5.

The results are similar to the results obtained on Khan’s 13 available test problem instances. The results in Table 7.4 indicate that FLTS and CPCCP find solutions for all the test problem instances, unlike MOSER and HEU. CPCCP is the fastest amongst all the heuristics. The computational time for FLTS is competitive with the times of the legacy heuristics. The Table 7.5 results are not conclusive in terms of a best heuristic, although FLTS has a slight advantage and does always return a feasible solution.

Computational Results using New MMKP Test Sets

The FLTS and the legacy heuristics were tested and compared using the new test problem sets. The number of iterations was set to 50 for the FLTS. There were 9 files with 30 problems in each file, so these heuristics were tested on a total of 270 test problem instances. The results are tabulated in Tables 7.6 and 7.7.

Figures 7.1 and 7.2 graph the comparison of heuristic performance using the number of problems solved to optimal and the number of times yielding the best solution on the new MMKP test sets, respectively. The FLTS does the best job of finding optimal solutions. As the problem size increases, the number of problems solved to optimal decreases. MOSER is quite competitive with the FLTS, although not in terms of yielding a best solution.

The overall average relative errors are similar for MOSER, CPCCP, and FLTS, but the FLTS displays more consistent performance. In general, for the set of problems with a fixed number of knapsacks and a fixed number of items, the average relative percentage error decreases as the number of classes increases. For the set of problems with a fixed number of classes and a fixed number of items, the average relative percentage error increases as the number of knapsacks increases.

Table 7.8 records the computational time in milliseconds for each of the heuristics. The CPCCP takes the least computational time. As the problem size increases, so does the average computational time. The computational time for the FLTS is higher in comparison to the legacy heuristics. This is due to the higher number of iterations allowed, but FLTS remains competitive. Overall, the FLTS is

Table 7.1: Solution Quality for FLTS on Khan’s 13 Available Test Problems

Problem File Exact MOSER Time HEU Time CPCCP Time FLTS Time
I01 173 - - 154 62 159 47 158 31
I02 364 294 125 354 109 312 78 351 47
I03 1602 1127 297 1518 453 1407 172 1445 172
I04 3597 2906 625 3297 562 3322 234 3350 250
I05 3905.7 1068.3 1062 3894.5 1265 3889.9 234 3905.7 609
I06 4799.3 1999.5 1657 4788.2 1219 4723.1 360 4793 860
I07 24587* 20833 20594 - - 23237 1516 23547 10219
I08 36877* 31643 59907 34338 8453 35403 3266 35487 33781
I09 49167* - - - - 47154 5562 47107 71422
I10 61437* - - - - 58990 7547 59152 147312
I11 73773* - - - - 70685 9359 70868 246984
I12 86071* - - - - 82754 12141 82716 558859
I13 98429* - - - - 94465 22031 91551 1624031
- Solution not found
* LP Bound
Time in ms

Table 7.2: Performance Summary of the FLTS with Legacy Heuristics on Khan’s 13 Available Test Problems

Heuristic Times Best Times Optimal | Number of times better than: MOSER HEU CPCCP FLTS
MOSER 0 0 | - 1 0 0
HEU 2 0 | 7 - 4 2
CPCCP 4 0 | 13 9 - 4
FLTS 7 1 | 13 11 9 -
13 total problems

Table 7.3: Percentage Relative Error by FLTS and each Legacy Heuristic for the Khan’s 13 Available Test Problems

Problem File MOSER HEU CPCCP FLTS
I01 - 10.98 8.09 8.67
I02 19.23 2.75 14.29 3.57
I03 29.65 5.24 12.17 9.80
I04 19.21 8.34 7.65 6.87
I05 72.65 0.29 0.40 0.00
I06 58.34 0.23 1.59 0.13
I07 15.27 - 5.49 4.38
I08 14.19 6.89 4.00 3.77
I09 - - 4.09 4.19
I10 - - 3.98 3.72
I11 - - 4.19 3.94
I12 - - 3.85 3.90
I13 - - 4.03 6.99
Total Average 32.65 4.96 5.68 4.61
- Relative Error cannot be calculated since no solution found

Table 7.4: Solution Quality for FLTS on 24 Additional Test Problems

Prob No  Exact  MOSER  Time  HEU    Time  CPCCP  Time  FLTS   Time
1        1832   1832   203   -      -     1832   16    1698   109
2        2147   -      -     2027   219   2147   16    2027   109
3        2020   1877   203   -      -     1863   16    1919   125
4        3993   3913   103   -      -     3932   47    3908   157
5        4326   4282   235   4241   579   4282   62    3916   203
6        4259   4256   313   4044   704   4259   515   4259   265
7        6335   6275   750   6275   390   6275   46    6157   359
8        6606   6295   671   6420   797   6250   468   6606   688
9        6662   6536   703   6437   759   6495   125   6049   691
10       8360   8034   360   8267   766   8253   110   8274   500
11       8799   8625   531   8774   781   8673   109   8799   875
12       8748   8467   391   8642   969   8475   172   8748   1297
13       9746   9745   688   9655   875   9723   250   9717   828
14       10781  10647  406   10731  984   10276  219   10550  734
15       11011  9939   687   10868  891   10787  266   10318  937
16       12509  12509  62    12403  812   12509  578   12509  1313
17       12963  12756  516   12883  1047  12700  609   12921  1531
18       13310  13074  1547  13278  766   12971  640   13143  1859
19       14508  14431  890   -      -     14298  234   14450  734
20       15438  -      -     15427  1156  15051  672   15172  1453
21       15539  15439  1484  15513  1437  15454  781   15448  2281
22       16636  16476  750   16556  641   16439  234   16621  1047
23       17353  17228  1406  17276  875   16793  797   17009  2266
24       18010  -      -     17907  1609  17737  719   17406  2937
- Solution not found    Time in ms

Table 7.5: Performance Summary of the FLTS with Legacy Heuristics on 24 Additional Test Problems

                                  Number of times better than
Heuristic  Times Best  Times Optimal  MOSER  HEU  CPCCP  FLTS
MOSER      2           2              -      9    10     8
HEU        7           0              14     -    13     10
CPCCP      2           4              15     10   -      10
FLTS       8           5              15     13   12     -
24 total problems

Table 7.6: Performance Summary of the FLTS with Legacy Heuristics on New MMKP Test Sets

              No. problems solved to optimal   No. of times best
Problem File  MOSER  HEU  CPCCP  FLTS          MOSER  HEU  CPCCP  FLTS
MMKP01        12     2    11     16            6      2    4      5
MMKP02        7      2    8      11            12     4    6      5
MMKP03        10     3    8      11            2      5    2      10
MMKP04        5      0    4      5             7      0    7      11
MMKP05        5      0    3      7             5      1    2      15
MMKP06        5      0    3      6             5      0    5      14
MMKP07        1      0    2      4             6      0    7      14
MMKP08        1      0    1      1             7      0    6      16
MMKP09        0      0    0      0             12     0    4      14
30 problems in each file

Table 7.7: Percentage Relative Error by FLTS and each Legacy Heuristic for the New MMKP Test Sets

Problem File   MOSER     HEU*      CPCCP  FLTS
MMKP01         3.61      4.59^25   3.49   3.32
MMKP02         2.97      4.31^20   3.73   3.48
MMKP03         2.71      2.58^21   3.30   2.78
MMKP04         13.41     30.28^29  8.77   6.65
MMKP05         4.42      6.33^28   6.20   3.21
MMKP06         3.84      10.44^29  4.94   3.14
MMKP07         10.18^12  -         11.98  8.15
MMKP08         11.68     -         10.31  6.91
MMKP09         6.91      10.16^29  8.57   8.06
Total Average  6.64      9.84      6.81   5.08
- Relative Error cannot be calculated since no solution found
* Heuristic fails to find solutions for the superscripted number of problems

Figure 7.1: Comparison of FLTS based on Number of Problems Solved to Optimal on New MMKP Test Sets

the preferred heuristic particularly as the problems get larger.

7.3.3 Extensions of the FLTS for the MMKP

The FLTS is a basic TS approach. This TS concept can be extended by adding additional short-term memory structures. One approach is to force the basic TS to intensify and diversify the search into other regions of the search space. The Sequential Fan Candidate List and the CPCCP with Fan Candidate List are the two extensions considered in this research. These approaches are discussed in the following sections.

7.4 Sequential Fan Candidate List (FanTabu) for the MMKP

The Sequential Fan Candidate List (FanTabu) is built on the Elite Candidate List approach (Glover and Laguna 1997), which creates a Master List by examining all the moves and selecting some number of the best moves encountered. The Sequential Fan Candidate List lends itself naturally to parallel processing, but parallel algorithms are not considered in the current work. The basic idea

Table 7.8: Computational Time in milliseconds by FLTS and each Legacy Heuristic on the New MMKP Test Sets

Problem File   MOSER   HEU     CPCCP  FLTS
MMKP01         11.97   9.60    5.20   36.97
MMKP02         38.57   29.60   7.33   89.63
MMKP03         230.67  123.00  15.10  384.87
MMKP04         17.89   15.00   7.83   39.53
MMKP05         45.87   39.50   7.73   94.77
MMKP06         332.90  109.00  17.73  388.60
MMKP07         27.31   -       7.83   334.97
MMKP08         77.00   -       8.87   740.87
MMKP09         575.53  140.00  24.53  24.53
Total Average  150.86  66.53   11.35  237.19
- Computational Time cannot be calculated since no solution found

Figure 7.2: Comparison of FLTS based on Number of Times Equal to Best on New MMKP Test Sets

is to generate the p best alternative moves at a given step and then create a fan of solution streams, one stream for each alternative move. The best available moves for each stream are examined, and only the p best overall provide the p new streams at the next iteration. In tree search methods, such a sequential fanning process is called a beam search (Glover and Laguna 1997). Conceptually, the FanTabu approach used in this research is depicted in Figure 7.3. The algorithm starts with an initial solution and generates a best alternative move using the FLTS. This solution becomes the current solution, which in turn is used for computing the next solution. The FLTS is used to compute p best solutions, and these solutions form a Master List. Each of these p initial solutions from the Master List is used to create a fan of solution streams, one stream for each alternative. The solution obtained in the previous iteration becomes the current solution used to compute a neighborhood for the next iteration. The criterion for selecting the best move can vary across iterations. This procedure is repeated for a fixed number of iterations, after which all p streams are traversed to obtain the best solution.
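The fanning idea above can be sketched generically. This is a minimal illustration, not the dissertation's implementation: `expand` is a hypothetical problem-specific function that yields `(value, successor)` candidates for one stream, and the p best candidates across all streams survive each iteration.

```python
def fan_search(initial_value, initial, expand, iterations=50, p=5):
    """Sequential fan candidate list sketch (all names illustrative).

    `expand(sol)` returns candidate (value, successor) pairs for one
    stream; the p best successors across every stream become the p
    streams of the next iteration, and the best value seen is kept."""
    streams = [(initial_value, initial)]
    best = (initial_value, initial)
    for _ in range(iterations):
        # Pool candidate moves from every active stream.
        candidates = [c for _, sol in streams for c in expand(sol)]
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        streams = candidates[:p]          # keep the p best overall
        if streams[0][0] > best[0]:
            best = streams[0]
    return best
```

On a toy maximization where each "solution" is a number and expansion adds 1 or 2, three iterations with p = 2 reach 6, i.e. the greedy stream survives the pruning at each step.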

Figure 7.3: Sequential Fan Candidate List (Glover and Laguna 1997)

7.4.1 FanTabu Implementation

The FanTabu approach for the MMKP is implemented in two phases. The first phase of the algorithm is the FLTS, which is run for five iterations to obtain five solutions. These solutions comprise the Master List for the FanTabu approach; Steps 1 through 5 comprise the FLTS. The second phase of this algorithm is implemented in the following steps.

Step 6: Master List - Declare five arrays ML1 through ML5. Populate each of these arrays with the solutions from the first phase. The number of iterations for the second phase is set depending on the problem size. Start with ML1.

Step 7: Find a Neighborhood - The iteration count is initialized to 1 at the beginning of this step. Consider the Master List. The neighborhood of the solution is examined and only feasible moves are considered. If any of the generated moves is feasible, go to Step 8. If all the moves are infeasible, go to Step 5.

Step 8: Move Selection - The best move is selected in one of the three ways discussed below (these are selected randomly), and this solution becomes the new solution used for generating the next neighborhood in the stream.

• Select the (class,item) combination from the neighborhood that yields the least objective function value.

• Select the (class,item) combination from the neighborhood that yields the highest objective function value.

• Generate a random interval of numbers and select the (class,item) combination from the neighborhood that has an objective function value in this interval.

Populate the Master List array with the best move.
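The three randomized selection rules can be sketched as follows. This is an illustrative sketch, not the dissertation's code: `neighborhood` is assumed to hold feasible `(objective_value, move)` pairs, and the interval rule's sampling scheme is one plausible reading of "generate a random interval of numbers."

```python
import random

def select_move(neighborhood):
    """Pick the next move by one of three randomly chosen criteria,
    mirroring Step 8 (names and interval sampling are assumptions)."""
    rule = random.choice(("least", "highest", "interval"))
    if rule == "least":
        return min(neighborhood)      # least objective value: diversify
    if rule == "highest":
        return max(neighborhood)      # highest objective value: intensify
    # Random interval: bound the objective values considered.
    lo = min(v for v, _ in neighborhood)
    hi = max(v for v, _ in neighborhood)
    a = random.uniform(lo, hi)
    b = random.uniform(a, hi)
    in_band = [m for m in neighborhood if a <= m[0] <= b]
    return max(in_band) if in_band else max(neighborhood)
```

Note that deliberately allowing the least-improving move is what pushes the stream into regions a purely greedy TS would skip.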

Step 9: Update Solution - Update the current solution with the best-move solution. The tabu list is updated with the feasible best move at the end of each iteration. The items selected in this move are not considered in the next neighborhood computation; these items have a tabu tenure of 1. Repeat Step 8 until the stopping criterion, a fixed number of iterations, is met. If all the moves are infeasible, go to Step 5.

Step 10: Compute Master Lists - Repeat Steps 7 and 8 with ML2 through ML5 as the Master List.

Step 11: Compute Solution - Traverse the solutions in the arrays ML1 through ML5 for the solution with the highest objective function value. This is the best solution found for the MMKP. Stop the algorithm.

This algorithm forces the TS to intensify and diversify the search process. The move selection step sometimes makes the TS select the solution with the least improvement in the objective value, which diversifies the search into a region that would not be considered in the basic TS. The selection of the move based on the solution interval offers particular benefits as part of the intensification strategy, since the interval suggests limits bounding the changes considered. In case the algorithm yields all infeasible moves in the neighborhood, a random item is selected from every class as the current solution. This random solution can be either feasible or infeasible; this step creates a strategic oscillation between the feasible and the infeasible regions. The solutions obtained in each of the stream trajectories are saved in the Master List arrays so that they can be used for target analysis of the search space traversal. Thus, the FanTabu approach extends the basic TS to explore and exploit the search space while avoiding entrapment in local optima.
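The all-infeasible restart described above amounts to re-seeding the stream with one random item per class. A minimal sketch, assuming a solution is represented as a class-to-item mapping (representation is an assumption, not stated in the text):

```python
import random

def random_restart(classes):
    """Strategic-oscillation restart sketch: when every move in the
    neighborhood is infeasible, pick one random item per class as the
    new current solution, which may itself be infeasible.
    `classes` maps each class to its list of items (illustrative)."""
    return {c: random.choice(items) for c, items in classes.items()}
```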

7.4.2 Empirical Tests for the FanTabu Implementation

Empirical tests for the FanTabu Implementation involved Khan’s available test problems, the 24 additional instances and the new MMKP Test Sets. The computational results compared with the legacy heuristics are summarized below.

Computational Results using Khan’s Test Problem Instances

FanTabu was tested using Khan’s 13 available test problem instances. The number of iterations was set to 100 for I01 to I09, and 500 for I10 to I13.

Table 7.9 compares the solution quality of the FanTabu to the legacy heuristics. The results indicate that the computational time for the FanTabu is the highest, since the number of iterations used grows with the problem size. Table 7.10 provides a summary of the performance of these heuristics. The results indicate that FanTabu performs better than the other heuristics considered.

The percentage relative error for each of these heuristics is indicated in Table 7.11. The percentage relative error for the FanTabu compares well with CPCCP and FLTS and is lowest overall.
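The percentage relative error reported throughout these tables appears to follow the usual definition for a maximization problem; the formula itself is an assumption, since the text does not restate it here.

```python
def pct_relative_error(exact, heuristic):
    """Percentage relative error for a maximization problem, assuming
    the standard definition 100 * (exact - heuristic) / exact, where
    `exact` is the optimal (or LP-bound) value."""
    return 100.0 * (exact - heuristic) / exact
```

As a sanity check, the FLTS entry for I02 (exact 364, FLTS 351) evaluates to 3.57, matching Table 7.3.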

FanTabu and the legacy heuristics were also compared using the additional 24 test problem instances generated via Khan’s approach. The number of iterations for FanTabu was set to 100.

The results are tabulated in Tables 7.12 and 7.13 and are similar to the results obtained on Khan's 13 available test problem instances. The results in Table 7.12 indicate that FanTabu, FLTS, and CPCCP find solutions for all the test problem instances, unlike MOSER and HEU. CPCCP is the fastest amongst all the heuristics, while FanTabu requires the most time.

Table 7.13 shows that FanTabu tends to be the best performer, rarely being outperformed and finding a good percentage of the optimal solutions.

Computational Results using New MMKP Test Sets

The FanTabu and the legacy heuristics were tested and compared using the new test problem sets. The number of iterations was set to 50 for the FanTabu. There were nine files of 30 problems each, so the heuristics were tested on a total of 270 test problem instances. The results are tabulated in Tables 7.14 and 7.15.

Figures 7.4 and 7.5 graphically compare heuristic performance based on the number of problems solved to optimal and the number of times equal to best on the new MMKP test sets, respectively. The results indicate that the FanTabu solves a large number of problems to optimality and is clearly the best performer. As the problem size increases, the number of problems solved to optimal by all the heuristics decreases, but the number of times FanTabu is best remains significantly higher, which makes the FanTabu the preferred approach on such problems.

Table 7.15 summarizes the average relative error achieved by each heuristic on each of the nine problem set files. Unlike in the previous tests, FanTabu clearly outperforms the legacy MOSER and CPCCP approaches as well as the basic TS approach, FLTS. More importantly, the FanTabu relative error increases at a slower rate with problem size than do the relative errors of the other solution approaches considered. Table 7.16 records the computational time in milliseconds for each of the heuristics. The computational time for the FanTabu is the highest due to the higher number of iterations and the parallel examination of each fan.

Table 7.9: Solution Quality for FanTabu on Khan's 13 Available Test Problems

Problem File  Exact   MOSER   Time   HEU     Time  CPCCP   Time   FLTS    Time     FanTabu  Time
I01           173     -       -      154     62    159     47     158     31       169      235
I02           364     294     125    354     109   312     78     351     47       354      235
I03           1602    1127    297    1518    453   1407    172    1445    172      1557     1703
I04           3597    2906    625    3297    562   3322    234    3350    250      3473     2796
I05           3905.7  1068.3  1062   3894.5  1265  3889.9  234    3905.7  609      3905.7   6829
I06           4799.3  1999.5  1657   4788.2  1219  4723.1  360    4793    860      4799.3   10828
I07           24587*  20833   20594  -       -     23237   1516   23547   10219    23691    70016
I08           36877*  31643   59907  34338   8453  35403   3266   35487   33781    35684    172765
I09           49167*  -       -      -       -     47154   5562   47107   71422    47202    328656
I10           61437*  -       -      -       -     58990   7547   59152   147312   58996    774985
I11           73773*  -       -      -       -     70685   9359   70868   246984   70813    2261328
I12           86071*  -       -      -       -     82754   12141  82716   558859   82684    3356234
I13           98429*  -       -      -       -     94465   22031  91551   1624031  94358    7040016
- Solution not found    * LP Bound    Time in ms

Table 7.10: Performance Summary of the FanTabu and Other Heuristics on Khan's 13 Available Test Problems

                                  Number of times better than
Heuristic  Times Best  Times Optimal  MOSER  HEU  CPCCP  FLTS  FanTabu
MOSER      0           0              -      1    0      0     7
HEU        0           0              7      -    4      2     0
CPCCP      2           0              13     9    -      4     2
FLTS       2           1              13     11   9      -
FanTabu    7           2              13     12   11     9     -
13 total problems

Table 7.11: Percentage Relative Error by FanTabu and Other Heuristics for Khan's 13 Available Test Problems

Problem File   MOSER  HEU    CPCCP  FLTS  FanTabu
I01            -      10.98  8.09   8.67  2.31
I02            19.23  2.75   14.29  3.57  2.75
I03            29.65  5.24   12.17  9.80  2.81
I04            19.21  8.34   7.65   6.87  3.45
I05            72.65  0.29   0.40   0.00  0.00
I06            58.34  0.23   1.59   0.13  0.00
I07            15.27  -      5.49   4.38  3.64
I08            14.19  6.89   4.00   3.77  3.24
I09            -      -      4.09   4.19  4.00
I10            -      -      3.98   3.72  3.97
I11            -      -      4.19   3.94  4.01
I12            -      -      3.85   3.90  3.94
I13            -      -      4.03   6.99  4.14
Total Average  32.65  4.96   5.68   4.61  2.94
- Relative Error cannot be calculated since no solution found

Table 7.12: Solution Quality for FanTabu and Other Heuristics on 24 Additional Test Problems

Prob No  Exact  MOSER  Time  HEU    Time  CPCCP  Time  FLTS   Time  FanTabu  Time
1        1832   1832   203   -      -     1832   16    1698   109   1832     188
2        2147   -      -     2027   219   2147   16    2027   109   2027     259
3        2020   1877   203   -      -     1863   16    1919   125   2020     402
4        3993   3913   103   -      -     3932   47    3908   157   3993     500
5        4326   4282   235   4241   579   4282   62    3916   203   4326     412
6        4259   4256   313   4044   704   4259   515   4259   265   4259     1156
7        6335   6275   750   6275   390   6275   46    6157   359   6335     1109
8        6606   6295   671   6420   797   6250   468   6606   688   6606     1687
9        6662   6536   703   6437   759   6495   125   6049   691   6662     2406
10       8360   8034   360   8267   766   8253   110   8274   500   8360     1407
11       8799   8625   531   8774   781   8673   109   8799   875   8799     2532
12       8748   8467   391   8642   969   8475   172   8748   1297  8746     2840
13       9746   9745   688   9655   875   9723   250   9717   828   9746     1750
14       10781  10647  406   10731  984   10276  219   10550  734   10774    2656
15       11011  9939   687   10868  891   10787  266   10318  937   10980    4062
16       12509  12509  62    12403  812   12509  578   12509  1313  12509    2750
17       12963  12756  516   12883  1047  12700  609   12921  1531  12958    4297
18       13310  13074  1547  13278  766   12971  640   13143  1859  13290    6609
19       14508  14431  890   -      -     14298  234   14450  734   14508    4485
20       15438  -      -     15427  1156  15051  672   15172  1453  15435    7125
21       15539  15439  1484  15513  1437  15454  781   15448  2281  15539    11734
22       16636  16476  750   16556  641   16439  234   16621  1047  16633    4532
23       17353  17228  1406  17276  875   16793  797   17009  2266  17348    7860
24       18010  -      -     17907  1609  17737  719   17406  2937  17749    11422
- Solution not found    Time in ms

Table 7.13: Performance Summary of the FanTabu with Other Heuristics on 24 Additional Test Problems

                                  Number of times better than
Heuristic  Times Best  Times Optimal  MOSER  HEU  CPCCP  FLTS  FanTabu
MOSER      0           2              -      9    10     8     0
HEU        1           0              14     -    13     10    1
CPCCP      1           4              15     10   -      10    1
FLTS       1           5              15     13   12     -     1
FanTabu    16          14             22     22   20     18    -
24 total problems

Table 7.14: Performance Summary of the FanTabu with Other Heuristics on New MMKP Test Sets

              No. problems solved to optimal          No. of times best
Problem File  MOSER  HEU  CPCCP  FLTS  FanTabu       MOSER  HEU  CPCCP  FLTS  FanTabu
MMKP01        12     2    11     16    25            1      1    0      0     9
MMKP02        7      2    8      11    20            1      0    0      0     14
MMKP03        10     3    8      11    14            0      2    0      3     15
MMKP04        5      0    4      5     24            0      0    0      0     19
MMKP05        5      0    3      7     15            1      0    0      1     20
MMKP06        5      0    3      6     6             0      0    2      5     18
MMKP07        1      0    2      4     15            0      0    3      1     18
MMKP08        1      0    1      1     8             0      0    0      1     27
MMKP09        0      0    0      0     2             1      0    0      2     27
30 problems in each file

Table 7.15: Percentage Relative Error by FanTabu and Other Heuristics for the New MMKP Test Sets

Problem File   MOSER     HEU*      CPCCP  FLTS  FanTabu
MMKP01         3.61      4.59^25   3.49   3.32  0.30
MMKP02         2.97      4.31^20   3.73   3.48  0.45
MMKP03         2.71      2.58^21   3.30   2.78  0.93
MMKP04         13.41     30.28^29  8.77   6.65  0.69
MMKP05         4.42      6.33^28   6.20   3.21  0.76
MMKP06         3.84      10.44^29  4.94   3.14  1.86
MMKP07         10.18^12  -         11.98  8.15  2.23
MMKP08         11.68     -         10.31  6.91  1.18
MMKP09         6.91      10.16^29  8.57   8.06  2.76
Total Average  6.64      9.84      6.81   5.08  1.24
- Relative Error cannot be calculated since no solution found
* Heuristic fails to find solutions for the superscripted number of problems

Figure 7.4: Comparison of FanTabu based on Number of Problems Solved to Optimal on New MMKP Test Sets

Figure 7.5: Comparison of FanTabu based on Number of Times Equal to Best on New MMKP Test Sets

Table 7.16: Computational Time in milliseconds by FanTabu and Other Heuristics on the New MMKP Test Sets

Problem File   MOSER   HEU     CPCCP  FLTS    FanTabu
MMKP01         11.97   9.60    5.20   36.97   378.77
MMKP02         38.57   29.60   7.33   89.63   1004.67
MMKP03         230.67  123.00  15.10  384.87  4296.83
MMKP04         17.89   15.00   7.83   39.53   418.17
MMKP05         45.87   39.50   7.73   94.77   1074.53
MMKP06         332.90  109.00  17.73  388.60  3732.30
MMKP07         27.31   -       7.83   334.97  545.30
MMKP08         77.00   -       8.87   740.87  1311.97
MMKP09         575.53  140.00  24.53  24.53   5389.60
Total Average  150.86  66.53   11.35  237.19  2016.90
- Computational Time cannot be calculated since no solution found

7.5 CPCCP with Fan Candidate List (CCFT) for the MMKP

7.5.1 CCFT Implementation

The empirical results indicate that the performance of the CPCCP algorithm degrades as problems get larger. The CCFT approach extends CPCCP by adding the Fan Candidate List. The CCFT is implemented in two phases. The first phase finds an initial solution using the CPCCP procedure; the second phase is the FanTabu using this CPCCP-generated initial solution.
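The two-phase composition can be sketched as a simple pipeline. The function names are placeholders standing in for the CPCCP and FanTabu procedures described in the text, not the dissertation's actual interfaces:

```python
def ccft(problem, cpccp, fan_tabu, iterations=50):
    """CCFT sketch: phase 1 constructs an initial solution with the
    CPCCP procedure; phase 2 refines it with the FanTabu search.
    `cpccp` and `fan_tabu` are illustrative callables."""
    initial = cpccp(problem)                 # phase 1: constructive
    return fan_tabu(problem, initial, iterations)  # phase 2: fan search
```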

7.5.2 Empirical Tests for the CCFT Candidate List Implementation

The CCFT implementation was tested on Khan's 13 available test problem instances, the 24 additional test problems generated via Khan's approach, and the new MMKP test sets. The computational results and the comparison with the legacy heuristics are summarized below.

Computational Results on Khan’s Test Problem Instances

The CCFT was tested using Khan's 13 available test problem instances. The number of iterations was set to 100 for all the test problem instances. The solution quality and the computational time of CCFT and each of the legacy heuristics are summarized in Table 7.17. The results show that the CCFT has higher solution quality than the other heuristics and definitely improves on the CPCCP approach. The computational time for the CCFT is lower than that of the FanTabu.

Table 7.18 summarizes the performance of the solution approaches. CCFT reaches the optimal solution more often and provides better performance than the other heuristics. The average relative percentage error for the CCFT in Table 7.19 is comparable to FanTabu, although generally better than FanTabu on the larger problems (I07-I13).

The CCFT was also compared to the legacy heuristics on the additional 24 test problem instances. The number of iterations was set to 100. The results are tabulated in Tables 7.20

Table 7.17: Solution Quality for CCFT on Khan's 13 Available Test Problems

Problem File  Exact   MOSER   Time   HEU     Time  CPCCP   Time   FLTS    Time    FanTabu  Time     CCFT    Time
I01           173     -       -      154     62    159     47     158     109     169      235      173     312
I02           364     294     125    354     109   312     78     351     125     354      235      352     563
I03           1602    1127    297    1518    453   1407    172    1445    562     1557     1703     1518    3094
I04           3597    2906    625    3297    562   3322    234    3350    735     3473     2796     3419    3047
I05           3905.7  1068.3  1062   3894.5  1265  3889.9  234    3905.7  1328    3905.7   6829     3905.7  6281
I06           4799.3  1999.5  1657   4788.2  1219  4723.1  360    4793.2  1859    4799.3   10828    4799.3  10485
I07           24587*  20833   20594  -       -     23237   1516   23547   12875   23691    70016    23739   34594
I08           36877*  31643   59907  34338   8453  35403   3266   35487   37891   35684    172765   35698   78218
I09           49167*  -       -      -       -     47154   5562   47107   81547   47202    328656   47491   130313
I10           61437*  -       -      -       -     58990   7547   59108   127234  58964    570313   59549   220937
I11           73773*  -       -      -       -     70685   9359   70549   185266  70555    896969   71651   319813
I12           86071*  -       -      -       -     82754   12141  82114   273750  81833    1252360  83358   437937
I13           98429*  -       -      -       -     94465   22031  91551   408969  94168    1715438  94874   623984
- Solution not found    * LP Bound    Time in ms

Table 7.18: Performance Summary of the CCFT with Other Heuristics on Khan's 13 Available Test Problems

                                  Number of times better than
Heuristic  Times Best  Times Optimal  MOSER  HEU  CPCCP  FLTS  FanTabu  CCFT
MOSER      0           0              -      1    0      0     0        0
HEU        0           0              7      -    4      2     0        1
CPCCP      0           0              13     9    -      4     4        0
FLTS       1           0              13     11   9      -     3        0
FanTabu    2           2              13     12   9      9     -        3
CCFT       3           8              13     11   13     12    8        -
13 total problems

Table 7.19: Percentage Relative Error by CCFT and Other Heuristics on Khan's 13 Available Test Problems

Problem File   MOSER  HEU    CPCCP  FLTS  FanTabu  CCFT
I01            -      10.98  8.09   8.67  2.31     0.00
I02            19.23  2.75   14.29  3.57  2.75     3.30
I03            29.65  5.24   12.17  9.80  2.81     5.24
I04            19.21  8.34   7.65   6.87  3.45     4.95
I05            72.65  0.29   0.40   0.00  0.00     0.00
I06            58.34  0.23   1.59   0.13  0.00     0.00
I07            15.27  -      5.49   4.23  3.64     3.45
I08            14.19  6.89   4.00   3.77  3.24     3.20
I09            -      -      4.09   4.19  4.00     3.41
I10            -      -      3.98   3.79  4.03     3.07
I11            -      -      4.19   4.37  4.36     2.88
I12            -      -      3.85   4.60  4.92     3.15
I13            -      -      4.03   6.99  4.33     3.61
Total Average  32.65  4.96   5.68   4.69  3.06     2.79
- Relative Error cannot be calculated since no solution found

and 7.21. Both CCFT and FanTabu outperform the other heuristics, and both are comparable in terms of finding the optimal and returning the best solution. In fact, the results in Table 7.20 indicate that even when there were differences, those differences were quite small. The advantage of CCFT is its lower computation time on the majority of problems.

Computational Results using New MMKP Test Sets

The computational results of CCFT on the new MMKP test sets are summarized in Table 7.22. The number of iterations was set to 50 for the CCFT. The results differ from the prior results: while previously CCFT and FanTabu were comparable, such is not the case on these new problems. Figures 7.6 and 7.7 graphically compare the heuristics based on the number of problems solved to optimal and the number of times equal to best on the new MMKP test sets, respectively.

The average percentage relative error for each of the heuristics on the new test sets is tabulated in Table 7.23. Although somewhat comparable, the FanTabu shows more consistent performance than CCFT. The computational times in milliseconds for the heuristics are tabulated in Table 7.24. CCFT retains its computational performance advantage over FanTabu in these experiments. In general, FanTabu and CCFT are comparable on the limited range of problems currently in use, but on a more diverse set of problems, such as the new MMKP set, the FanTabu is the preferred approach.

7.6 Comparison of TS approaches with Reactive Local Search Approach (RLS)

7.6.1 RLS Approach

Hifi et al. (2006) recently developed a reactive local search approach for solving the MMKP, extending their CPCCP approach. Although not a TS, RLS does represent the most aggressive local

Table 7.20: Solution Quality for CCFT on 24 Additional Test Problems

Prob No  Exact  MOSER  Time  HEU    Time  CPCCP  Time  FLTS   Time  FanTabu  Time   CCFT   Time
1        1832   1832   203   -      -     1832   16    1698   109   1832     188    1832   188
2        2147   -      -     2027   219   2147   16    2027   109   2027     259    2147   265
3        2020   1877   203   -      -     1863   16    1919   125   2020     402    2020   313
4        3993   3913   103   -      -     3932   47    3908   157   3993     500    3993   282
5        4326   4282   235   4241   579   4282   62    3916   203   4326     412    4326   547
6        4259   4256   313   4044   704   4259   515   4259   265   4259     1156   4259   719
7        6335   6275   750   6275   390   6275   46    6157   359   6335     1109   6335   719
8        6606   6295   671   6420   797   6250   468   6606   688   6606     1687   6572   1093
9        6662   6536   703   6437   759   6495   125   6049   691   6662     2406   6662   1469
10       8360   8034   360   8267   766   8253   110   8274   500   8360     1407   8360   906
11       8799   8625   531   8774   781   8673   109   8799   875   8799     2532   8799   1593
12       8748   8467   391   8642   969   8475   172   8748   1297  8746     2840   8740   2079
13       9746   9745   688   9655   875   9723   250   9717   828   9746     1750   9746   1453
14       10781  10647  406   10731  984   10276  219   10550  734   10774    2656   10774  2015
15       11011  9939   687   10868  891   10787  266   10318  937   10980    4062   10966  2454
16       12509  12509  62    12403  812   12509  578   12509  1313  12509    2750   12509  1953
17       12963  12756  516   12883  1047  12700  609   12921  1531  12958    4297   12932  2875
18       13310  13074  1547  13278  766   12971  640   13143  1859  13290    6609   13292  4344
19       14508  14431  890   -      -     14298  234   14450  734   14508    4485   14508  2531
20       15438  -      -     15427  1156  15051  672   15172  1453  15435    7125   15431  3766
21       15539  15439  1484  15513  1437  15454  781   15448  2281  15539    11734  15539  6156
22       16636  16476  750   16556  641   16439  234   16621  1047  16633    4532   16636  2328
23       17353  17228  1406  17276  875   16793  797   17009  2266  17348    7860   17341  4125
24       18010  -      -     17907  1609  17737  719   17406  2937  17749    11422  17824  5937
- Solution not found    Time in ms

Table 7.21: Performance Summary of the CCFT with Other Heuristics on 24 Additional Test Problems

                                  Number of times better than
Heuristic  Times Best  Times Optimal  MOSER  HEU  CPCCP  FLTS  FanTabu  CCFT
MOSER      0           2              -      9    10     8     0        0
HEU        1           0              14     -    13     10    1        1
CPCCP      0           4              15     10   -      10    1        0
FLTS       1           5              15     13   12     -     1        2
FanTabu    4           14             22     22   20     18    -        6
CCFT       2           15             22     23   20     19    4        -
24 total problems

Table 7.22: Performance Summary of the CCFT with Other Heuristics on New MMKP Test Sets

              No. problems solved to optimal               No. of times best
Problem File  MOSER  HEU  CPCCP  FLTS  FanTabu  CCFT       MOSER  HEU  CPCCP  FLTS  FanTabu  CCFT
MMKP01        12     2    11     16    29       25         0      0    0      0     3        0
MMKP02        7      2    8      11    24       23         0      0    0      0     5        4
MMKP03        10     3    8      11    16       15         0      0    0      1     8        5
MMKP04        5      0    4      5     24       21         0      0    0      0     8        3
MMKP05        5      0    3      7     18       19         0      0    0      0     6        8
MMKP06        5      0    3      6     11       11         0      0    0      0     14       5
MMKP07        1      0    2      4     21       16         0      0    0      1     12       3
MMKP08        1      0    1      1     8        9          0      0    0      1     16       5
MMKP09        0      0    0      0     3        1          0      0    0      0     18       8
30 problems in each file

Table 7.23: Percentage Relative Error of CCFT and Other Heuristics for the New MMKP Test Sets

Problem File   MOSER     HEU*      CPCCP  FLTS  FanTabu  CCFT
MMKP01         3.61      4.59^29   3.49   3.32  0.02     0.42
MMKP02         2.97      4.31^20   3.73   3.48  0.28     0.29
MMKP03         2.71      2.58^21   3.30   2.78  0.53     0.70
MMKP04         13.41     30.28^29  8.77   6.65  0.78     1.11
MMKP05         4.42      6.33^28   6.20   3.21  0.59     0.56
MMKP06         3.84      10.44^29  4.94   3.14  0.86     0.90
MMKP07         10.18^12  -         11.98  8.15  0.95     1.95
MMKP08         11.68     -         10.31  6.91  0.94     2.02
MMKP09         6.91      10.16^29  8.57   8.06  2.02     2.36
Total Average  6.64      9.84      6.81   5.08  0.77     1.15
- Relative Error cannot be calculated since no solution found
* Heuristic fails to find solutions for the superscripted number of problems

Figure 7.6: Comparison of CCFT based on Number of Problems Solved to Optimal on New MMKP Test Sets

Figure 7.7: Comparison of CCFT based on Number of Times Equal to Best on New MMKP Test Sets

Table 7.24: Computational Time in milliseconds by CCFT and Other Heuristics for the New MMKP Test Sets

Problem File   MOSER   HEU     CPCCP  FLTS    FanTabu   CCFT
MMKP01         11.97   9.60    5.20   36.97   724.00    391.70
MMKP02         38.57   29.60   7.33   89.63   1884.90   994.20
MMKP03         230.67  123.00  15.10  384.87  8354.67   4681.73
MMKP04         17.89   15.00   7.83   39.53   769.23    505.77
MMKP05         45.87   39.50   7.73   94.77   2006.87   1396.83
MMKP06         332.90  109.00  17.73  388.60  8662.00   4893.30
MMKP07         27.31   -       7.83   334.97  1007.77   686.43
MMKP08         77.00   -       8.87   740.87  2519.13   1794.33
MMKP09         575.53  140.00  24.53  24.53   10064.13  7292.63
Total Average  150.86  66.53   11.35  237.19  3999.19   2515.21
- Computational Time cannot be calculated since no solution found

search published to date for the MMKP and thus must be compared to the current research. The initial solution for this approach is the solution obtained by CPCCP. The RLS method uses a reactive mechanism during the search process which permits the algorithm to release the current solution and consider a better solution. The RLS approach uses two strategies: a degrading strategy, which is applied after improving the current solution by performing some swapping between items, and a deblocking strategy, which allows some diversification and a change in the direction of the search to explore different regions of the search space. The RLS procedure is summarized below

(Hifi et al. 2006).

Step 1: Set S ← S* ← CPsol, the solution obtained from the constructive procedure (CP) of CPCCP. Set p = 0, where p is the number of times a solution can be degraded.

While not StoppingCondition() do
    S* ← CCPsol, where CCPsol is the solution from CCP
    If Z(S) < Z(S*) then S ← S* and p = 0
    If p < Const then S* ← Degrade(S) and p = p + 1
    Else S* ← Deblock(S)
        If Z(S*) > Z(S) then S ← S* and p = 0
        Else exit with best current solution
EndWhile
Return solution vector S with value Z(S)
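The loop above can be sketched in executable form. This is a skeleton of the RLS control flow as summarized in the text, not Hifi et al.'s code: `cp`, `ccp`, `degrade`, `deblock`, and `z` are placeholder callables for the constructive procedure, the swap search, the two strategies, and the objective function.

```python
def reactive_local_search(cp, ccp, degrade, deblock, z,
                          max_iters=100, const=5):
    """RLS skeleton (all callables are illustrative placeholders)."""
    s = s_star = cp()           # Step 1: initial feasible solution
    p = 0                       # consecutive degradations so far
    for _ in range(max_iters):  # StoppingCondition(): iteration count
        s_star = ccp(s)         # local swap search
        if z(s) < z(s_star):
            s, p = s_star, 0    # accept the improved solution
        if p < const:
            s_star = degrade(s)     # degrading strategy
            p += 1
        else:
            s_star = deblock(s)     # deblocking strategy
            if z(s_star) > z(s):
                s, p = s_star, 0
            else:
                break           # exit with best current solution
    return s
```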

The RLS algorithm starts with Step 1, which finds an initial feasible solution using CP. The stopping condition used was the number of iterations, set to 100, with Const set to 5. The main loop performs a local swapping search (CCP) to obtain the first improved solution. The best current solution is updated if the obtained solution has a better objective value than the initial solution. When the local swap search in CCP is unable to improve the solution, the degrading strategy is used to consider another solution while allowing the current solution to degrade. This strategy aims to change the search trajectory and is repeated for a fixed number of iterations. If the solution obtained appears stuck in the region of a local optimum, the deblocking strategy is used to escape that region. Finally, the local swap search is called and used until the StoppingCondition() is met.

The DegradeStrategy is outlined as in Hifi et al. (2006):

Step 1: Set Ji ← GetClass(), where GetClass() selects an arbitrary class.

Step 2: Set j′ ← Exchange(S, Ji, j), where the exchange between j and j′ within the selected class yields a feasible solution.

Step 3: Repeat Steps 1 and 2 a certain number of times and exit with a new solution S.

Step 2 in the DegradeStrategy uses a simple exchange between items of the same class. Two items are exchanged only if the new solution is feasible.

The DeblockStrategy considers the exchange of items involving two selected classes. The steps involved in this strategy are explained below:

Step 1: Set ε = (i1, i2), where i1 and i2 are two different classes.

Step 2: Choose an item from each of the classes (i1, i2). If there exist items (j1, j2), with j1 ∈ ni1 and j2 ∈ ni2, which produce a feasible solution whose objective function value improves on the current solution, Z(S*) > Z(S), then update this to be the current solution: set S ← S* and Z(S) ← Z(S*), and exit DeblockStrategy with S. Else, go to Step 3.

Step 3: Repeat Step 2 until all pairs of classes and items are considered, ε ≠ ∅.
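The two strategies can be sketched together. These are illustrative sketches under an assumed representation (a solution maps each class to its chosen item; `feasible` and `z` are placeholder predicates), not Hifi et al.'s implementation:

```python
import itertools
import random

def degrade_strategy(solution, classes, feasible):
    """DegradeStrategy sketch: swap the chosen item of one arbitrary
    class for another item of the same class, keeping feasibility."""
    ji = random.choice(list(classes))       # GetClass()
    for item in classes[ji]:
        if item != solution[ji]:
            cand = dict(solution, **{ji: item})
            if feasible(cand):
                return cand                 # feasible same-class swap
    return solution

def deblock_strategy(solution, classes, feasible, z):
    """DeblockStrategy sketch: scan pairs of classes and return the
    first feasible exchange that improves the objective, else the
    original solution."""
    for i1, i2 in itertools.combinations(classes, 2):
        for j1 in classes[i1]:
            for j2 in classes[i2]:
                cand = dict(solution, **{i1: j1, i2: j2})
                if feasible(cand) and z(cand) > z(solution):
                    return cand
    return solution
```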

The performance of the FLTS and the TS extensions is compared with the RLS approach. The results are summarized in the following subsection.

7.6.2 Empirical Tests Comparing TS approaches with RLS Approach

The RLS implementation was tested on Khan's 13 available test problem instances, the 24 additional problems generated via Khan's approach, and the new MMKP test sets. The computational results and the comparison with the legacy heuristics and the TS extensions are summarized below.

Computational Results using Khan’s Test Problem Instances

Table 7.25 summarizes the solution quality and the computational time for the RLS and each of the heuristics. Table 7.26 gives a performance summary of all the heuristics for the 13 test problem instances. The results indicate that RLS runs longer than CCFT and is comparable to both FanTabu and CCFT in performance. Table 7.27 computes the average percentage relative error for all heuristics. The average percentage relative error of RLS is better than that of the legacy heuristics and FLTS but is comparable to CCFT and FanTabu on the larger problems.

A performance comparison was done using the additional 24 test problem instances generated via Khan’s approach. The results are tabulated in Tables 7.28 and 7.29. On these problems RLS is dominated by each of the meta-heuristic approaches developed in this research.

Computational Results using New MMKP Test Sets

Table 7.30 summarizes the computational results using the new MMKP test sets. Table 7.31 tabulates the average relative error achieved by each of the heuristics. Figure 7.10 plots the average relative error results. On these problems, RLS seems comparable to FLTS but is dominated by FanTabu and CCFT. RLS does run the fastest (see Table 7.32) but its solution quality is not as good as the approaches developed in this research.

7.7 Summary

The knapsack problem and its variants are complex to solve and form a difficult class of optimization problems. Various greedy heuristic approaches have been developed for solving the KP and its variants. These greedy approaches find a good initial solution which can be used further as a starting solution for an improvement phase to find a final solution. Although these heuristics find

Table 7.25: Solution Quality Comparisons Among All Heuristics on Khan's 13 Available Test Problems

Problem  Exact    MOSER (Time)    HEU (Time)      CPCCP (Time)    RLS (Time)      FLTS (Time)     FanTabu (Time)    CCFT (Time)
I01      173      - (-)           154 (62)        159 (47)        161 (47)        158 (109)       169 (235)         173 (312)
I02      364      294 (125)       354 (109)       312 (78)        354 (125)       351 (125)       354 (235)         352 (563)
I03      1602     1127 (297)      1518 (453)      1407 (172)      1496 (484)      1445 (562)      1557 (1703)       1518 (3094)
I04      3597     2906 (625)      3297 (562)      3322 (234)      3435 (563)      3350 (735)      3473 (2796)       3419 (3047)
I05      3905.7   1068.3 (1062)   3894.5 (1265)   3889.9 (234)    3847.3 (1016)   3905.7 (1328)   3905.7 (6829)     3905.7 (6281)
I06      4799.3   1999.5 (1657)   4788.2 (1219)   4723.1 (360)    4680.6 (1766)   4793.2 (1859)   4799.3 (10828)    4799.3 (10485)
I07      24587*   20833 (20594)   - (-)           23237 (1516)    23828 (15812)   23547 (12875)   23691 (70016)     23739 (34594)
I08      36877*   31643 (59907)   34338 (8453)    35403 (3266)    35685 (59438)   35487 (37891)   35684 (172765)    35698 (78218)
I09      49167*   - (-)           - (-)           47154 (5562)    47574 (110328)  47107 (81547)   47202 (328656)    47491 (130313)
I10      61437*   - (-)           - (-)           58990 (7547)    59361 (207297)  59108 (127234)  58964 (570313)    59549 (220937)
I11      73773*   - (-)           - (-)           70685 (9359)    71565 (367656)  70549 (185266)  70555 (896969)    71651 (319813)
I12      86071*   - (-)           - (-)           82754 (12141)   83314 (577328)  82114 (273750)  81833 (1252360)   83358 (437937)
I13      98429*   - (-)           - (-)           94465 (22031)   95076 (889078)  91551 (408969)  94168 (1715438)   94874 (623984)
- Solution not found; * LP bound; times in ms

Table 7.26: Performance Summary of All Heuristics on Khan's 13 Available Test Problems

                                             Number of times heuristic better than
Heuristic   Times Best   Times Optimal   MOSER   HEU   CPCCP   RLS   FLTS   FanTabu   CCFT
MOSER       0            0               -       1     0       0     0      0         0
HEU         0            0               7       -     4       3     2      0         1
CPCCP       0            0               13      9     -       2     4      4         0
RLS         3            0               13      9     11      -     11     7         5
FLTS        0            1               13      11    9       2     -      3         0
FanTabu     2            2               13      12    9       5     9      -         3
CCFT        5            3               13      11    13      8     12     8         -
(13 total problems)

Table 7.27: Percentage Relative Error Achieved by All Heuristics on Khan's 13 Available Test Problems

Problem   MOSER   HEU     CPCCP   RLS    FLTS   FanTabu   CCFT
I01       -       10.98   8.09    6.94   8.67   2.31      0.00
I02       19.23   2.75    14.29   2.75   3.57   2.75      3.30
I03       29.65   5.24    12.17   6.62   9.80   2.81      5.24
I04       19.21   8.34    7.65    4.50   6.87   3.45      4.95
I05       72.65   0.29    0.40    1.50   0.00   0.00      0.00
I06       58.34   0.23    1.59    2.47   0.13   0.00      0.00
I07       15.27   -       5.49    3.09   4.23   3.64      3.45
I08       14.19   6.89    4.00    3.23   3.77   3.24      3.20
I09       -       -       4.09    3.24   4.19   4.00      3.41
I10       -       -       3.98    3.38   3.79   4.03      3.07
I11       -       -       4.19    2.99   4.37   4.36      2.88
I12       -       -       3.85    3.20   4.60   4.92      3.15
I13       -       -       4.03    3.41   6.99   4.33      3.61
Average   32.65   4.96    5.68    3.64   4.69   3.06      2.79
- Relative error cannot be calculated since no solution found

Table 7.28: Solution Quality of All Heuristics on 24 Additional Test Problems

Prob   Exact   MOSER (Time)   HEU (Time)     CPCCP (Time)   RLS (Time)     FLTS (Time)    FanTabu (Time)   CCFT (Time)
1      1832    1832 (203)     - (-)          1832 (16)      1832 (0)       1698 (109)     1832 (188)       1832 (188)
2      2147    - (-)          2027 (219)     2147 (16)      2147 (78)      2027 (109)     2027 (259)       2147 (265)
3      2020    1877 (203)     - (-)          1863 (16)      1900 (62)      1919 (125)     2020 (402)       2020 (313)
4      3993    3913 (103)     - (-)          3932 (47)      3932 (62)      3908 (157)     3993 (500)       3993 (282)
5      4326    4282 (235)     4241 (579)     4282 (62)      4289 (110)     3916 (203)     4326 (412)       4326 (547)
6      4259    4256 (313)     4044 (704)     4259 (515)     4256 (281)     4259 (265)     4259 (1156)      4259 (719)
7      6335    6275 (750)     6275 (390)     6275 (46)      6156 (156)     6157 (359)     6335 (1109)      6335 (719)
8      6606    6295 (671)     6420 (797)     6250 (468)     6412 (234)     6606 (688)     6606 (1687)      6572 (1093)
9      6662    6536 (703)     6437 (759)     6495 (125)     6449 (578)     6049 (691)     6662 (2406)      6662 (1469)
10     8360    8034 (360)     8267 (766)     8253 (110)     8164 (125)     8274 (500)     8360 (1407)      8360 (906)
11     8799    8625 (531)     8774 (781)     8673 (109)     8704 (328)     8799 (875)     8799 (2532)      8799 (1593)
12     8748    8467 (391)     8642 (969)     8475 (172)     8378 (609)     8748 (1297)    8746 (2840)      8740 (2079)
13     9746    9745 (688)     9655 (875)     9723 (250)     9646 (250)     9717 (828)     9746 (1750)      9746 (1453)
14     10781   10647 (406)    10731 (984)    10276 (219)    10398 (422)    10550 (734)    10774 (2656)     10774 (2015)
15     11011   9939 (687)     10868 (891)    10787 (266)    10380 (875)    10318 (937)    10980 (4062)     10966 (2454)
16     12509   12509 (62)     12403 (812)    12509 (578)    12194 (1016)   12509 (1313)   12509 (2750)     12509 (1953)
17     12963   12756 (516)    12883 (1047)   12700 (609)    12372 (2047)   12921 (1531)   12958 (4297)     12932 (2875)
18     13310   13074 (1547)   13278 (766)    12971 (640)    12547 (2547)   13143 (1859)   13290 (6609)     13292 (4344)
19     14508   14431 (890)    - (-)          14298 (234)    14039 (1468)   14450 (734)    14508 (4485)     14508 (2531)
20     15438   - (-)          15427 (1156)   15051 (672)    14631 (4593)   15172 (1453)   15435 (7125)     15431 (3766)
21     15539   15439 (1484)   15513 (1437)   15454 (781)    15265 (6031)   15448 (2281)   15539 (11734)    15539 (6156)
22     16636   16476 (750)    16556 (641)    16439 (234)    15907 (812)    16621 (1047)   16633 (4532)     16636 (2328)
23     17353   17228 (1406)   17276 (875)    16793 (797)    16156 (875)    17009 (2266)   17348 (7860)     17341 (4125)
24     18010   - (-)          17907 (1609)   17737 (719)    16886 (6141)   17406 (2937)   17749 (11422)    17824 (5937)
- Solution not found; times in ms

Table 7.29: Performance Summary of All Heuristics on 24 Additional Test Problems

                                             Number of times heuristic better than
Heuristic   Times Best   Times Optimal   MOSER   HEU   CPCCP   RLS   FLTS   FanTabu   CCFT
MOSER       0            2               -       9     10      12    8      0         0
HEU         1            0               14      -     13      16    10     1         1
CPCCP       0            4               15      10    -       16    10     1         0
RLS         0            2               10      8     5       -     6      1         0
FLTS        1            5               15      13    12      18    -      1         2
FanTabu     4            14              22      22    20      22    18     -         6
CCFT        2            15              22      23    20      22    19     4         -
(24 total problems)

Table 7.30: Performance Summary of All Heuristics on New MMKP Test Sets

                 No. problems solved to optimal                  No. of times best
Problem   MOSER  HEU  CPCCP  RLS  FLTS  FanTabu  CCFT     MOSER  HEU  CPCCP  RLS  FLTS  FanTabu  CCFT
MMKP01    12     2    11     12   16    29       25       0      0    0      0    0     3        0
MMKP02    7      2    8      0    11    24       23       0      0    0      0    0     5        4
MMKP03    10     3    8      1    11    16       15       0      0    0      0    0     8        5
MMKP04    5      0    4      6    5     24       21       0      0    0      2    0     6        2
MMKP05    5      0    3      5    7     18       19       0      0    0      1    0     6        7
MMKP06    5      0    3      0    6     11       11       0      0    0      1    0     13       5
MMKP07    1      0    2      4    4     21       16       0      0    0      1    0     11       3
MMKP08    1      0    1      2    1     8        9        0      0    0      2    0     13       5
MMKP09    0      0    0      0    0     3        1        0      0    0      1    0     18       7
(30 total problems per file)

Table 7.31: Percentage Relative Error Achieved by All Heuristics on the New MMKP Test Sets

Problem   MOSER        HEU          CPCCP   RLS    FLTS   FanTabu   CCFT
MMKP01    3.61         4.59 (25)    3.49    1.77   3.32   0.02      0.42
MMKP02    2.97         4.31 (20)    3.73    2.73   3.48   0.28      0.29
MMKP03    2.71         2.58 (21)    3.30    2.88   2.78   0.53      0.70
MMKP04    13.41        30.28 (29)   8.77    3.40   6.65   0.78      1.11
MMKP05    4.42         6.33 (28)    6.20    3.37   3.21   0.59      0.56
MMKP06    3.84         10.44 (29)   4.94    2.89   3.14   0.86      0.90
MMKP07    10.18 (12)   -            11.98   6.01   8.15   0.95      1.95
MMKP08    11.68        -            10.31   4.62   6.91   0.94      2.02
MMKP09    6.91         10.16 (29)   8.57    4.43   8.06   2.02      2.36
Average   6.64         9.84         6.81    3.57   5.08   0.77      1.15
- Relative error cannot be calculated since no solution found
(n) The heuristic fails to find solutions for n of the 30 problems

Figure 7.8: Comparison of All Heuristics based on Number of Problems Solved to Optimal on New MMKP Test Sets

Figure 7.9: Comparison of All Heuristics based on Number of Times Equal to Best on New MMKP Test Sets

Table 7.32: Computational Time in milliseconds of All Heuristics on the New MMKP Test Sets

Problem   MOSER    HEU      CPCCP   RLS       FLTS     FanTabu    CCFT
MMKP01    11.97    9.60     5.20    42.77     36.97    724.00     391.70
MMKP02    38.57    29.60    7.33    248.97    89.63    1884.90    994.20
MMKP03    230.67   123.00   15.10   1958.37   384.87   8354.67    4681.73
MMKP04    17.89    15.00    7.83    56.77     39.53    769.23     505.77
MMKP05    45.87    39.50    7.73    184.37    94.77    2006.87    1396.83
MMKP06    332.90   109.00   17.73   1737.97   388.60   8662.00    4893.30
MMKP07    27.31    -        7.83    64.00     334.97   1007.77    686.43
MMKP08    77.00    -        8.87    273.87    740.87   2519.13    1794.33
MMKP09    575.53   140.00   24.53   2081.33   24.53    10064.13   7292.63
Average   150.86   66.53    11.35   738.71    237.19   3999.19    2515.21

Figure 7.10: Comparison of All Heuristics based on Percentage Relative Error on New MMKP Test Sets

a competent final solution, the solution sometimes gets caught in a local optimum. In such cases, metaheuristic approaches play a vital role in finding better solutions than those obtained by greedy approaches. This research creates and tests several metaheuristic approaches.

This chapter discussed the TS metaheuristic approach for solving the MMKP, covering the implementation and the computational tests of a basic TS and two extensions of the TS. Three different tabu search approaches were developed and examined: FLTS, FanTabu, and CCFT. These approaches were discussed and comparative results were presented. All approaches were examined using available test problem instances and new test sets for the MMKP. In addition, these approaches were compared to the most recent local search heuristic published for the MMKP.

On available problem sets, the developed approaches do better than legacy heuristics and the new RLS method. However, on the more diverse set of new MMKP test instances, the metaheuristic approaches developed in this research outshine any approach yet published. Tables 7.33 and 7.34 provide a final summary of how well each heuristic does on the three problem sets using percentage of optimum as the measure.

Table 7.33: Comparison of All Heuristics based on Percentage of Optimum on Khan's MMKP Test Problems

Heuristic   Khan's 13 Available Test Problems   24 Additional Test Problems
MOSER       67.35                               97.90
HEU         95.04                               98.58
CPCCP       94.32                               98.04
RLS         96.36                               96.84
FLTS        95.40                               97.46
FanTabu     96.94                               99.68
CCFT        97.21                               99.89

Table 7.34: Comparison of All Heuristics based on Percentage of Optimum on the New MMKP Test Sets

Problem   MOSER   HEU     CPCCP   RLS     FLTS    FanTabu   CCFT
MMKP01    96.39   95.41   96.51   98.23   96.68   99.98     99.58
MMKP02    97.03   95.69   96.27   97.27   96.52   99.72     99.71
MMKP03    97.29   97.42   96.70   97.12   97.22   99.47     99.30
MMKP04    92.78   69.72   91.23   96.60   93.35   99.22     98.89
MMKP05    95.58   93.67   93.80   96.63   96.79   99.41     99.44
MMKP06    96.16   89.36   95.06   97.11   96.86   99.14     99.10
MMKP07    89.82   -       88.02   93.99   91.85   99.05     98.05
MMKP08    91.36   -       89.69   95.38   93.09   99.06     97.98
MMKP09    93.09   89.84   91.43   95.57   91.94   98.02     97.64
Average   94.39   90.16   93.19   96.43   94.92   99.23     98.85
- Percentage of optimum cannot be calculated since no solution found

8. Summary, Contributions, and Future Avenues

8.1 Summary and Contributions

Heuristics are often developed and tested on available test problem instances without understanding either the problems or the performance behavior based on problem characteristics. The research considered the MDKP, MKP, MCKP, and MMKP variants of the KP. It outlined the legacy heuristic approaches and the test problem generation scheme for all the variants. The research focus was on the MMKP. This study involved empirical analyses of heuristics for the MMKP to gain a deeper understanding of heuristic performance based on problem characteristics. The existing standard MMKP test problems have an insufficient range of problem characteristics and are too limited.

Hence, the current research developed a diverse and adequately sized set of new MMKP test problems covering an entire range of problem correlation structure and varying constraint right-hand side levels. The legacy MMKP heuristics were empirically tested on the new test set and analyzed as a function of problem characteristics. The results provided insights into how solution quality varies with problem characteristics. The research further developed three greedy heuristic approaches: a TYPE heuristic based on the insights gained from the empirical legacy heuristic analysis, and the CH1 and CH2 greedy heuristic approaches. Metaheuristic solution procedures based on tabu search (TS) were also developed and studied. These heuristics showed improved performance on existing MMKP test problems and the new MMKP test set. The contributions of this research

are summarized below:

8.1.1 Legacy Heuristics and Test Problem Analysis

The research studied the legacy heuristic approaches for the KP variants and the test problem generation scheme for the available KP variant test problem instances. This included generating the test problem instances using published approaches for the KP variants and analyzing their problem structure. Such an analysis had never been done. The results draw attention to the lack of variation in the test problem structure and provide evidence that the existing test problem generation schemes are inadequate. The research questions the generality of past solution results to actual problem instances. The narrow ranges of the test problem characteristics lead to incomplete conclusions regarding heuristic performance. Thus, existing test problem instances fail to allow insights regarding heuristic performance based on problem characteristics. The new MMKP test set provides sufficient diversity and a larger pool of test problem instances.

8.1.2 Insights on Heuristic Performance Based on Problem Structure

Insights regarding heuristic performance were gained using structured empirical testing of the legacy MMKP heuristics on a diverse analytical MMKP test set generated from the Cho (2005) competitive MDKP test sets. The analytical test sets were generated by varying the number of classes (5, 10, 25) and the number of knapsack constraints (5, 10, 25) while fixing the number of items per class at 10. The research analyzed heuristic performance based on problem characteristics and uncovered a best performer for each problem type. The knowledge gained by conducting an empirical analysis of various heuristic methods on the analytical test problems was used to develop a new TYPE-based heuristic for the MMKP providing improved performance.

8.1.3 Empirical Science Leading to Theory

Computational testing is necessary to understand the influence of problem characteristics on heuristic performance. This research conducted computational testing to gain insights on how different heuristics perform based on problem characteristics. These insights provided knowledge used to design new robust greedy heuristics whose performance exceeded that of the legacy heuristics.

8.1.4 New Test Set Development

Existing MMKP test problems are inadequate in number, lack diversity, and do not provide a full range of problem information. This research devised an alternative problem generation scheme by transforming Cho's (2005) competitive MDKP test sets into new MMKP test sets. The new test sets are adequate in number and diverse in problem structure. The correlation structure covers the entire range while the constraint right-hand side level varies within each constraint set, resulting in a test set with diverse problem characteristics. When the legacy heuristics and the new heuristics were tested on these new MMKP test sets, the results demonstrated both the performance deficiencies masked by the existing test problems and the diversity of the new test set. Such improvement in test problem availability will improve the overall quality of research on MMKP algorithms.

8.1.5 New Greedy Heuristics Development

Three new greedy heuristics were developed for solving the MMKP. The first heuristic was the TYPE-based heuristic, which pre-processes a problem and solves it using that problem-specific knowledge to obtain computational efficiencies. Pre-processing a problem determines the problem slackness levels and the size of the problem and facilitates using the heuristic which is likely the best performer for those problem structure levels. No such approach had been used for the MMKP. The second heuristic (CH1) relaxed the MMKP to an MDKP, and a greedy heuristic was employed to solve this modified MDKP to obtain a good initial solution. The initial solution was improved using a local improvement phase with a single swap involving a single class. The third heuristic (CH2) extended the second heuristic by employing a more aggressive local improvement phase enabling a double swap involving two classes simultaneously. Both CH1 and CH2 improved performance over the suite of legacy heuristics considered.

8.1.6 Metaheuristic Solution Procedure Development

This research presented initial metaheuristic implementations for the MMKP. Initially, a basic TS approach (FLTS) for the MMKP was designed. FLTS was extended to a Sequential Fan Candidate List approach (FanTabu) and then to the CPCCP Fan Candidate List (CCFT) (Hifi et al. 2004). These extended approaches are based on the Elite Candidate List approach (Glover and Laguna 1997), which intensifies and diversifies the search process by parallel processing. These metaheuristic approaches perform better than the legacy heuristics on the available test problems and outperform all published heuristics on the new MMKP test sets. This work presented the first TS application for the MMKP and yielded the best solution method to date for the MMKP.

8.2 Future Avenues

This research focused on the solution performance of greedy heuristics on the MMKP. It also considered initial metaheuristic approaches focusing on TS and its extensions. There are several other areas that could be examined in future research:

First, this research study restricted the TS move such that a single item is swapped each iteration to give a new feasible solution for the next iteration. Future work can examine more complex moves. The current study uses short-term TS memory. Various long-term TS memory features can be examined. Other avenues include varying the tabu tenure, incorporating intensification and diversification approaches to yield improved solutions, considering alternate aspiration criteria, and considering neighborhood strategies to reduce the neighborhood evaluation effort. Another avenue could be to devise a greedy heuristic to find an initial feasible solution for the TS.

Second, in this study a competitive MDKP test set which was designed systematically by Cho (2005) was transformed to obtain new MMKP test sets. Future work could consider more systematic experimental designs and problem generation methods for MMKP-type problems. One could generate test problems using different problem generation techniques and compare the complexity of the problems based on the heuristic performance. Future research should continue to explicitly consider problem structure issues associated with test sets.

Third, the greedy heuristic approaches produce different solution quality based on the problem characteristics. The solution quality of metaheuristic approaches may also be affected by the problem characteristics. One research area could be to study the performance variation of metaheuristic solution quality based on problem characteristics. Future research can consider developing other metaheuristic approaches such as a genetic algorithm, an ant colony optimization algorithm, or a simulated annealing approach for the MMKP and identify which is the best metaheuristic approach for solving the MMKP. Another avenue could be to develop a hyperheuristic for the MMKP which chooses between several low-level heuristics developed for the MMKP based on dynamic, non-problem-specific information such as objective function value and CPU time. Further research can model real-world applications as MMKP-type problems and use several metaheuristic approaches to solve the actual problem.

Finally, heuristics are developed to solve various combinatorial optimization problems like other KP variants, generalized assignment problems, set covering problems, and bin packing problems. Future research could conduct a systematic empirical study of general heuristic performance which can lead to improved heuristics yielding robust and consistent solution quality. Further research could also examine benchmark test problems for other problem types. If the existing test problems are inadequate, future research could extend to generating a full range of problems and gaining insights regarding heuristic performance applied to these other types of problems.

A. Additional Results from Empirical Tests

Table A.1: New MMKP Test Problem Sets

Problem File   Classes   Items   Knapsacks
MMKP01         5         10      5
MMKP02         10        10      5
MMKP03         25        10      5
MMKP04         5         10      10
MMKP05         10        10      10
MMKP06         25        10      10
MMKP07         5         10      25
MMKP08         10        10      25
MMKP09         25        10      25
(30 problems in each file)

Table A.2: Exact Solutions for the New MMKP Test Problem Sets

Prob   MMKP01   MMKP02   MMKP03   MMKP04   MMKP05   MMKP06   MMKP07   MMKP08   MMKP09
1      401      876      2221     413      888      2194     339      778      1986
2      463      881      2063     418      885      2302     389      864      2259
3      487      924      2228     438      915      2239     373      731      1897
4      446      901      2259     416      853      2139     363      748      2195
5      430      896      2325     393      905      2285     396      845      2003
6      459      796      2285     386      912      2275     430      897      2057
7      437      826      2290     404      839      2131     373      827      2118
8      386      816      1923     444      840      2214     393      911      1999
9      466      952      2152     460      917      1992     324      867      2018
10     415      903      2238     422      916      2158     364      894      2179
11     461      908      2348     403      887      2096     424      772      2136
12     456      897      2172     386      797      2257     407      774      2231
13     474      925      2213     394      876      2078     387      800      2280
14     423      951      1979     344      882      2050     429      777      2244
15     438      850      2290     463      900      2247     366      880      2006

Table A.3: Exact Solutions for the New MMKP Test Problem Sets (Continued)

Prob   MMKP01   MMKP02   MMKP03   MMKP04   MMKP05   MMKP06   MMKP07   MMKP08   MMKP09
16     442      900      2215     463      817      2306     429      807      2125
17     443      839      2279     418      917      2333     322      738      2033
18     394      910      2281     433      879      2124     430      843      2183
19     425      894      2280     393      926      2255     344      766      2216
20     471      859      2200     373      889      2252     346      755      2071
21     377      951      2245     436      854      2245     388      932      2175
22     461      877      2278     457      907      2268     395      857      2101
23     407      912      2289     417      831      2343     443      736      2121
24     415      833      2228     458      869      2282     385      890      1902
25     411      817      2325     419      812      2121     446      800      2289
26     448      931      1968     434      870      2222     331      843      2064
27     485      877      2231     475      853      2268     344      901      1950
28     477      901      2331     458      898      2009     359      805      2053
29     432      914      2163     427      873      2058     289      819      1960
30     443      839      2177     385      829      2268     396      862      2177

Figure A.1: Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 10 classes, 10 items, 5 knapsacks

Figure A.2: Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 25 classes, 10 items, 5 knapsacks

Figure A.3: Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 5 classes, 10 items, 10 knapsacks

Figure A.4: Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 10 classes, 10 items, 10 knapsacks

Figure A.5: Range of Correlation Values Between Objective Function and Constraint Coefficients for the generated MMKP Test Problem with 25 classes, 10 items, 10 knapsacks

B. Details on Cho Generation Approach

Cho’s Competitive MDKP Test Problem Generation Approach

Cho's (2005) competitive MDKP test sets have nine files containing 30 problems each, for a total of 270 test problem instances. The problem sets were generated by varying the number of items as 50, 100, and 250 and the number of knapsacks as 5, 10, and 25. The correlation ρCAi between the objective function coefficients and the ith constraint coefficients was randomly generated from a Uniform Distribution on the interval (−0.9, 0.9). The interconstraint correlation ρAiAj, between the ith and jth constraint coefficients, is set as the midpoint of the range defined by ρCAi and ρCAj using Equation (B.1). This maintains the correlation matrix ℜ for the test problem coefficients as a positive semidefinite matrix. The correlation matrix ℜ is generated by using Procedure CorrGeneration and Procedure Iman and Conover (Iman and Conover 1982) as referenced in Cho (2005). Procedure CorrGeneration and Procedure Iman and Conover for the 5-knapsack case are discussed below.

ρAiAj = ρCAi · ρCAj (B.1)

Procedure CorrGeneration

Step 1: Specify the desired correlation matrix ℜ in terms of ρCAk, k = 1, ..., 5.

Step 2: Calculate ρAiAj using Equation (B.1).

Step 3: Specify the correlation matrix ℜ, which is a 6 × 6 matrix for 5 knapsacks, as follows:

        | 1      ρCA1   ρCA2   ρCA3   ρCA4   ρCA5  |
        | ρCA1   1      ρA1A2  ρA1A3  ρA1A4  ρA1A5 |
   ℜ =  | ρCA2   ρA1A2  1      ρA2A3  ρA2A4  ρA2A5 |
        | ρCA3   ρA1A3  ρA2A3  1      ρA3A4  ρA3A5 |
        | ρCA4   ρA1A4  ρA2A4  ρA3A4  1      ρA4A5 |
        | ρCA5   ρA1A5  ρA2A5  ρA3A5  ρA4A5  1     |

Step 4: Call Procedure Iman and Conover to generate the problem set using the correlation matrix ℜ.
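Procedure CorrGeneration can be sketched directly. The following is an illustrative NumPy version for an arbitrary number of knapsacks m (the dissertation's example fixes m = 5); the function name corr_generation is my own.

```python
import numpy as np

def corr_generation(rho_ca):
    """Build the (m+1) x (m+1) target correlation matrix.
    Row/column 0 holds the objective-constraint correlations rho_CAk;
    the constraint-constraint entries follow Equation (B.1):
    rho_AiAj = rho_CAi * rho_CAj."""
    rho_ca = np.asarray(rho_ca, dtype=float)
    m = len(rho_ca)
    R = np.eye(m + 1)
    R[0, 1:] = R[1:, 0] = rho_ca                 # Steps 1 and 3: first row and column
    for i in range(m):
        for j in range(i + 1, m):                # Step 2: Equation (B.1)
            R[i + 1, j + 1] = R[j + 1, i + 1] = rho_ca[i] * rho_ca[j]
    return R
```

With entries built this way the matrix equals a rank-one term plus a nonnegative diagonal, so it stays positive semidefinite for any ρCAk in (−1, 1), which is what makes it usable as a Cholesky-factorable target in the next procedure.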

Procedure Iman and Conover

Iman and Conover (1982) discuss a method to induce a Spearman correlation structure given by ℜ on a multivariate input random variable set. This approach ensures the final correlation structure of the input vector, measured by Spearman correlation, is close to the desired correlation matrix ℜ while preserving the marginal distributions. The following steps are involved in the Iman and Conover approach for generating the correlation matrix.

Step 1: Perform Cholesky factorization of the desired correlation matrix ℜ to get a lower triangular matrix P such that ℜ = P · Pᵀ.

Step 2: Generate 6 random number vectors used to randomize the Van der Waerden scores (VW scores) given by Equation (B.2).

VW score_i = Φ⁻¹[i/(N + 1)]   (B.2)

where i is the rank order, N = 100, and Φ⁻¹ is the inverse CDF of the standard normal distribution.

Step 3: Generate a VW matrix H using the VW scores from Step 2.

Step 4: Use the VW matrix H to generate a sample correlation matrix T .

Step 5: Perform Cholesky factorization of matrix T to obtain a lower triangular matrix Q such that T = Q · Qᵀ.

Step 6: Generate a transformation matrix S as S = P · Q⁻¹.

Step 7: Generate a new matrix H* = H · Sᵀ.

Table B.1: Problem Set Parameters for Cho (2005) Competitive MDKP Test Problem Sets

Parameters                 Values
Problems                   270 problems (30 problems in 9 files)
Decision Variables         50, 100, 250
Constraints                5, 10, 25
Slackness                  Set ai ∼ U(0.2, 0.8) (varies for each constraint)
Correlation                Define αi ∼ U(−0.9, 0.9) (set to ensure positive semi-definiteness)
Coefficient Distribution   Objective function coefficient cj ∼ DU(1, 100); constraint coefficient aij ∼ DU(1, ri), ri ∼ DU(40, 90)

Step 8: Extract each column vector, compute the ranks, and generate a rank matrix M.

Step 9: Generate 6 random vectors with a marginal distribution of input variables for the objective function and the knapsack constraints.

Step 10: Sort the random numbers in descending order.

Step 11: Rearrange the random numbers from Step 10 according to the rank order in matrix M.
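Steps 1 through 11 can be condensed into a short NumPy routine. This is a hedged sketch of the Iman and Conover (1982) reordering, not Cho's code; here samples holds the independently generated marginal columns (Step 9) and R is the target matrix ℜ.

```python
import numpy as np
from statistics import NormalDist

def iman_conover(samples, R, seed=0):
    """Reorder the columns of `samples` (N x d) so their rank correlation
    approximates R while each column's marginal distribution is preserved."""
    N, d = samples.shape
    P = np.linalg.cholesky(R)                           # Step 1: R = P P^T
    inv_cdf = NormalDist().inv_cdf
    vw = np.array([inv_cdf(i / (N + 1)) for i in range(1, N + 1)])  # Step 2: (B.2)
    rng = np.random.default_rng(seed)
    H = np.column_stack([rng.permutation(vw) for _ in range(d)])    # Step 3: VW matrix
    T = np.corrcoef(H, rowvar=False)                    # Step 4: sample correlation
    Q = np.linalg.cholesky(T)                           # Step 5: T = Q Q^T
    S = P @ np.linalg.inv(Q)                            # Step 6: S = P Q^-1
    H_star = H @ S.T                                    # Step 7: H* = H S^T
    M = H_star.argsort(axis=0).argsort(axis=0)          # Step 8: rank matrix M
    out = np.empty_like(samples, dtype=float)
    for k in range(d):                                  # Steps 9-11: reorder marginals
        out[:, k] = np.sort(samples[:, k])[M[:, k]]
    return out
```

Because the output columns are merely permutations of the input columns, the marginal distributions survive exactly; only the joint ordering, and hence the rank correlation, changes.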

The coefficients of the objective function and the constraints are generated using different distributions. The objective function coefficients pij are generated from a Discrete Uniform Distribution on the interval (1, 100) and the constraint coefficients wij are generated from a Discrete Uniform Distribution on the interval (1, ri), where ri is picked from a Discrete Uniform Distribution on the interval (40, 90). ℜ and ri are used to generate the objective function coefficients and the constraint coefficients. The coefficients are generated as the coefficients for the MDKP. Table B.1 summarizes the Cho (2005) competitive MDKP test problem sets.
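The marginal sampling in Table B.1 is straightforward. The sketch below uses hypothetical names and deliberately omits the correlation-inducing reordering described above; it only shows how one row of constraint coefficients is drawn per knapsack.

```python
import numpy as np

def sample_coefficients(n_items, n_knapsacks, seed=None):
    """Draw MDKP-style coefficients per Table B.1:
    profits p_j ~ DU(1, 100); weights w_ij ~ DU(1, r_i) with r_i ~ DU(40, 90)."""
    rng = np.random.default_rng(seed)
    p = rng.integers(1, 101, size=n_items)                 # p_j ~ DU(1, 100)
    r = rng.integers(40, 91, size=n_knapsacks)             # r_i ~ DU(40, 90)
    w = np.vstack([rng.integers(1, ri + 1, size=n_items) for ri in r])
    return p, w, r
```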

Bibliography

Aboudi, R. and K. Jornsten (1994). Tabu search for general zero-one integer programs using the

pivot and complement heuristic. ORSA Journal on Computing 6(1), 82–93.

Ahuja, A., O. Ergun, J. Orlin, and A. Punnen (2002). A survey of very large-scale neighborhood

search techniques. Discrete Applied 123(1-3), 1–3.

Akbar, M. M., O. Ergun, and A. P. Punnen (2001). Heuristic solutions for the multiple-choice

multi-dimensional knapsack problem. In International Conference on Computer Science,

San Francisco, USA.

Akbar, M. M., M. S. Rahman, M. Kaykobad, E. G. Manning, and G. C. Shoja (2006). Solv-

ing the multidimensional multiple-choice knapsack problem by constructing convex hulls.

Computers and Operations Research 33(5), 1259–1273.

Armstrong, R. D., D. S. Kung, P. Sinha, and A. A. Zoltners (1983). A computational study of

a multiple-choice knapsack algorithm. ACM Transactions on Mathematical Software 9(2),

184–198.

Armstrong, R. D., P. Sinha, and A. A. Zoltners (1982). The multiple-choice nested knapsack

model. Management Science 28(1), 34–43.

Balas, E. and C. H. Martin (1980). Pivot and complement - a heuristic for 0-1 programming.

Management Science 26(1), 86–96.

Balas, E. and E. Zemel (1980). An algorithm for large zero-one knapsack problems. Operations

Research 28(5), 1130–1154.

228 Basnet, C. and J. Wilson (2005). Heuristics for determining the number of warehouses for storing

non-compatible products. International Transactions in Operational Research 12(5), 527–

538.

Beasley, J. E. (2006). Problems available at the website. http://people.brunel.ac.uk/ mas-

tjjb/jeb/info.html.

Bertsimas, D. and R. Demir (2002). An approximate dynamic programming approach to multi-

dimensional knapsack problems. Management Science 48(4), 550–565.

Blum, C. and A. Roli (2003). Metaheuristics in combinatorial optimization: Overview and con-

ceptual comparison. ACM Computing Surveys 35(3), 268–308.

Cabot, V. A. (1970). An enumeration algorithm for knapsack problems. Operations Re-

search 18(2), 306–311.

Chen, L., S. Khan, K. F. Li, and E. G. Manning (1999). Building an adaptive multimedia system

using the utility model. Parallel and Distributed Processing, Lecture Notes in Computer

Science 1586, Springer Verlag (International Workshop on Parallel and Distributed Realtime

Systems (WPDRTS 7), ACM/IEEE/US Navy Surface Warfare Center), San Juan PR, April

1999), 289–298.

Cho, Y. K. (2002). Empirical analysis of several different heuristic techniques in multi-

dimensional knapsack poblems. AFIT/GOR/ENS/02M-04, Air Force Institute of Technol-

ogy, Wright-Patterson Air Force Base, Ohio.

Cho, Y. K. (2005). Developing New Multidimensional Knapsack Heuristics Based on Empiri-

cal Analysis of Legacy Heuristics. AFIT/DS/ENS/05-01, Air Force Institute of Technology,

Wright-Patterson Air Force Base, Ohio.

Cho, Y. K., J. T. Moore, and R. R. Hill (2003a). Developing a new algorithm for the multidimen-

sional knapsack problem using lagrange multipliers and the core problem. In Proceedings of

the 8th International Journal of Industrial Engineering Conference.

Cho, Y. K., J. T. Moore, and R. R. Hill (2003b). Developing a new greedy heuristic based on

229 knowledge gained via structured empirical testing. International Journal of Industrial Engi-

neering 10(4), 504–510.

Cho, Y. K., J. T. Moore, R. R. Hill, and C. H. Reilly (2006). Exploiting empirical knowledge

for bi-dimensional knapsack problem heuristics. Submitted for review in Computers and

Industrial Engineering.

Chocolaad, C. A. (1998). Solving geometric knapsack problems using tabu search. AFIT/GOR/ENS/98M-05, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio.

Chu, P. C. and J. E. Beasley (1998). A genetic algorithm for the multiconstraint knapsack problem. Journal of Heuristics 4(1), 63–86.

Dawande, M., J. Kalagnanam, P. Keskinocak, R. Ravi, and F. S. Salman (2000). Approximation algorithms for the multiple knapsack problem with assignment restrictions. Journal of Combinatorial Optimization 4(2), 171–186.

Drexl, A. D. (1988). A simulated annealing approach to the multiconstraint zero-one knapsack problem. Computing 40(1), 1–8.

Dyer, M. E., N. Kayal, and J. Walker (1984). A branch and bound algorithm for solving the multiple-choice knapsack problem. Journal of Computational and Applied Mathematics 11(2), 231–249.

Eglese, R. W. (1990). Simulated annealing: A tool for operational research. European Journal of Operational Research 46(3), 271–281.

Feng, Y. (2001). Resource allocation in Ka-band satellite systems. Master's thesis, University of Maryland. Technical Report for the Institute for Systems Research.

Fisher, M. L. (1981). The Lagrangian relaxation method for solving integer programming problems. Management Science 27(1), 1–18.

Fréville, A. and G. Plateau (1986). Heuristic and reduction methods for multiple constraints 0-1 linear programming problems. European Journal of Operational Research 24(2), 206–215.

Frieze, A. and M. Clarke (1984). Approximation algorithms for the m-dimensional 0-1 knapsack problem: Worst-case and probabilistic analyses. European Journal of Operational Research 15(1), 100–109.

Gavish, B. and H. Pirkul (1985). Efficient algorithms for solving multiconstraint zero-one knapsack problems to optimality. Mathematical Programming 31(1), 78–105.

Gendreau, M. and J. Y. Potvin (2005). Metaheuristics in combinatorial optimization. Annals of Operations Research 140(1), 189–213.

Geoffrion, A. M. (1967). Integer programming by implicit enumeration and Balas' method. SIAM Review 9(2), 178–190.

Glover, F. (1968). Surrogate constraints. Operations Research 16(4), 741–749.

Glover, F. (1977). Heuristics for integer programming using surrogate constraints. Decision Sciences 8(1), 156–166.

Glover, F. and H. Greenberg (1986). Future paths for integer programming and links to artificial intelligence. Computers and Operations Research 13(5), 533–549.

Glover, F. and G. A. Kochenberger (2002). Handbook of Metaheuristics. Norwell, Massachusetts: Kluwer Academic Publishers. 37–82.

Glover, F. and M. Laguna (1997). Tabu Search. Kluwer Academic Publishers.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.

Hanafi, S. and A. Fréville (1998). An efficient tabu search for the 0-1 multidimensional knapsack problem. European Journal of Operational Research 106(2), 659–675.

Henderson, D., S. H. Jacobson, and A. W. Johnson (2003). The theory and practice of simulated annealing. In F. Glover and G. Kochenberger (Eds.), Handbook of Metaheuristics, pp. 287–320. Norwell, Massachusetts: Kluwer Academic Publishers.

Hifi, M. (2006). MMKP problems available at the website. http://www.laria.u-picardie.fr/hifi/OR-Benchmark/MMKP/.

Hifi, M., M. Michrafy, and A. Sbihi (2004). Heuristic algorithms for the multiple-choice multidimensional knapsack problem. Journal of the Operational Research Society 55(12), 1323–1332.

Hifi, M., M. Michrafy, and A. Sbihi (2006). A reactive local search-based algorithm for the multiple-choice multi-dimensional knapsack problem. Computational Optimization and Applications 33(2-3), 271–285.

Hill, R. R. (1998). An analytical comparison of optimization problem generation methodologies. In D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan (Eds.), Proceedings of the 1998 Winter Simulation Conference, pp. 609–615.

Hill, R. R. and C. Hiremath (2005). Improving genetic algorithm convergence using problem structure and domain knowledge in multidimensional knapsack problems. International Journal of Operational Research 1(1/2), 145–159.

Hill, R. R. and C. H. Reilly (2000). The effects of coefficient correlation structure in two-dimensional knapsack problems on solution procedure performance. Management Science 46(2), 302–317.

Hiremath, C. S. and R. R. Hill (2007). New greedy heuristics for the multiple-choice multidimensional knapsack problem. International Journal of Operational Research 2(4), 495–512.

Hoff, A., A. Løkketangen, and I. Mittet (1996). Genetic algorithms for 0/1 multidimensional knapsack problems. In Proceedings Norsk Informatikk Konferanse, NIK '96, Molde College, Britveien 2, 6400 Molde, Norway, pp. 291–301.

Hooker, J. N. (1994). An empirical science of algorithms. Operations Research 42(2), 201–212.

Hung, M. S. and J. C. Fisk (1978). An algorithm for zero-one multiple knapsack problems. Naval Research Logistics Quarterly 25(3), 571–579.

Iman, R. and W. Conover (1982). A distribution free approach to inducing rank correlation among input variables. Communications in Statistics: Simulation and Computation 11(3), 311–334.

Kellerer, H., U. Pferschy, and D. Pisinger (2004). Knapsack Problems. Springer-Verlag.

Khan, S., K. F. Li, E. G. Manning, and M. Akbar (2002). Solving the knapsack problem for adaptive multimedia systems. Studia Informatica, Special Issue on Combinatorial Problems 2(2), 157–178.

Kozanidis, G., E. Melachrinoudis, and M. M. Solomon (2005). The linear multiple choice knapsack problem with equity constraints. International Journal of Operational Research 1(1/2), 52–73.

Lardeux, A., G. Knippel, and G. Geffard (2003). Efficient algorithms for solving the 2-layered network design problem. In Proceedings of the International Network Optimization Conference, Evry, France, pp. 367–372.

Lau, H. C. and M. K. Lim (2004). Multi-period multi-dimensional knapsack problem and its applications to available-to-promise. In Proceedings of the International Symposium on Scheduling (ISS), Hyogo, Japan, pp. 94–99.

Lee, J. S. and M. Guignard (1988). An approximate algorithm for multidimensional zero-one knapsack problems - a parametric approach. Management Science 34(3), 402–410.

Li, V. C. (2005). Tight oscillations tabu search for multidimensional knapsack problems with generalized upper bound constraints. Computers and Operations Research 32(11), 2843–2852.

Li, V. C. and G. L. Curry (2005). Solving multidimensional knapsack problems with generalized upper bound constraints using critical event tabu search. Computers and Operations Research 32(4), 825–848.

Li, V. C., G. L. Curry, and E. A. Boyd (2004). Towards the real time solution of strike force asset allocation problems. Computers and Operations Research 31(2), 273–291.

Lin, E. Y. and C. Wu (2004). The multiple-choice multi-period knapsack problem. Journal of the Operational Research Society 55(2), 187–197.

Løkketangen, A. (1995). A comparison of a genetic algorithm and a tabu search method for 0/1 multidimensional knapsack problems. In Proceedings of the 1995 Nordic Operations Research Conference, University of Iceland, Reykjavik.

Løkketangen, A. and F. Glover (1998). Solving zero-one mixed integer programming problems using tabu search. European Journal of Operational Research 106(2), 624–658.

Loulou, R. and E. Michaelides (1979). New greedy-like heuristics for the multidimensional 0-1 knapsack problem. Operations Research 27(6), 1101–1113.

Lucasius, C. B. and G. Kateman (1997). Towards solving subset selection problems with the aid of the genetic algorithm. In R. Manner and B. Manderick (Eds.), Parallel Problem Solving from Nature, Volume 2, Amsterdam, pp. 239–247. Elsevier Science Publishers.

Magazine, M. and O. Oguz (1984). A heuristic algorithm for the multidimensional zero-one knapsack problem. European Journal of Operational Research 16(3), 319–326.

Martello, S., D. Pisinger, and P. Toth (1999). Dynamic programming and strong bounds for the 0-1 knapsack problem. Management Science 45(3), 414–424.

Martello, S. and P. Toth (1985). Algorithm 632: A program for the 0-1 multiple knapsack problem. ACM Transactions on Mathematical Software 11(2), 135–140.

Martello, S. and P. Toth (1997). Upper bounds and algorithms for hard 0-1 knapsack problems. Operations Research 45(5), 768–778.

Martello, S. and P. Toth (2003). An exact algorithm for the two-constraint 0-1 knapsack problem. Operations Research 51(5), 826–835.

Mitchell, M. (1996). An introduction to genetic algorithms. MIT Press.

Moser, M., D. P. Jokanovic, and N. Shiratori (1997). An algorithm for the multidimensional multiple-choice knapsack problem. IEICE Transactions in Fundamentals E80-A(3), 582–589.

Osorio, M. A. and E. G. Hernandez (2004). Cutting analysis for MKP. In Proceedings of the Fifth Mexican International Conference, School of Computer Science, Universidad Autonoma de Puebla, Mexico, pp. 298–303.

Parra-Hernandez, R. and N. Dimopoulos (2005). A new heuristic for solving the multi-choice multidimensional knapsack problem. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 35(5), 708–717.

Pierce, J. F. (1968). Application of combinatorial programming to a class of all zero-one integer programming problems. Management Science 15(3), 191–209.

Pirkul, H. (1987). A heuristic solution procedure for the multiconstraint zero-one knapsack problem. Naval Research Logistics 34(2), 161–172.

Pisinger, D. (1995). A minimal algorithm for the multiple-choice knapsack problem. European Journal of Operational Research 83(2), 394–410.

Pisinger, D. (1997). A minimal algorithm for the 0-1 knapsack problem. Operations Research 45(5), 758–767.

Pisinger, D. (1999a). Core problems in knapsack algorithms. Operations Research 47(4), 570–575.

Pisinger, D. (1999b). An exact algorithm for large multiple knapsack problems. European Journal of Operational Research 114(3), 528–541.

Pisinger, D. (2001). Budgeting with bounded multiple-choice constraints. European Journal of Operational Research 129(3), 471–480.

Raidl, G. R. (1999). The multiple container packing problem: A genetic algorithm approach with weighted codings. ACM SIGAPP Applied Computing Review, ACM Press 7(2), 22–31.

Rardin, R. L. and R. Uzsoy (2001). Experimental evaluation of heuristic optimization algorithms: A tutorial. Journal of Heuristics 7(3), 261–304.

Reeves, C. R. (Ed.) (1993). Modern Heuristic Techniques for Combinatorial Problems. John Wiley and Sons.

Reilly, C. H. (2006). Synthetic optimization-problem generation: Show us the correlations! Submitted for review in the International Journal of Communications.

Renner, G. and A. Ekart (2003). Genetic algorithms in computer-aided design. Elsevier Computer-Aided Design 35(8), 709–726.

Roli, A. (2005). How to solve it? An invitation to metaheuristics. Dipartimento di Scienze, Università degli Studi G. D'Annunzio, Chieti. PowerPoint presentation slides. http://www.sci.unich.it/ aroli/dida/ iasc/lucidi0405/seminario-mh.4perpage.pdf.

Romaine, J. M. (1999). Solving the multidimensional multiple knapsack problem with packing constraints using tabu search. AFIT/GOR/ENS/99M-15, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio.

Senju, S. and Y. Toyoda (1968). An approach to linear programming with 0-1 variables. Management Science 15(4), 196–207.

Shih, W. (1979). A branch and bound method for the multiconstraint zero-one knapsack problem. The Journal of the Operational Research Society 30(4), 369–378.

Sinha, P. and A. A. Zoltners (1979). The multiple-choice knapsack problem. Operations Research 27(3), 503–515.

Soyster, A. L., B. Lev, and W. Slivka (1978). Zero-one programming with many variables and few constraints. European Journal of Operational Research 2(3), 195–201.

Thesen, A. (1975). A recursive branch and bound algorithm for the multidimensional knapsack problem. Naval Research Logistics Quarterly 22(2), 341–353.

Thiel, J. and S. Voss (1993). Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms. INFOR 32(4), 226–242.

Toyoda, Y. (1975). A simplified algorithm for obtaining approximate solutions to zero-one programming problems. Management Science 21(12), 1417–1427.

Vasquez, M. and J. Hao (2001). A hybrid approach for the 0-1 multidimensional knapsack problem. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI-01), pp. 328–333.

Weingartner, H. M. and D. N. Ness (1967). Methods for the solution of the multidimensional 0/1 knapsack problem. Operations Research 15(1), 83–103.

Wolsey, L. A. (1998). Integer Programming. A Wiley-Interscience Publication, John Wiley and Sons, Inc.

Zalzala, M. S. and P. J. Fleming (Eds.) (1997). Genetic Algorithms in Engineering Systems. London: Institution of Electrical Engineers.

Zeisler, N. J. (2000). A greedy multiple-knapsack heuristic for solving air mobility command's intratheater airlift problem. AFIT/GOR/ENS/00M-21, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio.

Zemel, E. (1984). An O(n) algorithm for the linear multiple choice knapsack problem and related problems. Information Processing Letters 18(3), 123–128.

Zionts, S. (1972). Generalized implicit enumeration bounds on variables for solving linear programs with zero-one variables. Naval Research Logistics Quarterly 19(1), 165–181.
