UNIVERSITY OF SOUTHAMPTON

FACULTY OF BUSINESS, LAW AND ART

Southampton Business School

From archaeology to 3D printing: Packing problems in the three dimensions

by

Carlos Lamas Fernández

Thesis for the degree of Doctor of Philosophy

June 2018

UNIVERSITY OF SOUTHAMPTON

ABSTRACT

FACULTY OF BUSINESS, LAW AND ART

Southampton Business School

Doctor of Philosophy

FROM ARCHAEOLOGY TO 3D PRINTING: PACKING PROBLEMS IN THE THREE DIMENSIONS

by Carlos Lamas Fernández

This thesis is a study on three cutting and packing problems involving irregular items. These problems are highly relevant in areas such as transportation, additive manufacturing or the garment industry. We investigate a special type of one-dimensional problem appearing in industry; a novel problem in two dimensions entailing irregular shapes and free rotations; and an open problem in three dimensions. Our aims are to find strategies to deal with irregular shapes, particularly geometric tools, and solution methods for problems with unusual constraints.

In the first part we look at an industrial problem related to the management of helicopter fleets. We model, and test with realistic data, a bin packing problem where the objective is to find the minimum number of aircraft needed to lift a collection of items. The characteristics of this problem allow us to relax the geometrical constraints and consider it as a variant of the one-dimensional bin packing problem, but its many problem-specific constraints make this a multi-objective problem that, to the best of our knowledge, is new in the literature.

In the second part, we deal with a novel problem in two dimensions, motivated by the deciphering of an ancient Aztec codex. The problem itself is a novel packing problem with irregular shapes, an irregular container, free rotation and with the overlap and containment constraints relaxed. We provide a constructive algorithm and a metaheuristic procedure that are able to find satisfying solutions for an open question in the deciphering of the codex.

Finally, in the last part we treat three-dimensional irregular shapes. We adopt a discretised approach that allows us to generate quick intersection tests and we develop the no-fit voxel. This is an extension of the no-fit polygon, a mainstream tool for two-dimensional packing problems that had not been extended to three dimensions in the literature. Using this tool, we investigate local search neighbourhoods and metaheuristic algorithms to find efficient packings and are able to provide an ILP model based on the no-fit voxel to locally improve the packing layouts.

Contents

Declaration of Authorship xv

Acknowledgements xvii

1 Introduction 1
  1.1 Objectives and contribution 3
  1.2 Layout 4

2 Literature Review 5
  2.1 Overview of C&P problem types 6
    2.1.1 Output maximisation 6
      2.1.1.1 Knapsack problem 7
      2.1.1.2 Identical item packing problem 7
      2.1.1.3 Placement problem 7
    2.1.2 Input minimisation 8
      2.1.2.1 Bin packing problem 9
      2.1.2.2 Cutting stock problem 9
      2.1.2.3 Open dimension problem 9
    2.1.3 Benefits of the typology 10
  2.2 One-dimensional literature 10
    2.2.1 Exact methods 11
    2.2.2 Approximation algorithms 14
  2.3 Two-dimensional literature 15
    2.3.1 Regular 15
    2.3.2 Irregular 16
      2.3.2.1 Exact methods 17
      2.3.2.2 Metaheuristics 19
  2.4 Three-dimensional literature 22
    2.4.1 Regular 23
    2.4.2 Irregular 24

3 Methodology 27
  3.1 Geometry representations 27
    3.1.1 Phi-objects 28
    3.1.2 Polygonal representations 31
      3.1.2.1 Avoiding overlap in two-dimensions 32
        Direct trigonometry 32
        No-fit polygon 34
      3.1.2.2 Avoiding overlap in three-dimensions 36
    3.1.3 Discrete representations 38
      Raster methods 39
      Shape approximations 41
  3.2 Optimisation 41
    3.2.1 Complexity 42
    3.2.2 Integer Models 43
    3.2.3 Heuristics 44
      3.2.3.1 Constructive algorithms 45
      3.2.3.2 Local search 45
      3.2.3.3 Approximation algorithms 46
    3.2.4 Metaheuristics 47
      3.2.4.1 Iterated local search 47
      3.2.4.2 Simulated annealing 48
      3.2.4.3 Tabu search 48
      3.2.4.4 Variable neighbourhood search 49
      3.2.4.5 Genetic algorithms 49
      3.2.4.6 Other metaheuristics 50
    3.2.5 Matheuristics 50
    3.2.6 Hyper-heuristics 51
  3.3 Conclusion 51

4 Efficient management of heterogeneous helicopter fleets 53
  4.1 Introduction 53
  4.2 Problem description 56
    Bin packing 57
    Placement constraints 57
    Objectives 58
    4.2.1 Bounds 60
    4.2.2 Exact method 61
    4.2.3 Constructive 64
    4.2.4 Placement rules 64
    4.2.5 Distance balance heuristics 66
    4.2.6 Genetic Algorithm 66
      4.2.6.1 Chromosome representation 67
      4.2.6.2 Crossover operator 67
      4.2.6.3 Mutation 70
      4.2.6.4 Initial population 70
      4.2.6.5 Fitness function 70
      4.2.6.6 Algorithm description 71
      4.2.6.7 Parameters 71
    4.2.7 Heterogeneous bins 73
  4.3 Computational experiments 74
    4.3.1 Randomly generated instances 75
    4.3.2 Realistic instance 77
  4.4 Conclusions 78

5 An archaeological irregular packing problem: Packing to decipher an ancient Aztec codex 81
  5.1 Introduction 82
    5.1.1 Historical context 82
      5.1.1.1 Acolhúa arithmetic and geometry 83
      5.1.1.2 Geographical location of the terrains 84
    5.1.2 Problem description 84
    5.1.3 Related work 87
  5.2 Solution methods 88
    5.2.1 Constructive algorithm 88
      Finding potential placements 89
      5.2.1.1 Alternative objective function 91
    5.2.2 Local search 92
    5.2.3 Genetic algorithm 94
      5.2.3.1 Encoding & decoding 95
      5.2.3.2 Fitness 96
      5.2.3.3 Crossover 96
      5.2.3.4 Mutation 96
  5.3 Implementation and Computational results 96
    5.3.1 Irregular strip packing instances 97
    5.3.2 Real instance 97
  5.4 Conclusion and further work 99

6 Voxel-Based 3D Irregular Packing 103
  6.1 Introduction 104
  6.2 Literature review 104
  6.3 Voxelised three-dimensional packing 105
    6.3.1 Problem description 106
    6.3.2 Voxel representation 106
    6.3.3 Constraint handling 106
  6.4 ILP formulation 109
  6.5 Building blocks of the 3D packing heuristics 111
    6.5.1 Constructive algorithm 111
    6.5.2 Sequence-based neighbourhoods 112
      Sequence swap neighbourhood 112
      Rule change neighbourhood 112
    6.5.3 Layout-based neighbourhoods 113
      Axis aligned direction neighbourhood 113
      Enclosing cube neighbourhood 114
      Piece swap neighbourhood 114
    6.5.4 Strategic Oscillation 114
    6.5.5 Objective function 115
  6.6 Search algorithms 116
    6.6.1 Iterated local search 116
    6.6.2 Iterated Tabu Search 117
    6.6.3 Variable Neighbourhood Search 119
  6.7 Computational experiments 123
    6.7.1 Randomly generated instances 123
    6.7.2 Instances from the literature 127
    6.7.3 Realistic instances 130
    6.7.4 Discussion 133
  6.8 Conclusion 133

7 Conclusions 135

References 139

List of Figures

2.1 Output maximisation problem types. (Adapted from Wäscher et al. (2007)) 6
2.2 Input minimisation problem types. (Adapted from Wäscher et al. (2007)) 8
2.3 On the top row, two pieces p and q, and their no-fit polygon, NFP(p, q). The placement of the pieces is determined by their lower left corner point (highlighted). On the bottom row, two examples of the usage: a contact position on the right (q is placed on the boundary of NFP(p, q)) and an overlapping position on the left (q is placed in the interior of NFP(p, q)). 17
2.4 Approximation of a piece by inscribed circles. (Extracted from Jones (2013)) 18
2.5 Layouts obtained by packing the different sequences of a problem with three pieces with a bottom-left rule. The shorter packing length is given by the first packing sequence, ABC. 20
2.6 Pairs of pieces matched using information from the no-fit polygon. (Extracted from Elkeran (2013)) 22

3.1 Creation of more sophisticated phi-objects by composing various primary phi-objects. 29
3.2 Basic objects: (a) convex polygon K, (b) circular segment D, (c) hat H, and (d) horn V. (Extracted from Chernov et al. (2012)) 30
3.3 Example of a polygon (left) from the shapes0 instance (Oliveira et al., 2000) and a polyhedron with triangulated faces (right) from an instance in Stoyan et al. (2005). 31
3.4 Intersection of two non-convex polygons: (a) polygons to intersect with their orientations, (b) classification of intersection points, (c) edges of resulting polygons and (d) resulting polygons. 33
3.5 (a) Polygons P and Q with reference points highlighted, (b) inner loop of NFP_PQ, (c) outer loop of NFP_PQ, (d) NFP_PQ. 35
3.6 Irregular piece represented as a binary matrix. (Extracted from Bennell & Oliveira (2008)) 39
3.7 Raster representation with boundary definition. (Extracted from Segenreich & Faria Braga (1986)) 39
3.8 Pixel representation with information about overlap. (Extracted from Ramesh Babu & Ramesh Babu (2001)) 40
3.9 Example of a model represented by a voxel octree. (Extracted from Schwarz & Seidel (2010)) 41
3.10 Steps of a hill climb algorithm. It starts from the initial solution L0 and iteratively explores the neighbourhoods of the incumbent solution, N1, ..., N4, until it arrives at the local optimum L*. 46

4.1 Performance graph of two types of aircraft. 59


4.2 Crossover example where rows represent the bins and the shaded blocks are the items. 69

5.1 Example of one terrain from the codex. The original drawing (left) and the same drawing with sides to scale (right). The measurements of the sides are in tlalcuahuitl, each equivalent to 2.5 m. Sticks represent 1 unit, dots 20 units and arrows 0.5 units. Groups of 5 sticks are connected on the top (Williams & Jorge y Jorge, 2008). 83
5.2 The two possible shapes of the terrain from Figure 5.1 when the area information is taken into account. 84
5.3 Aerial image of El Topote. Map data: INEGI, Google, DigitalGlobe 2018. 85
5.4 Example of the two test placements for a pair of vertices v from a shape and w from the container. 89
5.5 The status of the layout after placing the first piece (left) and after placing the second piece on a vertex that is new in the updated container (right). 90
5.6 Layouts generated by the constructive algorithm for different α values. Top row, from left to right: α = 0.25, Ut = 42.4%; α = 0.50, Ut = 67.1%; α = 0.75, Ut = 85.7%. Bottom row, from left to right: α = 0.85, Ut = 96%; α = 0.95, Ut = 99.1%; α = 1, Ut = 97.3%. 92
5.7 Layout generated by the constructive algorithm on the left (Ut = 97.3%), and the resulting layout after applying local search on the right (Ut = 98.5%). Local search stopped after 1000 non-improving iterations and had values a = b = 5. 94
5.8 The resulting layout after running the BRKGA for the instances Dighe1 (left) and Dighe2 (right). 98
5.9 The resulting layout after running the BRKGA for the instances Glass1 (left), Glass2 (centre) and Glass3 (right). 98
5.10 Summary of the utilisations found by the GA across 100 different runs with 100 generations each. 99
5.11 Solution found by the GA with utilisation of 99.5958%. 100
5.12 Solution found by the GA with utilisation of 99.5911%. 100
5.13 Solution found by the GA with utilisation of 99.5734%. 100
5.14 Solution found by the GA with utilisation of 99.5536%. 100
5.15 Solution found by the GA with utilisation of 99.5355%. 100

6.1 Example of an irregular piece represented by voxels with its reference voxel highlighted. 106
6.2 Two arbitrary pieces, p and q, and their no-fit voxel NFV_{p,q}. 108
6.3 Neighbourhood of axis aligned directions, δ = 1 (left) and δ > 1 (right). The red point is the original reference point l_p and the grey points are the possible reference points in the neighbourhood. 113
6.4 Neighbourhood of enclosing cube, δ = 1. The grey points are the reference points in the neighbours, while the original reference point is located in the centre of the cube. 114
6.5 Example of the steps to generate a 'blob' in two dimensions. In (a) 9 points are drawn randomly, in (b) they are connected in a closed loop, in (c) a Gaussian blur is applied (the values between 0 and 1 are represented by the shade of grey) and in (d) values are set to either 0 or 1, depending on a threshold value. 124

6.6 Three pieces from the medium instances with style 'round' (left), 'neutral' (centre) and 'peaked' (right). 126
6.7 Instance blobs9, best utilisations achieved by our algorithms: 27.69% by ILS (left), 29.52% by ITS (centre) and 30.41% by VNS (right). 126
6.8 Instance Shapes 3D. 128
6.9 Best layout found for the instance Merged5 by VNS, with a corresponding height of 29.92 in the mesh representation. 130
6.10 Comparison of best utilisation found by Egeblad et al. (2009) and VNS for the different Merged instances. 130
6.11 Example piece from the Engine instance. 131
6.12 Best layout found for the instance Engine by ILS (left), ITS (centre) and VNS (right). 132
6.13 Best layout found for the instance Chess by ILS (left), ITS (centre) and VNS (right). 132

7.1 Best layouts found for the Shapes0/Shapes1 instance with free rotation. For lengths 52 and 51, the overlap is less than 0.01% of the instance area. For the length 50 the overlap amounts to 0.06% of the instance area. 137

List of Tables

4.1 Description of the items from the randomly generated instances. 75
4.2 Description of the bins used in the randomly generated instances. 76
4.3 Multistart and genetic algorithm results comparison for random instances. 76
4.4 Comparison of the MILP model, the genetic algorithm and the multistart constructive for one homogeneous instance. 77
4.5 Comparison of the algorithms' performance for a realistic mission simulating a rifle company lifted by different configurations of Aérospatiale Pumas and Boeing CH-47 Chinooks with a minimum flight range of 500 km. The genetic algorithm was executed with a population of 100 individuals. 78

5.1 Results obtained for instances of the 2D irregular packing literature. 97
5.2 Parameters used for solving the Topotitla instance. 98

6.1 Instances solved and their features. 123
6.2 Parameters used to generate the blobs instances. 126
6.3 Comparison of results (final height, in voxels) for the blobs instances. 127
6.4 Comparison of results (percentage of container utilisation) for the blobs instances. 127
6.5 Comparison of results (final height) for the instances found in St05 (Stoyan et al., 2005) and 3DNEST (Egeblad et al., 2009). 129
6.6 Comparison of results (final height) for the instances found by St04 (Stoyan et al., 2004) and HAPE3D (Liu et al., 2015). 129
6.7 Results for the instance Shapes 3D. 129
6.8 Results for the instance Engine, after 10 runs of 1 hour each. 131
6.9 Results for the instance Chess, after 10 runs of 1 hour each. 132


Declaration of Authorship

I, Carlos Lamas Fernández, declare that the thesis entitled From archaeology to 3D printing: Packing problems in the three dimensions and the work presented in the thesis are both my own, and have been generated by me as the result of my own original research. I confirm that:

• this work was done wholly or mainly while in candidature for a research degree at this University;

• where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated;

• where I have consulted the published work of others, this is always clearly attributed;

• where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work;

• I have acknowledged all main sources of help;

• where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself;

• none of this work has been published before submission.

Signed:......

Date:......


Acknowledgements

I would like to thank my supervisors, Prof. Julia Bennell and Dr. Antonio Martinez-Sykora for their continued support during the writing of this thesis. I was very lucky to benefit from their extensive knowledge of cutting and packing and, more importantly, their very friendly approach to supervision; without them it would not have been possible (or enjoyable!) to write this thesis.

Part of this PhD included dealing with very applied problems that needed external input. I would like to thank Michael Fox and Sqn Ldr O'Brien from Dstl for their help in shaping the problem description of our first research chapter. I am also very grateful to Dr. Marta Cabo Nodar, from the Mexico Autonomous Institute of Technology (ITAM), who was instrumental to our second research chapter. On top of her valuable contributions on the scientific side, she invited me to ITAM, drove me to Topotitla and introduced me to three academics from the National Autonomous University of Mexico (UNAM), Dr. María del Carmen Jorge y Jorge, Dr. Clara Garza Hume and Dr. Arturo Olvera Chávez. Their knowledge about the codex of Vergara was also fundamental to our research; to all of them: thank you.

During my PhD I met many people who, in one way or another, have been important to my research. The list, certainly not exhaustive, includes the members of the ESICUP working group. Their meetings have been an amazing place to share my findings and learn about state-of-the-art cutting and packing research. Also (now Dr.) Ranga Abeysooriya, with whom I shared research ideas, concerns, successes and adventures across Europe; the PhD journey would have been much lonelier without his company. And my office mates and the staff from the Southampton Business School, whom I always found to be very supportive.

I would like to thank my family, my parents and my sister, for their support all these years. But this list would be far from complete without mentioning Vera, who has been by my side all this time. She has been an endless source of inspiration, joy and understanding; and it is difficult to imagine finishing this thesis without her continued support.

Finally, I want to thank the Southampton Business School for funding my PhD and the great support it has given me as a student. Also, I would like to warmly thank my PhD examiners, Dr. Francisco Parreño and Dr. Stefano Coniglio, for an interesting discussion during the Viva and their many helpful comments on my work.


Chapter 1

Introduction

Cutting and packing is an extensive knowledge area and, while it has been widely researched in the past decades, some topics seem to have been elusive in the literature. In this category we find, among others, problems with very specific constraints linked to certain industrial applications, the handling of rotations in two-dimensional irregular problems and the handling of irregular objects in three-dimensional packing. In this work, we focus our attention on these three topics, and devote one research chapter to each of them.

At first sight, these problems might seem diverse and to address different objectives, but in this study they have an important characteristic in common: dealing with irregular shapes. This is a key similarity, for example, when examining geometric representations and tools for two- and three-dimensional problems. Furthermore, in all of these problems we will propose different strategies to approximate and simplify, where possible, the difficulties arising from this irregularity.

Before going in depth into the specific objectives of the thesis, let us give a brief introduction to cutting and packing and its applications.

Cutting and packing problems have been studied in the literature since the thirties (Kantorovich, 1960)1 and more intensely since the sixties (Gilmore & Gomory, 1963) due to their importance in a number of industries such as garment, wood, glass or paper manufacturers, or service providers such as transportation companies. At the time, Kantorovich was concerned with a number of problems related to optimising production processes across the nation, a matter that he argued was exclusive to the Soviet economy. One of them was the reduction of the waste material generated when dividing raw material for its usage, a core application of cutting and packing. It turned out that these problems appealed not only to the Soviet Union and, a few years later, Gilmore and Gomory also studied similar problems, with their work (Gilmore & Gomory,

1Kantorovich’s work was published in Management Science in 1960 (received two years earlier), but the original document is dated from 1939.


1963) specifically centred around cutting paper rolls in order to minimise waste. The paper industry is one of the key application areas of one-dimensional packing, which finds similar problems in the wood cutting industry. A common objective in these problems is to decide how to cut a number of customer orders from the minimum amount of rolls of raw material, minimising wasted material and cost; this is called a bin packing problem. Sometimes not all the items can be packed or cut, and the objectives can include deciding which items to pack and which to leave unpacked in order to maximise a profit function; in that case the problem is called a knapsack problem.

Nowadays, while one-dimensional problems are still very relevant in theory and practice, the research field is mature. More recently, there is a growing body of research devoted to problems in two and three dimensions where the physical shape of the objects and their geometrical location is important.

In two dimensions, the packing of irregular shapes has applications in the garment industry, as well as leather and glass cutting, among others. Again, the problem has many variants, but they often coincide in the objective of reducing waste. The most abundant variant in the literature is strip packing. In this problem, a set of irregular pieces needs to be accommodated in a strip of fixed height and arbitrary length, and the objective is to reduce the length of the strip; a problem directly applicable when manufacturing clothes from long rolls of fabric with a fixed width, where the placement of the patterns has a great impact on the waste generated when cutting.

In three dimensions, the packing of rectangular boxes has been extensively studied under the name of container loading. This is a particular case of three-dimensional irregular packing, where all the pieces have a rectangular shape, and it appears very often in the transport industry. However, some problems such as 3D printing or component layout design depend on the irregular shape of the pieces and cannot be easily simplified, creating the need for geometry representations and packing algorithms that can handle the general case.

Nevertheless, higher dimensional problems do not always have geometric considerations. One example is resource allocation problems. In this kind of problem, a set of machines needs to be assigned different tasks. Each of the machines has a number of capacities that cannot be exceeded and each task has a certain requirement in each of them. The problem of combining the tasks in order to make better use of each of the machines is not straightforward and can be modelled as a multi-dimensional bin packing problem, as the sketch below illustrates. This is an important consideration to make, since sometimes higher dimensional problems can be reduced to lower dimensions, or at least avoid their geometric considerations, greatly simplifying them. We use this approach in Chapter 4.
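As an illustration, consider a minimal first-fit sketch for this multi-dimensional view of task allocation. All data and names here are illustrative, and this is not the method developed in Chapter 4:

```python
# Hedged sketch: first-fit assignment of tasks to machines, where each
# machine has several capacities (e.g. CPU and memory) and each task a
# requirement in each of them -- a multi-dimensional bin packing view.
# The data is purely illustrative.

def first_fit_vector(tasks, capacity, n_machines):
    """Assign each task (a tuple of requirements) to the first machine
    whose remaining capacity can hold it in every dimension."""
    remaining = [list(capacity) for _ in range(n_machines)]
    assignment = []
    for task in tasks:
        for m, free in enumerate(remaining):
            if all(r >= t for r, t in zip(free, task)):
                for d, t in enumerate(task):
                    free[d] -= t
                assignment.append(m)
                break
        else:
            assignment.append(None)  # no machine can hold this task
    return assignment

print(first_fit_vector(
    tasks=[(4, 2), (3, 5), (2, 2), (5, 1)],  # (CPU, memory) requirements
    capacity=(8, 8),                          # per-machine capacities
    n_machines=2))                            # prints [0, 0, 1, 1]
```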

Finally, there is an enormous variety of packing problems that arise from including additional constraints, often linked to practical applications. For example, problems linked to delivery can have last-in-first-out constraints or weight limits (see for example Bortfeldt & Wäscher (2013)), while component layout design might include stability or behavioural constraints (see for example Kovalenko et al. (2015)). In the course of the thesis, we tackle problems in one, two and three dimensions, and also include problem-specific constraints. We look in detail at our objectives in the next section.

1.1 Objectives and contribution

The aim of this thesis is to contribute to the field of Cutting and Packing by developing new methods to tackle some applied problems that are new or that have received very little attention in the literature. They all involve irregular shapes, which we treat in an approximate way.

Our objectives can be summarised as follows:

• Develop different methods for handling irregular shapes where the geometry is non-critical (Chapter 4), uncertain (Chapter 5) or too complex (Chapter 6).

• Develop methods for solving specific cutting and packing problems that include unusual constraints.

• Define a new type of cutting and packing problem where data is uncertain and the solution quality is subjective or, at least, difficult to quantify.

• Develop new geometric methods for handling 3D irregular packing problems.

To this end, the main contributions of the thesis have been structured in the format of three papers:

Paper 1:

In this work, we solve a real industry problem proposed by Dstl. The aim is to assist in the decision making involved in the strategic management of a helicopter fleet. This is done by means of a multi-objective multi-dimensional bin packing problem, where troops and cargo are allocated to a fleet of mixed aircraft in order to evaluate the requirements of diverse missions. In this case, the geometrical aspects of the packing are not critical to the solution and are relaxed; however, some problem-specific constraints, such as the optional positioning of items (inside the helicopters or underslung), have to be considered, maintaining the multi-dimensional characteristics of the problem. We provide a mixed-integer linear program formulation for the problem and solve it to optimality for small instances. For larger instances, we develop a constructive and a metaheuristic algorithm that achieve competitive results in short computational times.

Paper 2:

This study deals with a novel interdisciplinary application of cutting and packing. We build on work done by archaeologists and specialised mathematicians on the deciphering of an ancient Aztec codex depicting drawings of agricultural terrains. To answer the open question of their potential location within a geographical area, we design a constructive algorithm and a metaheuristic that are able to work with relaxed cutting and packing constraints (containment and overlap) as well as deal with free rotations. We test the packing capabilities of the algorithm on well-known irregular packing instances and provide successful potential layouts for the terrains of the codex.

Paper 3:

In this research paper we deal with the geometrical aspects of three-dimensional packing. We describe a discretised approach to the 3D irregular strip packing problem. To this end, we adapt the idea of the no-fit polygon (Art, 1966) to voxelised three-dimensional objects. This tool allows us to formulate the packing problem as an integer linear program. However, to solve meaningful instances, we also propose new metaheuristics that make use of both the exact model and the no-fit voxel. We test our results on a range of benchmark instances, some of them generated by us and some of them found in the literature. We find our results to be competitive with the pre-existing literature in most cases and with potential to be applied to real-world problems.

1.2 Layout

The remainder of the thesis is organised as follows. Chapter 2 gives an overview of the published works in the area of cutting and packing relevant to this thesis. In Chapter 3 we discuss some methodological aspects important for our research, namely geometry and optimisation techniques. In Chapters 4, 5 and 6, we present our first, second and third research papers respectively. Finally, in Chapter 7 we present our concluding remarks and summarise the contributions of this thesis.

Chapter 2

Literature Review

Cutting and packing problems are, in a nutshell, problems that involve the assignment of a set of small objects to one or various larger objects in order to be packed or cut. They are subject to two basic constraints: the smaller objects cannot overlap each other, and they must be fully contained within the larger ones. This general definition covers problems that appear in a variety of industry processes, such as deciding how to cut large paper rolls to meet demand from various clients, how to place garment patterns on a piece of fabric or how to arrange a few objects within a 3D printer before printing them.

The literature on this topic is very broad, and papers can be classified by a number of different criteria, for example by the dimension of the objects to be packed, by the assortment of objects or by the objective pursued. In an attempt to bring all these problems under the same umbrella, Wäscher et al. (2007) developed a cutting and packing typology. They divide the problems into two main types, depending on their objective: input minimisation and output maximisation. In Section 2.1 we describe briefly these basic problem types and their main subtypes, and point to recent reviews about them where available.

This thesis is structured around three research chapters that refer to problems in one, two and three dimensions respectively. While each of these chapters contains its own brief literature review related to the specific topic, we complement them here, giving a wider overview of the literature.

In Section 2.2 we give a brief review of one-dimensional packing problems. In Section 2.3, we deal with two-dimensional problems. We review briefly regular problems and, in more detail, irregular packing problems. Finally, in Section 2.4 we review the most relevant literature in regular and irregular three-dimensional packing problems.


2.1 Overview of C&P problem types

In W¨ascher’s typology, problems are described in terms of large and small objects. Large objects are sometimes called bins or containers and are the ones used to accommodate small objects. The small objects (that can be called pieces, items or simply ‘objects’ depending on the application) are the ones that need to be placed in or cut from the large items. For example, in the classical knapsack problem one decides what to carry on a knapsack (or backpack) to optimise the trade-off between value and capacity. In this example, the large item would be the knapsack, while the small objects would be all the objects that are available to carry in it. At the top layer, the typology distinguishes two groups of problems depending on their objective: output maximisation and input minimisation. In the output maximisation problems, the objective is driven by maximising the value of the small objects (for example, by deciding on their placement and selection, like in the knapsack example) while in input minimisation problems, the objective is driven by minimising the usage of large objects (for example, by using the smallest possible container to pack something). Within these two groups, there are further refinements that we review next.

2.1.1 Output maximisation

In this class of problems the objective is to optimise the value of a packing by means of selecting the adequate assortment of small items.

Figure 2.1: Output maximisation problem types. (Adapted from Wäscher et al. (2007))

In this category, Wäscher et al. (2007) distinguish three types of problems (Figure 2.1), depending on the diversity of shapes of the small items: these could all be equal (identical item packing problem), of a few types (placement problem), or mostly of different shapes (knapsack problem).

2.1.1.1 Knapsack problem

In the knapsack problem the quantity of large objects is limited and the objective is to find a selection of maximal profit small items and an arrangement within the large object(s). In its classical form, it can be formulated as follows:

\begin{align}
\text{maximise}\quad & \sum_{i=1}^{N} c_i x_i \tag{2.1}\\
\text{subject to:}\quad & \tag{2.2}\\
& \sum_{i=1}^{N} w_i x_i \leq W \tag{2.3}\\
& x_i \in \{0, 1\}, \quad \forall i = 1, \ldots, N \tag{2.4}
\end{align}

where N is the number of items and the binary variables x_i indicate whether item i is placed in the knapsack (x_i = 1) or not (x_i = 0). The weights c_i and w_i represent the value and weight of item i respectively, and W is the capacity of the knapsack. Martello et al. (2000) provide a review of some of the most efficient methods to solve the problem, while Pisinger (2005) proposes new (harder) instances for the problem. Of course, the knapsack problem has been generalised to higher dimensions; see for example Egeblad & Pisinger (2009) for its three-dimensional extension.
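The classical formulation above can be solved in pseudo-polynomial time by dynamic programming when the weights are integer. The following is a minimal sketch of that textbook approach, with purely illustrative data:

```python
# Hedged sketch: dynamic programming for the 0-1 knapsack problem
# (2.1)-(2.4), assuming integer weights. Runs in O(N*W) time and is
# therefore pseudo-polynomial. The data below is illustrative.

def knapsack(values, weights, capacity):
    """Return the maximal total value and the selected item indices."""
    n = len(values)
    best = [0] * (capacity + 1)  # best[j]: best value with weight <= j
    taken = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        # iterate capacities downwards so each item is used at most once
        for j in range(capacity, weights[i] - 1, -1):
            candidate = best[j - weights[i]] + values[i]
            if candidate > best[j]:
                best[j] = candidate
                taken[i][j] = True
    selected, j = [], capacity  # backtrack to recover the selection
    for i in range(n - 1, -1, -1):
        if taken[i][j]:
            selected.append(i)
            j -= weights[i]
    return best[capacity], selected[::-1]

value, items = knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
print(value, items)  # 220 [1, 2]
```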

2.1.1.2 Identical item packing problem

In the identical item packing problem the objective is to pack the maximum number of identical items inside a container. A good example of this is the manufacturer's pallet loading problem, reviewed in detail in Silva et al. (2016). In this problem a set of rectangular boxes needs to be packed in layers on a pallet of fixed dimensions, and only orthogonal rotations are allowed. This problem has been solved efficiently to optimality for most of the classic instances in the literature (Dowsland, 1987; Birgin et al., 2010) but, as is noted in Silva et al. (2016), the analysis of its complexity and some classes of instances still remain an open research topic.

2.1.1.3 Placement problem

The placement problem is very similar to the knapsack problem with one container, with the difference that the objects to be placed are weakly heterogeneous (i.e. there is a large repetition of the same items). Apart from the one-dimensional version, two-dimensional versions of the problem have been studied with rectangles (Hadjiconstantinou & Christofides, 1995; Scheithauer & Sommerweiß, 1998) and in three dimensions, where it is known as the Single Container Loading Problem (Wäscher et al., 2007). The Single Container Loading Problem consists in the placement of boxes in a single large container, maximising the profit. Its importance will be highlighted in Section 2.4, since it not only has practical importance on its own, but is often used as a part of exact procedures to solve bin packing problems.

2.1.2 Input minimisation

In the input minimisation problems, the objective is to minimise the cost of the large items necessary for accommodating a fixed amount of small items. This means that all the small items have to be placed or cut, but the way it is done has to be chosen carefully to optimise a certain objective.

Figure 2.2: Input minimisation problem types. (Adapted from Wäscher et al. (2007))

For this category, Wäscher et al. (2007) also distinguish the problems based on the shapes of the small items (Figure 2.2). In the bin packing problem these are very diverse, while in the cutting stock problem only a few item types are available. The other type of problem, the open dimension problem, studies the case where the shape of the large container can be modified and is part of the objective function.

2.1.2.1 Bin packing problem

The bin packing problem is one of the classical problems in operational research. It has been studied since the thirties (Kantorovich, 1960) due to its practical importance. It involves the assignment of a set of small items into bins with different costs, with the aim of minimising the total cost of packing (or cutting). We will provide a more formal definition and review some relevant literature about the bin packing problem in Section 2.2. Furthermore, a state-of-the-art review about this problem can be found in Delorme et al. (2016).

2.1.2.2 Cutting stock problem

The cutting stock problem is very similar to bin packing and can in fact be seen as a variation of it (Delorme et al., 2016), with the difference that the items to be packed are now grouped into identical types and assigned demands. This distinction comes in handy when the assortment of items to be packed is weakly heterogeneous, as it makes it possible to use different formulations to solve the problem.

2.1.2.3 Open dimension problem

In the open dimension problem there is only one large item, but one of its dimensions is not fixed. One variant of this problem that appears often in the literature is the two-dimensional irregular version, called the irregular strip packing problem, or nesting. This problem is of great practical relevance in industries such as garment or metal. Since we will refer often to this problem in the thesis, let us introduce a more formal definition.

We consider a set of N pieces P = {p_1, ..., p_N} represented by simple polygons and a rectangular container C(L) with fixed height H and variable length L, whose bottom left corner is placed at (0, 0). If we denote by p_i(x_i, y_i) the translation of polygon p_i by the point (x_i, y_i), the nesting problem can be defined as follows:

\begin{align}
\text{minimise}\quad & L \tag{2.5}\\
\text{subject to:}\quad & \tag{2.6}\\
& p_i(x_i, y_i) \cap C(L) = p_i(x_i, y_i), \quad i = 1, \ldots, N \tag{2.7}\\
& p_i(x_i, y_i) \cap p_j(x_j, y_j) = \emptyset, \quad \forall i, j \in \{1, \ldots, N\},\ i \neq j \tag{2.8}\\
& x_i, y_i \in \mathbb{R}, \quad i = 1, \ldots, N \tag{2.9}\\
& L \in \mathbb{R} \tag{2.10}
\end{align}

The objective is to find the smallest length container that can accommodate all the pieces inside (equation (2.7)) without them overlapping each other (equation (2.8)). Note that at this point we do not consider how these two constraints can be modelled in a practical way, as we will investigate solving methods for this problem in Chapter 3. A good review of publications about the nesting problem is given in Bennell & Oliveira (2009).
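To make the containment and non-overlap constraints concrete, the sketch below checks them for one candidate layout using the Shapely geometry library. This is an assumption about tooling (any polygon library with intersection tests would serve), and the pieces and placements are illustrative:

```python
# Hedged sketch: verifying the nesting constraints (2.7)-(2.8) for one
# candidate layout with Shapely. Pieces and placements are illustrative.
from shapely.geometry import Polygon, box
from shapely.affinity import translate

H, L = 10.0, 12.0
container = box(0.0, 0.0, L, H)  # C(L) with bottom left corner at (0, 0)

pieces = [
    Polygon([(0, 0), (4, 0), (0, 3)]),          # p_1: a triangle
    Polygon([(0, 0), (3, 0), (3, 3), (0, 3)]),  # p_2: a square
]
placements = [(1.0, 1.0), (6.0, 2.0)]           # (x_i, y_i) per piece

placed = [translate(p, xoff=x, yoff=y)
          for p, (x, y) in zip(pieces, placements)]

# Constraint (2.7): each translated piece lies fully inside the container.
contained = all(container.contains(p) for p in placed)

# Constraint (2.8): no two distinct pieces overlap. Touching boundaries
# are tolerated here by testing the area of the intersection.
no_overlap = all(placed[i].intersection(placed[j]).area < 1e-9
                 for i in range(len(placed))
                 for j in range(i + 1, len(placed)))

print(contained, no_overlap)  # True True for this particular layout
```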

2.1.3 Benefits of the typology

This typology provides a framework that organises cutting and packing problems into categories. It helps to highlight the relevance of the different areas and to identify the gaps where more research is needed. To date this typology has been cited around 500 times according to Web of Science, and over a thousand according to Google Scholar. This is a clear indication of its relevance. The vast majority of papers published in the last decade have identified themselves within this typology and, therefore, research has been very structured since then. In our case, we identify our first research paper, Chapter 4, with a multi-objective variant of the multiple bin size bin packing problem. Our second research paper, Chapter 5, is not a conventional packing problem, making it difficult to identify with a single type. Nonetheless, the typology was still useful to bring together the relevant research, in this case on 2D irregular packing, needed to advance in this area. Finally, our third research paper, Chapter 6, is devoted to the minimisation of the height of a three-dimensional container, and therefore the specific problem type is the 3D irregular open dimension problem.

2.2 One-dimensional literature

One-dimensional packing problems are of great importance in operational research. Since they do not consider the geometric constraints of their higher dimensional counterparts, their meaning is more abstract and can be applied to a wide range of fields. The knapsack problem (Martello & Toth, 1990a) and bin packing have gathered a lot of attention. Both problems are NP-hard (Martello & Toth, 1990a); however, researchers have designed clever algorithms to find solutions for reasonable size instances. For example, the classical knapsack problem can be solved in pseudo-polynomial time by dynamic programming (Bellman, 1957).

There is a large body of literature dealing with one-dimensional problems. In this thesis, we consider a bin packing problem (Chapter 4) and, for this reason, we centre our efforts on reviewing the literature available for the bin packing problem. In this section we review the classical problem and the different approaches available to solve it, while in the smaller review in Chapter 4 we focus on the extensions of the problem relevant to the specific problem that we are solving.

The classical bin packing problem consists in assigning a collection of n items with sizes w_i ≤ 1, i = 1, ..., n to the minimum number of bins of capacity 1. Each item has to be assigned to exactly one bin and, of course, bins cannot exceed their capacity. Let B be a known upper bound on the number of bins needed to pack all the items. If we use the binary variables y_j to denote whether bin j is needed or not, and the binary variables x_ij to determine whether item i is assigned to bin j or not, we can formulate the problem as follows:

\begin{align}
\text{minimise}\quad & \sum_{j=1}^{B} y_j \tag{2.11}\\
\text{s.t.}\quad & \sum_{i=1}^{n} w_i x_{ij} \leq y_j, \quad j = 1, \ldots, B \tag{2.12}\\
& \sum_{j=1}^{B} x_{ij} = 1, \quad i = 1, \ldots, n \tag{2.13}\\
& x_{ij}, y_j \in \{0, 1\}, \quad i = 1, \ldots, n,\ j = 1, \ldots, B \tag{2.14}
\end{align}

This problem was shown to be NP-hard in the seventies (Garey & Johnson, 1979) and there has been a lot of interest in its research, both for exact and approximation algorithms. As mentioned in Section 2.1.2, bin packing is closely related to the cutting stock problem, and therefore some of the references given below were initially intended for cutting stock problems, but are equally applicable to bin packing problems.
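As an illustration of model (2.11)–(2.14), the sketch below solves a tiny instance with the PuLP modelling library. The choice of PuLP (and its bundled CBC solver) is an assumption about tooling, and the data is illustrative; any MILP solver would serve:

```python
# Hedged sketch: the bin packing model (2.11)-(2.14) on a toy instance,
# written with PuLP (assumed installed, e.g. pip install pulp).
import pulp

w = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5]  # item sizes, each <= 1
n = len(w)
B = n                                     # trivial upper bound on bins

prob = pulp.LpProblem("bin_packing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n), range(B)), cat="Binary")
y = pulp.LpVariable.dicts("y", range(B), cat="Binary")

prob += pulp.lpSum(y[j] for j in range(B))                       # (2.11)
for j in range(B):                                               # (2.12)
    prob += pulp.lpSum(w[i] * x[i][j] for i in range(n)) <= y[j]
for i in range(n):                                               # (2.13)
    prob += pulp.lpSum(x[i][j] for j in range(B)) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("bins used:", int(pulp.value(prob.objective)))
for j in range(B):
    items = [i for i in range(n) if x[i][j].value() > 0.5]
    if items:
        print(f"bin {j}: items {items}")
```

Note that constraint (2.12) both enforces the capacity and links the x and y variables: a bin can only receive items if it is opened.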

2.2.1 Exact methods

The most natural way to solve bin packing problems to optimality is, perhaps, to generate a tree search that will lead to the optimal solution by enumeration. One of the earliest algorithms of this kind is due to Eilon & Christofides (1971). They propose an ILP formulation, similar to (2.12)-(2.14). The only difference is that they do not use the variables y_j; instead they assign weights in the objective function to the x_ij variables that increase as their j value (the bin where they are) increases. Their formulation is then solved by Balas' Additive Algorithm (Balas et al., 1965). Balas' algorithm works by enumerating solutions in the form of a tree search. This and other tree-search algorithms are characterised by fixing the values of some variables at some points called nodes and 'branching' on them. In the case of Balas' algorithm, for example, nodes split into two branches; in one the variable is fixed to 0 and in the other to 1. Some branches will not be worth exploring, because it can be demonstrated that they cannot lead to optimal (or even to feasible) solutions. One of the main differences among tree-search algorithms is the way they decide on the branching strategies and the procedure they use to discard some nodes that will not lead to the optimal solution. To deal with the latter, Martello & Toth (1990b) define a dominance criterion between subsets of items whose sum is below the capacity of a bin.

A set S1 is said to dominate another set S2 if all the items in S2 can be grouped in such a way that the sum of their sizes in each group is equal to a unique item from S1. They prove that a solution that assigns S1 to a bin is not worse than a solution that assigns instead the set S2 to that bin. This result can be used to reduce the size of the branching trees in exact algorithms, and this is precisely the technique the same authors use for the MTP algorithm. In this algorithm (Martello & Toth, 1990a), items are assigned to bins ordered by decreasing weight. At each node, several branches are created by assigning the next item to all the open bins, plus a node that opens a new one. Of course, some nodes are not explored if the lower bound of the assignment is greater than or equal to any solution found so far. In certain nodes, a reduction procedure based on the dominance criteria will be used to place some items, and a range of simple heuristics such as First Fit Decreasing will be applied to complete the solution and generate new upper bounds.

A few years later, Korf (2003) published another branching algorithm – called Bin-Completion, BC – that makes use of the same dominance relations from Martello & Toth (1990b). In this case, however, the tree evolves in a different manner. Each child node is a different assignment of items to a bin and each level of the tree is a new bin. Since the possibilities for child nodes at any point are large (in principle, all subsets of the remaining items whose sum does not exceed the bin capacity), they largely benefit from applying dominance relations to reduce this number. An efficient implementation of this idea allowed Korf (2003) to outperform the results from the MTP algorithm.

This same algorithm has been revisited in Schreiber & Korf (2013). This new version – IBC, Improved Bin-Completion – introduced some improvements. These include generating only a fixed number of branches – re-running the algorithm to add the missing ones if no optimal solution was found – and an improved ordering of variables to search the tree in a more efficient way. Finally, they also include the idea of limited discrepancy search (Harvey & Ginsberg, 1995), which consists in searching first over the nodes that agree more with a heuristic solution and iteratively moving to nodes that have more discrepancies with the heuristic. This algorithm proved to be faster than the original BC. It also obtained very competitive results for instances with under 20 bins in the solution but, above that, the branch-and-cut-and-price algorithms – which we review next – seem to perform better.

Parallel to these exact branching algorithms, some researchers opted for using linear programming methods to solve the problem. The main formulation of the problem is due to Gilmore & Gomory (1961) and was presented for the cutting stock problem. The difference between the bin packing problem and the cutting stock problem is the assortment of the small objects. In the bin packing problem, they are strongly heterogeneous, while in the cutting stock problem they are weakly heterogeneous, meaning that one can expect a large number of repetitions of the same item type.

The variables of this model represent all the possible valid patterns for packing (or cutting) the items. The constraints then simply impose that the demands for the items are met. Of course, the number of such variables is too large to be considered in practical scenarios. To overcome this, Gilmore & Gomory (1961) propose to drop the integrality constraints on the patterns and solve the problem with only a few of them. This reduced problem is called the restricted master problem. After that, new variables are added iteratively when they are found to be good candidates to reduce the objective function – this process is called column generation or pricing. At some point, the pricing problem will not produce more interesting patterns and the process finishes, yielding an optimal solution for the master problem. However, as the master problem is an LP relaxation, the solution might be fractional. In the original work this issue is not resolved and instead the authors point to rounding procedures to find feasible solutions based on the fractional solution. In general, the lower bound provided by this relaxation is very strong and, in many cases, it is within 1 unit of the optimal solution. Recall that the objective involves the number of used bins and is, therefore, integer. If this happens, the instances are called IRUP (they have the so-called integer round-up property). Nevertheless, this is not the general case; there are non-IRUP instances as well and they have been the focus of research, see for example Rietz et al. (2002).

To address the problem of finding a valid integer solution to the problem, two further techniques are used. The first one consists in strengthening the formulation of the column generation with cutting planes. If this is not enough, some variables will still be fractional and the technique to follow is branching on these variables. These kinds of algorithms are called branch-and-cut-and-price and have been shown to be quite efficient for this problem, but not straightforward to implement. Both the branching strategies and the cuts applied to the formulations have been thoroughly discussed in the literature, and hence we refer to the review by Delorme et al. (2016), which references a collection of works in the area, and to the technical report by Belov & Scheithauer (2006), which has a gentle introduction to the underlying theory of this scheme. There are many examples in the literature of using Gilmore & Gomory's formulation with branch-and-cut-and-price, see for example Vanderbeck (1999) or the aforementioned Belov & Scheithauer (2006), both of them including different branching rules and improvements.

Finally, we mention an alternative formulation for the problem, given by de Carvalho (1999). This work is based on an arc flow formulation, where patterns are represented by acyclic graphs, and it is solved with a branch-and-price algorithm.
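The core loop of the column generation scheme is short enough to sketch. The following is a hedged illustration of the restricted master problem plus knapsack pricing, using PuLP with CBC for the LP and its duals (an assumption about tooling) and purely illustrative data; production branch-and-cut-and-price codes are far more involved:

```python
# Hedged sketch of Gilmore & Gomory's (1961) column generation for the
# cutting stock problem: restricted master LP + knapsack pricing by DP.
# PuLP/CBC is assumed for the LP and duals; the instance is illustrative.
import pulp

sizes = [3, 5, 7]      # item sizes (weakly heterogeneous types)
demand = [4, 2, 3]     # demand per item type
W = 10                 # roll (bin) capacity

# start from the trivial single-type patterns
patterns = [[W // s if k == i else 0 for k in range(len(sizes))]
            for i, s in enumerate(sizes)]

def solve_master(patterns):
    """Restricted master LP: minimise rolls subject to meeting demand."""
    prob = pulp.LpProblem("master", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{p}", lowBound=0) for p in range(len(patterns))]
    prob += pulp.lpSum(x)
    cons = []
    for i in range(len(sizes)):
        c = pulp.lpSum(patterns[p][i] * x[p]
                       for p in range(len(patterns))) >= demand[i]
        prob += c
        cons.append(c)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective), [c.pi for c in cons]  # duals

def price(duals):
    """Pricing: unbounded knapsack DP for the most valuable new pattern."""
    best, item = [0.0] * (W + 1), [-1] * (W + 1)
    for c in range(1, W + 1):
        best[c], item[c] = best[c - 1], -1
        for i, s in enumerate(sizes):
            if s <= c and best[c - s] + duals[i] > best[c]:
                best[c], item[c] = best[c - s] + duals[i], i
    pattern, c = [0] * len(sizes), W
    while c > 0:                       # recover the chosen pattern
        if item[c] == -1:
            c -= 1
        else:
            pattern[item[c]] += 1
            c -= sizes[item[c]]
    return best[W], pattern

while True:
    bound, duals = solve_master(patterns)
    value, pattern = price(duals)
    if value <= 1 + 1e-6:   # no column with negative reduced cost remains
        break
    patterns.append(pattern)

print("LP lower bound:", bound, "patterns:", patterns)
```

A new pattern enters the master problem while its reduced cost 1 − Σ_i π_i a_i is negative, i.e. while the pricing knapsack finds a pattern worth more than one roll under the current duals.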

2.2.2 Approximation algorithms

Another active field of research for one-dimensional problems is approximation algorithms. These are algorithms that can run in shorter time than exact methods and, while they do not guarantee an optimal solution, they are proven not to deviate from the optimum by more than a known ratio. A comprehensive survey of the recent developments in approximation algorithms for bin packing problems and its variants is given in Coffman et al. (2013), so we limit this review to the most relevant results.

The performance of approximation algorithms is usually assessed using the asymptotic worst-case ratio. Often denoted R_A^∞ for an algorithm A, it is the smallest number that guarantees that, for every possible instance of the problem L, there exists a constant K ≥ 0 such that

\begin{equation}
A(L) \leq R_A^{\infty} \, OPT(L) + K, \tag{2.15}
\end{equation}

where OPT(L) is the optimal solution for the list L and A(L) is the solution found by the algorithm A.

There are two main variants of bin packing that are relevant in this context, the offline and the online. In the former, the list of items to be packed is known beforehand, while in the second they need to be packed in the order they arrive. For both types, one of the classical algorithms is First Fit, which simply packs the next item in the lowest index bin where it is possible to do so. If no bin can fit the item, a new one is opened. For this algorithm, Johnson et al. (1974) showed the asymptotic worst-case ratio to be R_FF^∞ = 17/10. More recently, Dósa & Sgall (2013) have shown that the absolute worst-case performance ratio is also 17/10, and therefore it holds that, for any list of items L, FF(L) ≤ (17/10) OPT(L). This means that, for instance, if a list of items has an optimal solution of 10 bins, First Fit will never give a solution worse than 17 bins and, in fact, one can infer from Dósa & Sgall (2013) and others that these worst-case examples are usually difficult to find.

The offline problem has the advantage that the items are available from the beginning and the order of their placement can be decided. This is the basis of the First Fit Decreasing algorithm, where the items of the instance L are sorted in decreasing size prior to applying First Fit. This turns out to be very beneficial, reducing the asymptotic performance ratio from 17/10 to 11/9 (Johnson et al., 1974).
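Both heuristics are compact enough to sketch. The following is a minimal illustration of First Fit and First Fit Decreasing with purely illustrative data (bin capacity normalised to 1):

```python
# Hedged sketch of the First Fit and First Fit Decreasing heuristics for
# one-dimensional bin packing with capacity 1. The data is illustrative.

def first_fit(items, capacity=1.0, eps=1e-9):
    """Place each item in the lowest-index bin with enough residual room,
    opening a new bin when none fits. Returns the bins' contents."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + eps:
                b.append(item)
                break
        else:                      # no open bin fits: open a new one
            bins.append([item])
    return bins

def first_fit_decreasing(items, capacity=1.0):
    """Sort the items in decreasing size before applying First Fit, which
    improves the asymptotic ratio from 17/10 to 11/9 (Johnson et al., 1974)."""
    return first_fit(sorted(items, reverse=True), capacity)

items = [0.42, 0.63, 0.25, 0.5, 0.2, 0.7, 0.3]
print(len(first_fit(items)), "bins with FF")
print(len(first_fit_decreasing(items)), "bins with FFD")
```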

While these proofs were published some decades ago, the performance of approximation algorithms for bin packing has remained an active research topic. For example, the additive constant K = 6/9 for R_FFD^∞, which shows that for any list L, FFD(L) ≤ (11/9) OPT(L) + 6/9, remained an open question until Dósa's work in 2007 (Dósa, 2007).

As we have seen, there are a number of exact algorithms that work reasonably well for one-dimensional problems. When dealing with homogeneous assortments of small objects, researchers have been able to take advantage of patterns, resulting in very efficient methods. Unfortunately, these ideas can rarely be adapted to irregular shapes in higher dimensions, as we will see in the next sections.

2.3 Two-dimensional literature

Two-dimensional packing problems introduce the need for handling geometry. This is a major difficulty and it has a great impact on the methodology to solve these problems. A more detailed discussion about geometry in packing problems is given in Chapter 3. Among two-dimensional packing problems, we distinguish between regular and irregular shapes. Regular shapes are in general easier to handle from a geometrical perspective, but their associated packing problems are still challenging to solve and very relevant in practice. We give a brief review of these methods before moving on to irregular shapes, a topic that will be highly relevant for our work in Chapters 5 and 6.

2.3.1 Regular

Regular packing problems in two dimensions include shapes which can be described by few parameters. Research on two-dimensional regular packing entails early theoretical results on the packing of squares (Erdős & Graham, 1975), a large amount of publications on circle packing (Hifi & M'Hallah, 2009), and even some recent papers consider ellipses (Stoyan et al., 2016a). However, most literature has focused on rectangle packing problems. This research field is very extensive, and not directly related to our research, so we aim to give only a flavour of some representative works in the field. A comprehensive review is given in Lodi et al. (2002).

The most studied problems in the area are the open dimension and the bin packing problems (Lodi et al., 2002). The first attempts of researchers involved extending the pattern-based techniques of one-dimensional bin packing to the problem. Gilmore & Gomory (1965) attempt to extend the cutting stock column generation formulation, and study the case where patterns are only allowed if they have guillotine cuts (the case where a machine would make the cuts from one edge to another, unable to stop in the middle of a large object). Hadjiconstantinou & Christofides (1995) presented a formulation for a knapsack problem involving rectangles. The key idea here was to define two binary variables for each rectangle in each possible placement point (one representing the x coordinate and another the y). This formulation was solved using a tree-search algorithm. In Fekete et al. (2000a,b,c) the authors present a novel way of modelling the problem based on graphs. The authors develop classes of packings, as a way of grouping together similar packing layouts. These are represented as interval graphs, and this allows them to use their properties in a tree-search algorithm that can solve some instances to optimality.

Researchers have also implemented metaheuristic methods to tackle larger instances. Metaheuristics have the advantage of being very versatile and can often include more realistic aspects of the problem. An example of this is the scheduling-related problem from Bennell et al. (2013), solved by a genetic algorithm. On the other hand, they can also benefit from efficient implementations to solve large instances, such as the 100,000 rectangles packed in less than 20 seconds by Imahori and Yagiura's implementation of the best fit heuristic (Imahori & Yagiura, 2010).

2.3.2 Irregular

Irregular shapes are differentiated from the regular ones by their need for more parameters to represent their shape. For the purposes of the review, we can consider them to be simple polygons that sometimes are allowed to contain holes and, in general, are at most allowed to rotate by a finite set of angles. If we review a work that does not follow these assumptions, we will note it. The major exception to these assumptions are the phi-objects, which we review at the end of this section.

The majority of two-dimensional irregular packing research is concentrated on the open dimension problem (often called strip packing or nesting) and, therefore, we focus on this problem in this review. One of the few exceptions is the bin packing problem, which has recently gathered some more attention (López-Camacho et al., 2013; Martinez-Sykora et al., 2016; Abeysooriya et al., 2018).

While we provide an introduction to geometry in packing problems in Section 3.1, it is difficult to understand the developments in irregular packing without first introducing one key concept: the no-fit polygon. Consider a pair of pieces (simple polygons) p and q, with the position of p fixed. We can give a loose definition of the no-fit polygon of p and q as the set (usually a polygon) that has the following two properties: its interior contains the points where, if q is placed, the two pieces overlap; and its frontier contains the points where, if q is placed, the pieces are in contact. We illustrate the concept in Figure 2.3.

Figure 2.3: On the top row, two pieces p and q, and their no-fit polygon, NFP(p, q). The placement of the pieces is determined by their lower left corner point (highlighted). On the bottom row, two examples of its usage: a contact position on the right (q is placed on the boundary of NFP(p, q)) and an overlapping position on the left (q is placed in the interior of NFP(p, q)).

A more formal definition, as well as methods to calculate them, are presented in Section 3.1.2.1. It is important to bear in mind that no-fit polygons can be pre-calculated for each pair of pieces, and that their boundaries are lines (or points). Both of these properties will prove to be the basis of many of the exact methods that we review in the next section.
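To make the concept concrete, the following is a minimal sketch of how a pre-computed no-fit polygon could be queried during a packing algorithm. It uses the shapely library purely for illustration; the polygon coordinates and the helper name `classify_placement` are hypothetical, not taken from any of the works reviewed here.

```python
# A sketch of querying a pre-computed no-fit polygon (NFP).
# Requires shapely; the coordinates are illustrative only.
from shapely.geometry import Point, Polygon

# Suppose NFP(p, q) has been pre-computed for a fixed p and a movable q.
nfp_pq = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])

def classify_placement(nfp: Polygon, reference_point: tuple) -> str:
    """Classify a candidate placement of q by its reference point."""
    pt = Point(reference_point)
    if nfp.contains(pt):   # strictly inside: the pieces overlap
        return "overlap"
    if nfp.touches(pt):    # on the boundary: touching placement
        return "contact"
    return "separated"     # outside: no contact and no overlap

print(classify_placement(nfp_pq, (2, 1)))   # overlap
print(classify_placement(nfp_pq, (4, 1)))   # contact
print(classify_placement(nfp_pq, (6, 1)))   # separated
```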

2.3.2.1 Exact methods

Due to its complexity, there are only a few approaches in the literature that develop exact methods. One approach is not to solve the problem to optimality, but rather to use an exact linear model to solve compaction or separation problems. The compaction problem consists in, given a feasible layout, fixing some parts or features of it and solving a reduced problem that aims to improve the solution quality while maintaining the fixed original features. Analogously, a separation problem starts from an infeasible solution, fixes some parts or features and then solves a reduced version of the problem aiming to make the solution feasible or, at least, bring it closer to feasibility according to some measure.

These models, available among others in Li & Milenkovic (1995), Bennell & Dowsland (2001) and Gomes & Oliveira (2006), define the reduced problem by maintaining a similar position among pieces and optimise the solution within that layout. To do this, they start with a packing layout and find the no-fit polygons of the pieces that are adjacent. To maintain linearity in the model, pieces are only allowed to move in a convex region, which is found with a heuristic algorithm using the information of the no-fit polygon. All of these works solve a linear model and are therefore restricted to moving pieces in convex regions only. Following the same idea, Daniels et al. (1994) and later Fischetti & Luzzi (2009) improved this model by partitioning the whole complement of the no-fit polygon into convex regions. They introduce binary variables and big-M constraints to decide to which of those regions the pieces are allocated. Finally, Alvarez-Valdes et al. (2013) developed a new way of defining such regions (by slicing the complement of the no-fit polygon horizontally). This technique, together with some improvements on the branching strategies of Fischetti & Luzzi (2009), further strengthened the formulation and extended the applicability of the model. A different modelling technique is the one from Toledo et al. (2013). They discretise the layout and only allow the pieces to be located in a reduced set of points. This model can solve larger instances than the previous ones, but its solutions are only optimal for the chosen discretisation and an increase in its resolution has a great impact on the solving times. In Leao et al. (2016) the authors go a step further and, instead of a grid of points, pieces can be placed along horizontal lines, so they label this approach as semi-continuous.
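As a hedged illustration of the big-M technique just described (in our own generic notation, not the exact constraints of any of the cited papers), suppose the complement of the no-fit polygon of pieces $p$ and $q$ has been partitioned into convex regions $R_1, \dots, R_K$, each described by linear inequalities $A_k v \le b_k$ in the relative position $v$ of $q$ with respect to $p$. The disjunction "$v$ lies in some $R_k$" can then be written as

\[
A_k v \le b_k + M(1 - z_k) \quad (k = 1, \dots, K), \qquad \sum_{k=1}^{K} z_k = 1, \qquad z_k \in \{0, 1\},
\]

where $M$ is a suitably large constant: the single region with $z_k = 1$ must contain $v$, while the constraints of the remaining regions are deactivated, guaranteeing a non-overlapping relative placement.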

Moving away from linear and mixed-integer linear programs, Imamichi et al. (2009) formulate the separation problem (minimising overlap) as an unconstrained non-linear programming problem, which is used within a metaheuristic approach. Aiming for a global optimum, Jones (2013) proposes to replace the pieces by a set of inscribed circles, as seen in Figure 2.4.

Figure 2.4: Approximation of a piece by inscribed circles (Extracted from Jones (2013))

Based on these representations, they solve a quadratic program to optimality that allows for continuous rotations of the pieces. Since the circles do not completely cover the original pieces, the solution might be infeasible. If this happens, the algorithm is combined with a local search to find feasible solutions and a refinement procedure that adds circles in the conflicting areas and re-solves the optimisation problem iteratively until a feasible solution is found. This procedure is able to find a global optimum, but only for problems with up to four pieces.

Finally, a radically different modelling technique is the one used in Bennell et al. (2015). Here, the authors dismiss polygonal representations in favour of phi-objects. These are mathematical descriptions of shapes in the form of parametric functions, which can include circular arcs and be combined to represent complex shapes (we review them in detail in Section 3.1.1). Taking advantage of this representation, the authors present a model for the clustering of two pieces, together with a solution strategy to find local and sometimes global extrema. This technique has also been used for compacting layouts (Stoyan et al., 1996).

2.3.2.2 Metaheuristics

As a consequence of the complexity of the problem, a lot of research has concentrated on metaheuristics (and even on using manual human input, see Annamalai Vasantha et al. (2016)). A good categorisation of metaheuristic algorithms is presented in Bennell & Oliveira (2009), where they are divided between algorithms that search over a piece sequence and algorithms that search for solutions by moving the pieces within a layout. The algorithms searching over a piece sequence rely on a constructive algorithm that places the pieces one by one according to a rule. One of the most used constructive rules is the bottom-left strategy (see, for example, an efficient implementation in Dowsland et al. (2002)). This rule simply places each piece on the feasible position which minimises its x and y coordinates on the layout. One efficient implementation of bottom-left was developed by Burke et al. (2006a). In addition, they were able to handle pieces with holes and circular arcs.
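The following is a minimal sketch of a bottom-left rule on a rasterised strip, where pieces are binary matrices (a representation we return to in Section 3.1.3). It is an illustration under our own simplifying assumptions (left-most position first, bottom-most as tie-break), not a reproduction of any of the implementations cited above.

```python
# A sketch of a bottom-left placement rule on a rasterised strip.
# Pieces and the strip are boolean numpy arrays; True marks occupied cells.
import numpy as np

def bottom_left_place(strip: np.ndarray, piece: np.ndarray):
    """Return the bottom-left-most feasible (row, col) for `piece`, or None."""
    H, W = strip.shape
    h, w = piece.shape
    # Scan columns first (left-most), then rows; here row 0 is the bottom.
    for col in range(W - w + 1):
        for row in range(H - h + 1):
            window = strip[row:row + h, col:col + w]
            if not np.any(window & piece):       # no cell clash -> feasible
                strip[row:row + h, col:col + w] |= piece
                return (row, col)
    return None  # piece does not fit anywhere

def pack_sequence(pieces, height=20, width=100):
    """Pack `pieces` in order with the bottom-left rule; return used length."""
    strip = np.zeros((height, width), dtype=bool)
    for piece in pieces:
        if bottom_left_place(strip, piece) is None:
            raise ValueError("strip too short for this sequence")
    used_cols = np.flatnonzero(strip.any(axis=0))
    return used_cols.max() + 1 if used_cols.size else 0
```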

Constructive algorithms usually have a strong dependence on the order in which they place the pieces. We illustrate this in a simple three-piece example in Figure 2.5, where pieces are packed to the bottom-left-most position available.

Figure 2.5: Layouts obtained by packing the different sequences (ABC, ACB, BAC, BCA, CAB, CBA) of a problem with three pieces A, B and C with a bottom-left rule. The shortest packing length is given by the first packing sequence, ABC.

Algorithms searching over the sequence try to exploit this fact. They consist in a heuristic acting over the possible orderings of the sequence; see the sketch after this paragraph. In Gomes & Oliveira (2002) the authors propose a constructive approach similar to Dowsland et al. (2002) and then apply a heuristic algorithm that performs exchanges in the placement sequence. Dowsland et al. (1998) start with a random ordering of pieces that are packed on the left side. After that, pieces are sorted by their rightmost coordinate and are packed again towards the right side. This process, called jostling, tries to simulate the shaking of a container one would do to better fit the items inside it. Oliveira et al. (2000) develop a new constructive algorithm, TOPOS, which selects the best piece to place from the list based on different criteria, such as the length or the area of the enclosing rectangles of the pieces in the partial layout. This algorithm was later improved by Bennell & Song (2010), who used it as part of their beam search heuristic. This algorithm provides a tree search framework similar to branch-and-bound based on local and global evaluations of the sequence placements.
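As a hedged sketch of the simplest possible search over sequences, one can wrap the `pack_sequence` routine from the earlier snippet in a local search that swaps two positions in the ordering; the pairwise-swap neighbourhood is our own choice for illustration, not that of any cited paper.

```python
# A sketch of a local search over placement sequences (pairwise swaps).
import random

def sequence_search(pieces, iterations=1000):
    order = list(range(len(pieces)))
    best_len = pack_sequence([pieces[i] for i in order])
    for _ in range(iterations):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # swap two positions
        new_len = pack_sequence([pieces[k] for k in order])
        if new_len <= best_len:
            best_len = new_len                       # keep the swap
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best_len
```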

A different strategy towards packing is to start from an initial solution and improve it by moving pieces within the layout. These methods usually start with a feasible solution and shrink the packing length, creating overlap. The problem then becomes to minimise this overlap and, if successful, the process starts again. Bennell & Dowsland (1999) use this strategy in a tabu thresholding algorithm that aims to minimise overlap, represented as the horizontal displacement needed to separate two pieces (horizontal penetration depth). In a later work, Bennell & Dowsland (2001) introduced a compaction model in the tabu search, enhancing its results. A similar hybrid approach was developed in Gomes & Oliveira (2006). They combine a simulated annealing algorithm that swaps and moves pieces in the layout with a compaction and a separation model. Using the separation model allows them to avoid quantifying the overlap that is sometimes generated by the movements, as only feasible solutions are accepted during the optimisation.
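The shrink-and-repair scheme described above can be summarised in a few lines of hedged pseudocode; the helpers `strip_length`, `clamp_to_length`, `minimise_overlap` and `total_overlap` are hypothetical placeholders for the measurement, shrinking and separation routines of any particular paper.

```python
# A sketch of the generic shrink-and-repair strategy for strip packing.
def shrink_and_repair(layout, shrink_factor=0.97, max_rounds=50):
    best = layout
    for _ in range(max_rounds):
        target = strip_length(best) * shrink_factor   # shrink the strip
        candidate = clamp_to_length(best, target)     # may create overlap
        candidate = minimise_overlap(candidate)       # separation phase
        if total_overlap(candidate) > 0:
            break                                     # could not repair
        best = candidate                              # feasible again
    return best
```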

One of the few works that does not use no-fit polygons is Egeblad et al. (2007). They quantify the overlap accurately, by computing the exact overlap area between two pieces. They propose a local search that finds the least overlapping position of a piece in the horizontal or vertical direction and escapes local minima by penalising the objective function (guided local search). Umetani et al. (2009) use no-fit polygons to determine the directional penetration depth (similar to Bennell & Dowsland (1999)) between two pieces and try to minimise the overlap in the solutions by means of local search movements in horizontal and vertical directions. Again based on no-fit polygons, Imamichi et al. (2009) propose a non-linear programming model to minimise overlap. This model has rotations fixed and, to account for them, it is embedded within an iterated local search framework that swaps and rotates pieces before minimising the overlap. Leung et al. (2012) propose an algorithm with a local search phase based on piece swaps and Imamichi's separation algorithm. If the local search is trapped, a tabu search is applied to escape local optima.

Martinez-Sykora (2013) presented an iterated greedy algorithm based on a destruction and a construction phase. In the destructive phase, tight parts of the layout are identified by means of an LP model and some pieces are removed from them, only to be inserted again in the constructive phase. The construction is handled by inserting pieces using the MIP model from Alvarez-Valdes et al. (2013). The best available results for the standard instances in the literature also use this technique and are due to a sophisticated Cuckoo Search algorithm proposed by Elkeran (2013). One novelty in Elkeran's procedure is to use the information of the no-fit polygon to pre-process algorithmically some good fits for pairs of pieces, such as the ones shown in Figure 2.6.

Figure 2.6: Pairs of pieces matched using information from the no-fit polygon (Extracted from Elkeran (2013))

These initial clusters are used in the constructive algorithm; after that, the cuckoo search looks to improve the solution by performing movements in the layout and attempting to minimise the overlap based on penetration depth. More recently, Wang et al. (2017) proposed an algorithm aimed at the satellite module packing problem. The algorithm searches over the layout, moving and swapping pieces according to an ant colony's labour division scheme. While it cannot achieve higher utilisations than the best from Elkeran (2013), it seems to find better average results.

Unlike the one-dimensional literature, the exact methods available for two-dimensional problems have a much more limited applicability. Nevertheless, they have proved to be a valuable component in some metaheuristic algorithms as compaction or separation procedures. From the review it is clear that the no-fit polygon has been a key development for two-dimensional irregular packing, as it is at the core of most of the exact and metaheuristic methods we have reviewed. This highlights the importance of having such a tool that, as we will see in the next section, does not have a readily available extension to three dimensions.

2.4 Three-dimensional literature

In Chapter 6 we study a three-dimensional irregular packing problem and, therefore, prior research in this area is highly relevant for this work. We once more distinguish between the regular and irregular case. While both cases share the same objective, the handling of the geometry is critical for these problems, and the techniques used can be very different. Regular problems can exploit some of the regularities of the shapes – such as maximal spaces – that are not clearly applicable in the irregular case.

2.4.1 Regular

The most studied problem in regular three-dimensional packing is the container loading problem. This problem entails the placement of different rectangular boxes within rectangular containers. The objective can be to pack a set of available boxes in the minimum space (a bin packing or cutting stock problem), or to put the maximum possible amount of boxes in a fixed space (a knapsack or placement problem). A comprehensive review of the state of the art is available in Zhao et al. (2016). Since this problem has a very strong application in industry, many authors have included in their studies additional constraints to make the algorithms respond to practical needs. Therefore, there are many studies considering load stability, scheduling or weight distribution constraints. The review by Bortfeldt & Wäscher (2013) analyses most of the constraints often used in the literature. In this section, we just give a flavour of the available techniques to solve these problems rather than a thorough review, as this topic is not directly relevant to the remainder of the thesis.

The solution approaches can be divided into heuristic and exact methods. Among the heuristics, if the number of box types is limited, the main strategies involve wall building or layer building. These techniques consist in creating walls or layers of types of boxes which are later joined together to construct a full layout (George & Robinson, 1980; Bischoff & Marriott, 1990). Some researchers opt for a step-by-step approach that first constructs simpler blocks and then packs the blocks together (see Liu et al. (2011) for an example of this for the placement problem), or work with the so-called maximal spaces (Lai & Chan, 1997) originated as the layout is being built (see Parreño et al. (2008) for an example on the single-container loading problem). Another basic heuristic approach is stack building, which consists in creating 'towers' of items that are placed in the container in a second step. This approach is used in Gehring & Bortfeldt (1997) to solve single-container loading, where the two-dimensional problem generated in the second step is solved with a genetic algorithm. Based on these placement heuristics, the search for better solutions is often guided by metaheuristic algorithms. In this sense, Liu et al. (2011) use a tabu search, Parreño et al. (2008) a GRASP and Lai & Chan (1997) a simulated annealing. Other examples of the usage of metaheuristics are the genetic algorithm employed in Gonçalves & Resende (2013) or the beam search from Araya & Riff (2014).

Regarding exact methods, a few mixed integer linear programming formulations are available in the literature for the different versions of the problem. Padberg (2000) models the packing of a single container by means of binary decision variables to determine the rotation and the relative positions of the boxes, and continuous variables $(x_i, y_i, z_i)$ for the position of the reference point of the $i$th box. For the bin packing problem, Hifi et al. (2010) introduce a mixed integer linear programming formulation and lower bounds for the case of identical containers and no rotations.

If the problem allows more than one container and these are of different types, a popular approach is to generate and select container packing patterns. For example, Zhu et al. (2012) use this technique in a column generation scheme, where the columns represent packing patterns. Since the pattern generation (their sub-problem) is still difficult to solve, their approach is to relax the overlap constraints and deal with them later by means of a heuristic algorithm.

2.4.2 Irregular

For the irregular case, the handling of the geometry is one of the most evident challenges. The lack of a mainstream tool such as the no-fit polygon in the 3D literature has caused researchers to use a variety of different methods to model the problem. These approaches have been strongly influenced by the choice of geometry representation, which we discuss in detail in Section 3.1. We find three main approaches to the geometry in the literature: polygonal meshes, phi-objects, and approximations and decompositions. In general, the best choice of how to handle the geometry remains an open question, as no study has directly compared the main available tools.

The polygonal mesh was one of the first approaches used for packing in three dimensions. Ikonen et al. (1997) use it in a genetic algorithm that places pieces according to a certain order, orientation (from a finite set) and 'attachment' points. These are the points where one piece might contact another one, and are provided as part of the input data. Dickinson & Knopf (1998) proposed a constructive algorithm, where each piece is packed in a position so as to maximise the density of the packing. Egeblad et al. (2009) use this representation in a general purpose algorithm which explores the solution space by axis-aligned movements, allowing overlapping intermediate positions. Also based on polygonal meshes, Liu et al. (2015) developed an efficient constructive algorithm which allows rotation of objects, based on a minimal potential energy placement.

A different line of research is the one explored in Stoyan et al. (2005), where they use phi-objects to represent the geometry (Stoyan & Yaskov, 1983; Scheithauer et al., 2005; Bennell et al., 2010). Analogous to the two-dimensional case, the phi-objects are mathematical descriptions, based on parametric functions that can consider curved surfaces, of the objects to be packed. The theory is based on a few simple objects called primary phi-objects. There are formal mathematical descriptions for them and parametric functions (called phi-functions) that can test for overlap efficiently and are capable of allowing for rotation. More complicated objects can be accurately represented and handled by combining primary phi-objects and their phi-functions. With this representation, it is possible to formulate most packing problems as non-linear programs. The strength of this approach relies on the precise analytical description of the objects.

However, the complexity of the models makes them hard to solve. In general, for the instances available in the literature, the model can only be solved to find a local optimum and the resulting packing is not very successful compared to other, simpler metaheuristic approaches, as we will see in Section 6.7.

A more recent development, the quasi-phi-functions (Stoyan et al., 2016a), aims to simplify the phi-functions. These functions are simpler at the expense of introducing new parameters and losing some of the generality of the phi-functions. The key difference is that, while phi-functions give a full description of the overlapping positions of two objects, quasi-phi-functions do not. In essence, given two objects and their positions, the quasi-phi-function can guarantee that the two objects do not overlap if it takes a positive value, but it does not provide any information about what happens if it takes a negative value. In the work by Romanova et al. (2018), they have been used to pack non-convex polyhedra, allowing for continuous rotations. The procedure involves the generation of an initial solution and a compaction by means of non-linear programming, using quasi-phi-functions to avoid overlap.

One characteristic of phi-objects is that complicated shapes need to combine many primary objects to represent the final shape. A similar problem appears with polygonal meshes, since the more features one object has (irregularities, holes, etc.) the more faces and vertices it needs for a good representation. Both the number of primary objects in a phi-object and the number of vertices or faces in a mesh have a direct impact on the computational cost of the packing algorithms using them. To overcome this, some researchers opt for approximating the three-dimensional models by discrete sets of smaller regular shapes. We review some literature using this methodology for packing problems in the following paragraphs.

One frequent way of discretising the shapes is to approximate them by cubes, or voxels, as they are usually referred to. This technique is used in Jia & Williams (2001), where they describe a simulation-based packing algorithm motivated by particle packing. In their algorithm, particles can randomly move and rotate as long as they do not overlap with each other. In a later work, this algorithm was made more computationally efficient by Byholm et al. (2009). They take advantage of the discretised space to add some computational tricks to the shape representation. This includes, for example, removing some voxels that are not going to play a role in the final packing result; doing this provides a great computational time advantage.

A related idea, slightly similar to voxels, is to represent or approximate the pieces by joining simpler shapes (but not necessarily discretising the space). In Edelkamp & Wichern (2015) they approximate shapes by spheres that are organised in a tree structure. This representation is then used in a simulated annealing algorithm that finds 3D printing layouts of irregular objects, allowing free rotation. Cagan et al. (1998) use rectangular solids (not necessarily cubes) for the decomposition and develop a simulated annealing algorithm based on simple movements and rotations for component layout in various applications.

One important parameter affecting the decompositions or discretisations is the resolution, or the size of the basic units used to approximate the piece. If it is too large, the approximation will not be good and some geometrical aspects can be lost. For example, holes or concavities smaller than a voxel cannot be represented. On the other hand, if the resolution is too small, the model will require a lot of memory to be stored in the computer and the packing algorithms will be very slow. Some approaches (Cagan et al., 1998; Edelkamp & Wichern, 2015) use tree structures as a way to overcome this problem. The tree structure has very coarse representations at the top, but can get finer and finer if the features require it. If the basic volume units used are cubes, these trees are called octrees and each cube is divided into eight identical smaller cubes at each level. Voxelisation and octrees are used in a variety of applications, such as computer graphics and simulations, and are an active topic of research on their own (Baert et al., 2013; Schwarz & Seidel, 2010), including closely related variations such as chain codes (Lemus et al., 2015; Sánchez-Cruz et al., 2014).

Once again, we see that the literature for the regular and irregular case has very little in common. In addition, the irregular literature seems to be taking its first steps. While we find some works from the nineties, they are in general scarce and difficult to compare (mostly solving different instances). We will revisit this topic in Chapter 6 and present some tools and solution approaches for the open dimension problem.

Chapter 3

Methodology

In this chapter we review some of the methodologies relevant to our research questions. We structure this chapter in two parts. The first one, Section 3.1, is related to geometry, since the packing of objects in two and three dimensions has a strong geometric component to ensure that the shapes of the packed objects are considered. In the second part, Section 3.2, we review optimisation methods, as our three research papers involve solving different optimisation problems.

3.1 Geometry representations

When working in two or three dimensions the geometry needs to be represented in some way. Real-world objects are often different from the mathematical idealisations that are easy to handle by computers, so researchers need to find a trade-off between accuracy and complexity when representing them. In the case of regular shapes such as rectangles, circles or spheres, parametrised mathematical representations of the objects are common; however, for irregular shapes there is a range of techniques that can be applied, which we aim to summarise here. For the two-dimensional case, there is a comprehensive review of geometry for packing in Bennell & Oliveira (2008), but no three-dimensional equivalent seems to be available in the literature.

Each representation has a strong influence on the tools that are used to model the problem and, especially, on how to avoid overlap between pieces. In this section, we review the three main representations of objects – phi-objects, polygonal representations and discrete representations – and the tools associated with them.


3.1.1 Phi-objects

The phi-objects are analytical descriptions of mathematical objects. Formally, they are defined as point sets which have the same homotopic type as their interior (Bennell et al., 2010). This wide definition excludes only constructions of objects with peculiar features, such as isolated or removed points. However, in practice, the theory is based upon primary phi-objects. The primary phi-objects are objects such as circles, spheres, rectangles, regular polygons, polyhedra and convex polygons, and are always defined by a mathematical formula. For example, a disc in $\mathbb{R}^2$ with its centre at $(0, 0)$ and radius $r$ can be defined by the following formula:

\[ D(0, 0) = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 - r^2 \le 0\} \tag{3.1} \]

Its complement $D^*$ is also a phi-object,

\[ D^*(0, 0) = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 - r^2 \ge 0\} \tag{3.2} \]

By means of intersection, union and complement operations, phi-objects can be combined into composed objects. For example, if we consider a disc $D_1$, the complement $D_2^*$ of a smaller disc $D_2$ and a rectangle $R$, we could represent the Roundel, the London Underground logo, by intersecting $D_1$ with $D_2^*$ and taking the union with $R$. We illustrate this in Figure 3.1.

A notable result by Chernov et al. (2012) is that any 2D object that can be completely described by circular arcs and linear segments can be decomposed into four types of basic objects. These basic phi-objects are convex polygons, circular segments, hats and horns. Hats are circular arcs with two tangent lines at their ends, and horns are composed of two circular arcs, one convex and one concave, and a line. These basic objects are shown in Figure 3.2.

A great advantage is that phi-objects have also been developed for 3D by Scheithauer et al. (2005) and Stoyan & Chugay (2012), based on the same theoretical foundation that supports the 2D phi-objects. For a pair of phi-objects, it is possible to derive the so-called phi-functions. These functions have a negative value if the objects overlap, are zero if the objects are in contact but not overlapping, and are positive if they are apart from each other. For two dimensions, some publications indicate how to obtain them for primary phi-objects; see for example Stoyan et al. (2001) or Bennell et al. (2010). Let us show an example of a phi-function. If we consider two discs, $D_1$ and $D_2$, with radii $r_1$ and $r_2$ respectively, a possible phi-function is given by the following equation (Stoyan et al., 2001):

Figure 3.1: Creation of more sophisticated phi-objects by composing various primary phi-objects.

\[ \Phi(x_1, y_1, x_2, y_2) = (x_2 - x_1)^2 + (y_2 - y_1)^2 - (r_1 + r_2)^2 \tag{3.3} \]

For a fixed value of $(x_1, y_1)$, the equation $\Phi(x_1, y_1, x_2, y_2) = 0$ describes a circle of radius $r_1 + r_2$ centred at $(x_1, y_1)$, traced by the points $(x_2, y_2)$. It is easy to see that these are the positions where one could locate the centre of $D_2$ so that the two discs touch but do not overlap, in accordance with the definition of a phi-function.

A special type of phi-function is the normalised phi-function. It has the property that, when the objects are not overlapping, the value of the function indicates the Euclidean distance separating the objects. If we revisit the example with the two discs, an alternative phi-function that is normalised would be (Stoyan et al., 2001):

\[ \Phi_N(x_1, y_1, x_2, y_2) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} - (r_1 + r_2) \tag{3.4} \]
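A minimal sketch of how equations (3.3) and (3.4) could be evaluated in code; the function names are ours, and the printed values follow directly from the formulas.

```python
# A sketch evaluating the phi-function (3.3) and its normalised form (3.4)
# for two discs: negative = overlap, zero = contact, positive = apart.
import math

def phi_discs(x1, y1, r1, x2, y2, r2):
    return (x2 - x1) ** 2 + (y2 - y1) ** 2 - (r1 + r2) ** 2

def phi_discs_normalised(x1, y1, r1, x2, y2, r2):
    # When positive, this equals the Euclidean gap between the discs.
    return math.hypot(x2 - x1, y2 - y1) - (r1 + r2)

print(phi_discs(0, 0, 1, 3, 0, 1))             # 5.0  -> apart
print(phi_discs_normalised(0, 0, 1, 3, 0, 1))  # 1.0  -> gap of 1
print(phi_discs_normalised(0, 0, 1, 1, 0, 1))  # -1.0 -> overlapping
```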

One interesting feature is that phi-functions can include more detail; for example, they can allow for rotation of the pieces. Particularly interesting is their ability to support continuous rotations of the phi-objects (Chernov et al., 2012; Stoyan & Chugay, 2014).

Figure 3.2: Basic objects: (a) convex polygon K, (b) circular segment D, (c) hat H, and (d) horn V. (Extracted from Chernov et al. (2012))

This means that for two objects, a single evaluation of their phi-function, depending on their reference coordinates and rotation angle, would indicate whether they are overlapping or not.

The great advantage of phi-functions is that they provide a very accurate mathematical description of objects and are fast to evaluate when checking intersections between simple objects. However, their complexity strongly depends on the shape of the object. By this we mean that decomposing an object into a series of phi-objects can be very difficult and, if achieved, the resulting composed phi-object can be very challenging to deal with. Furthermore, the calculation and evaluation of the associated phi-functions also gets increasingly difficult.

Based on the same representation of objects, the quasi-phi-functions (Stoyan et al., 2016b) are a family of functions that offer similar properties while having a simpler form.

Let us consider two arbitrary phi-objects, with placement parameters $u_1$ and $u_2$ (in our previous example these would have been $u_1 = (x_1, y_1)$ and $u_2 = (x_2, y_2)$). The formal definition is that a function $\Phi'(u_1, u_2, u')$ is a quasi-phi-function if $\max_{u' \in U} \Phi'(u_1, u_2, u')$ is a phi-function (Romanova et al., 2018). One immediate consequence of this definition is that if, for some positions $u_1$ and $u_2$ of the phi-objects, we are able to find some $u'$ such that $\Phi'(u_1, u_2, u') \ge 0$, then the value of the phi-function $\max_{u' \in U} \Phi'(u_1, u_2, u')$ is also greater than or equal to zero and therefore the objects do not overlap. However, if we find a $u'$ such that $\Phi'(u_1, u_2, u') \le 0$, it is not guaranteed that the objects are overlapping, as there might be other values in $U$ that make the quasi-phi-function positive. When constructing a quasi-phi-function, one must define what these parameters $u'$ do, as well as what their domain $U$ is. For example, in Romanova et al. (2018) they are used to determine the parameters of a half-space, and the quasi-phi-functions for convex polyhedra are then derived on the basis that, if the polyhedra do not overlap, each of them should be in a complementary half-space.
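To make the half-space idea concrete, the following is a hedged sketch in our own notation, not the exact function from Romanova et al. (2018). Let $u' = (\psi, g)$ parameterise a plane with unit normal $\psi$ and offset $g$, and let $V_1$, $V_2$ be the vertex sets of two convex polyhedra placed according to $u_1$, $u_2$. A function in the spirit of a quasi-phi-function is

\[
\Phi'(u_1, u_2, u') = \min\Big( \min_{p \in V_1} (\psi \cdot p - g),\; \min_{q \in V_2} (g - \psi \cdot q) \Big),
\]

which is non-negative precisely when the plane separates the two polyhedra; maximising over $u'$ therefore searches over all candidate separating planes, and the maximum is non-negative exactly when the convex polyhedra do not overlap.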

While introducing more parameters in the optimisation can make the problem more challenging, the resulting quasi-phi-functions have simpler forms than phi-functions, so they can extend the scope where phi-objects can be applied. Despite being a very recent development, they have already been used successfully with objects such as ellipses and ellipsoids (Stoyan et al., 2016a), and in three dimensions with convex polytopes (Pankratov et al., 2015) and general polyhedra (Romanova et al., 2018).

3.1.2 Polygonal representations

Polygonal representations, often called polygonal meshes in 3D, are a boundary representation of geometric objects. They consist of a collection of linear edges in 2D or planar faces in 3D that describe the boundary of the object. They are the most popular representation technique and, since the development of the no-fit polygon (Art, 1966), they have been extensively used in 2D irregular packing problems.

Both polygons and polyhedra base their representations on a list of vertices, but this is not enough. For two-dimensional polygons, the order of the vertices is also required, assuming they are connected by line segments. This ordering also provides an orientation that is useful for calculating intersections or no-fit polygons, as we will detail later. In the case of polyhedra, a list of faces is also required. Each face is determined by three or more vertices from the vertex list, provided that they are not aligned. Along with the faces, a normal vector is required to determine which side corresponds to the outside of the polyhedron and which one to the inside. We show an example of a polygon and a polyhedron in Figure 3.3.

Figure 3.3: Example of a polygon (left) from the shapes0 instance (Oliveira et al., 2000) and a polyhedron with triangulated faces (right) from an instance in Stoyan et al. (2005)

Neither of these concepts is unequivocally defined in the literature and sometimes authors include different assumptions. For example, polygons are usually required to have non-intersecting edges (simple polygons), but often they are allowed to contain holes. This is usually driven by the needs of the geometric tools used to perform the packing optimisation.

The rotation of both polygons and polyhedra can be calculated by applying trigonometric transformations to the list of vertices, maintaining their order or face description intact. While this is a simple procedure mathematically, its implementation in computer code often leads to instability, due to the floating-point inaccuracies introduced by the trigonometric operations. While accentuated by them, this problem is not exclusive to rotations: most algorithms involving direct geometrical operations will suffer from inaccuracies. This was illustrated, for example, by Hoffmann (1989), who offers a discussion about the accuracy of geometric computations and provides some examples of errors arising with line intersections.
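A tiny, hedged demonstration of the kind of inaccuracy meant here: rotating a vertex by many small steps does not return it exactly to its starting position, so exact equality tests on coordinates become unreliable.

```python
# A sketch of floating-point drift under repeated rotations.
import math

def rotate(x, y, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

x, y = 1.0, 0.0
for _ in range(360):                   # 360 rotations of 1 degree each
    x, y = rotate(x, y, math.radians(1.0))

print(x, y)                    # close to (1.0, 0.0) but not exact
print((x, y) == (1.0, 0.0))    # False: exact comparisons are unsafe
```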

3.1.2.1 Avoiding overlap in two-dimensions

If we look at the packing literature using polygonal representations, there are two main approaches for deciding where to place the pieces geometrically and for dealing with the overlap constraints. The first one is to use direct trigonometry to test, and sometimes quantify, whether two pieces overlap. The other one is to use the more sophisticated no-fit polygon, a construction that provides a complete description of the points in space where the two pieces overlap.

Direct trigonometry

Using direct trigonometry, there are three main approaches in the literature to deal with overlap. The first one is to perform a test between two pieces that indicates if they are in an overlapping position or not. The second one is to compute the full intersection between two pieces. This, of course, contains the previous approach but can also provide more information, such as the overlap area and even its shape. The third option is an intermediate approach that consists in estimating the overlap amount without computing the intersection explicitly. We review the three of them in the following paragraphs.

The question of whether two polygons are in an overlapping position or not can be answered by performing a series of segment intersection and point containment tests. Typically, implementations start from easier tests (overlap of bounding boxes) to discard the trivial cases first, before performing the more costly segment intersection tests for each pair of sides. In Bennell & Oliveira (2008) the authors refer to a test for overlap based on this idea. The procedure makes use of the point containment test from Preparata & Shamos (1985) to evaluate if the vertices of one polygon are contained in the other, and presents an edge overlap evaluation based on the D-functions (Konopasek, 1981) to evaluate the edge intersections. Such a test can be evaluated fairly efficiently, but when used in a packing algorithm it would need to be complemented with a mechanism to decide the positions to be tested and a way of quantifying and resolving overlap if it existed. Regarding this last topic, we review overlap measures next.

We examine now the procedure to find the intersection of two polygons, which will yield either an empty set or one or multiple polygonal areas of intersection, see Figure 3.4. There are other so-called degenerate cases, where the output could contain single points or lines due to coincident edges or vertices; however, algorithm implementations usually work with a certain tolerance level to avoid such cases. A fairly recent algorithm for this operation is given in Kui Liu et al. (2007) and it works as follows. Given that two polygons have the same orientation (e.g. counter-clockwise), their intersection (and, in a very similar fashion, their union and difference) can be calculated based on the intersection points between their edges. Using the orientation, each intersection point can be classified as an entering or exiting point, depending on whether the edge of one polygon is entering or exiting the other polygon. Based on this, the edges of the output polygon can be determined. We illustrate this process in Figure 3.4.

Figure 3.4: Intersection of two non-convex polygons: (a) polygons to intersect with their orientations, (b) classification of intersection points, (c) edges of resulting polygons and (d) resulting polygons

This operation is quite common in the fields of Computational Geometry and Geographic Information Systems (GIS), where usually a number of polygons (subject polygons) are intersected with another one (clip polygon), and it is called clipping. For this reason, implementations of clipping are available in software libraries and have been extensively revised. In Chapter 5 we make use of the C++ Boost library for this purpose, which is based on the algorithm presented in Dobkin & Kirkpatrick (1985).
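As a hedged illustration of the clipping operation from a scripting environment (the thesis itself uses the C++ Boost library; shapely is used here only to show the operation, and the coordinates are ours):

```python
# A sketch of polygon clipping (intersection) using shapely.
from shapely.geometry import Polygon

subject = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])   # subject polygon
clip = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])      # clip polygon

result = subject.intersection(clip)   # may be empty, one or many polygons
print(result)        # POLYGON ((2 2, 4 2, 4 4, 2 4, 2 2)), up to vertex order
print(result.area)   # 4.0
```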

Let us now take a look at other ways of measuring overlap between two pieces without calculating the explicit intersection. In this category we find a few methods, including calculating the intersection area (but not the shape) or the penetration depth. Regarding the first approach, Egeblad et al. (2007) propose an algorithm to calculate the intersection area of two polygons. They successfully incorporate this procedure in the 2DNest algorithm, as we have seen in Section 2.3.2. The key idea is to calculate the areas between pairs of edges of the two polygons. Since the algorithm can be extended to 3D, we give a more detailed explanation at the end of this section.

The penetration depth for two polygons is defined as the minimum distance one of them should be displaced to stop overlapping the other one. This measure is sometimes restricted to horizontal or vertical directions (Bennell & Oliveira, 2008) in order to speed up its calculation during heuristic algorithms.

No-fit polygon

An alternative approach to deal with overlap is to use the no-fit polygon (NFP). Let us denote two polygons by $P$ and $Q$. Their reference points, $(P_x, P_y), (Q_x, Q_y) \in \mathbb{R}^2$, are points in the plane (often one corner of the bounding box of the polygon) that determine the position of the polygon (if the point is translated from its original position, so is the polygon). If $(P_x, P_y)$ is fixed, the no-fit polygon for $P$ and $Q$, $NFP_{PQ}$, is defined as a set such that:

• If $(Q_x, Q_y)$ belongs to the interior of $NFP_{PQ}$, $P$ and $Q$ overlap,

• if $(Q_x, Q_y)$ belongs to the frontier of $NFP_{PQ}$, $P$ and $Q$ are in contact but do not overlap and,

• otherwise, $P$ and $Q$ do not touch or overlap.

A more informal (and perhaps more intuitive) definition could be that $NFP_{PQ}$ is the polygon whose boundary is the trajectory generated by the reference point of $Q$ when $P$ is fixed and $Q$ slides around it. However, and despite its name, the no-fit polygon is not always a polygon in the traditional sense, as it might contain holes and have isolated or removed lines or points. For example, it might have holes in situations where this sliding operation would be interrupted, such as when the polygon $P$ has a hole large enough to accommodate $Q$, or a concavity with a 'narrow entrance'. We illustrate such a case in Figure 3.5.

Figure 3.5: (a) Polygons $P$ and $Q$ with reference points highlighted, (b) inner loop of $NFP_{PQ}$, (c) outer loop of $NFP_{PQ}$, (d) $NFP_{PQ}$

It is easy to imagine that if $Q$ fitted exactly through the concavity of $P$ in Figure 3.5, the no-fit polygon would have a removed line; and if $Q$ fitted exactly inside a 'hole' but was not allowed to move around, the resulting no-fit polygon would have a removed point rather than a hole.

Obtaining the no-fit polygon for two convex shapes can be done easily, as the resulting polygon is made from the edges of the original polygons. Cuninghame-Green (1989) describes a simple algorithm where the edges of the two polygons are ordered by their slope and joined together in that sequence, obtaining the no-fit polygon (called the Configuration Space Obstacle (CSO) in that work); see the sketch below. Naturally, this procedure can be extended to non-convex polygons if they are decomposed into convex parts and their pairwise no-fit polygons are joined back together. However, the complexity of the decomposition itself, added to the complexity of calculating many no-fit polygons, might not be computationally worthwhile.
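The following is a minimal sketch of this slope-ordering idea for two convex polygons, using the standard identity $NFP_{PQ} = P \oplus (-Q)$ that links it to the Minkowski sum defined next. The vertex orientation (counter-clockwise) and the choice of starting point are our own simplifications; the absolute position of the result depends on the reference points used.

```python
# A sketch of the convex no-fit polygon via slope-ordered edge merging.
# P and Q are convex polygons given as counter-clockwise vertex lists.
import math

def edges(poly):
    n = len(poly)
    return [(poly[(i + 1) % n][0] - poly[i][0],
             poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]

def convex_nfp(P, Q):
    """Shape of the no-fit polygon of convex P (fixed) and Q: P + (-Q)."""
    neg_Q = [(-x, -y) for (x, y) in Q]     # reflect Q through the origin
    # Merge the edge vectors of P and -Q by polar angle (slope ordering).
    all_edges = sorted(edges(P) + edges(neg_Q),
                       key=lambda e: math.atan2(e[1], e[0]))
    # Chain the sorted edges from an arbitrary start; the last edge closes
    # the polygon, so it is not appended as a new vertex.
    x, y = 0.0, 0.0
    nfp = [(x, y)]
    for dx, dy in all_edges[:-1]:
        x, y = x + dx, y + dy
        nfp.append((x, y))
    return nfp

P = [(0, 0), (2, 0), (1, 2)]          # a triangle
Q = [(0, 0), (1, 0), (1, 1), (0, 1)]  # a unit square
print(convex_nfp(P, Q))               # vertices of the resulting NFP shape
```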

Given the cost of this decomposition approach, researchers have developed a few methods over the years to calculate no-fit polygons directly for general polygons. One of the main approaches to its calculation is based on Minkowski sums. For polygons $P$ and $Q$, we define their Minkowski sum as the set of points given by

\[ P \oplus Q = \{p + q : p \in P,\ q \in Q\} \tag{3.5} \]

The relationships between the Minkowski sum and the no-fit polygon were first explored in Ghosh (1991) and revised and implemented in an algorithm later by Bennell et al. (2000). The key idea here is that the resulting edges of the Minkowski sum come, fully or partially, from the original edges of the input polygons. Bennell's addition in Bennell et al. (2000) includes working first with the convex hull of one of the shapes and then dealing with the concavities, to achieve a more streamlined algorithm. In a further work, Bennell & Song (2008) improve this algorithm and add support for identifying holes.

The other main approach, called the orbital method, entails the construction of the no-fit polygon using trigonometric operations that mimic the sliding of one polygon around the other. This idea first appeared in Mahadevan (1984) and was later extended and implemented by Burke et al. (2007), adding support for holes and interlocking concavities, and even for pieces with arcs (Burke et al., 2010a).

The difficulty of implementing these algorithms has been a barrier for many researchers to work in this area; however, an open source tool with the state-of-the-art methods has recently been made available by Wauters et al. (n.d.).

To the best of our knowledge, there are no extensions of the no-fit polygon to three dimensions available in the literature. Due to the large impact it has had on two-dimensional packing problems, one could expect the same in three dimensions, and this is the motivation for our second research chapter, where we develop a discrete approximation of this concept in 3D.

3.1.2.2 Avoiding overlap in three-dimensions

In this work, when we refer to polyhedra we use the definition of simple polyhedra given in Preparata & Shamos (1985). Polyhedra are three-dimensional objects defined by plane polygons (commonly called faces). The edges of these polygons should each be adjacent to exactly one other edge of the polyhedron, such that the resulting set of faces divides $\mathbb{R}^3$ into two disjoint sets, the interior and the exterior. In this definition, we assume that a polyhedron includes both its faces (boundary) and its interior. Without loss of generality, we can assume that the faces are convex and, indeed, many algorithms assume that they are triangular. Note that, as long as faces are simple polygons, they can be triangulated in linear time by a few algorithms, see for example Chazelle (1991). Since triangular faces simplify many proofs, this is a standard assumption in the literature, and some of the algorithms we cite in the following paragraphs use it.

As mentioned earlier, and unlike in the two-dimensional case, there is no equivalent of the no-fit polygon in three dimensions. Nevertheless, in the area of computational geometry, researchers have developed algorithms to find the related concept of Minkowski sums of polyhedra, but these have not been used for packing. A reason for this gap might be the difficulty of the problem. Finding a Minkowski sum is a challenge in its own right, and there are several algorithms to find them considering only the case of convex polyhedra (Bekker & Roerdink, 2001; Fogel & Halperin, 2007). To compute the Minkowski sum of non-convex polyhedra, a previous step is to decompose them into convex parts first, find the pairwise Minkowski sums and then perform the union of all of them (Hachenberger, 2009). These complex operations, together with the challenge of finding good packing layouts itself, can become impractical for even small-sized problems.

In this situation, the logical path to follow is to focus on other related calculations to avoid pieces overlapping. Similar to the two-dimensional case, we have the following options: just detecting the overlap, calculating some measure of the overlapping volume, or computing the full intersection between two polyhedra.

The simplest of them is to test if two given polyhedra overlap or not. In the computational geometry literature, this is called interference detection. On an interesting note, the related concept of collision detection is used when the polyhedra are moving over a trajectory; in such cases there are a number of specific options available to analyse the interferences over time and, since they are well beyond the scope of this thesis, the reader may refer to the review by Jiménez et al. (2001).

The detection of overlap between two convex polyhedra can be performed in linear time (see Dobkin & Kirkpatrick (1985)); however, for the general case, the test becomes more challenging. Similar to the test presented in Bennell & Oliveira (2008) for polygons, the overlap detection can be reduced to a series of tests growing in complexity that allow us to quickly discard the simplest situations for non-intersecting polyhedra. Such a test could start by examining bounding box intersections and containment, but it would entail a large number of intersection tests in the worst case, adding up to a complexity of $O(nm)$, where $n$ and $m$ are the numbers of edges of the two polyhedra (Thomas & Torras, 1994). We review now the problem of calculating an overlap measure for two polyhedra. To the best of our knowledge, there are no overlap measures for polyhedra other than the actual volume of overlap. Naturally, penetration depth could be defined in the same way as in 2D, but no packing algorithm seems to have used it.

Nevertheless, based on the same theory as the two-dimensional case we mentioned from Egeblad et al. (2007), it is possible to calculate the overlapping volume of two polyhedra without explicitly calculating their intersection. In Egeblad et al. (2009), in the context of a packing algorithm, they present a procedure to perform this calculation for two general (non-convex) polytopes. It starts by classifying the faces as positive, negative or neutral. A face is positive (negative) if its points, plus a small enough positive (negative) displacement on the x axis, are in the interior of the polyhedron. A face is neutral if it is neither positive nor negative. Once this distinction has been made, the volume can be calculated by finding the volumes of the regions between pairs of faces of the two polyhedra, adding them when the faces have different signs and subtracting them when the faces have the same sign. While we described here the three-dimensional version, this algorithm can work with general polytopes in any dimension.

Finally, we refer to the problem of computing the actual intersection of two polyhedra. Due to its practical applications, especially with the rise of computers and computer graphics, there are a number of algorithms to compute it for two convex polyhedra, an operation that can be performed in linear time (Chazelle, 1989). However, if the polyhedra are not convex, the problem becomes more challenging. In Mehlhorn & Simon (1985) they present an algorithm to compute the intersection of two polyhedra, one general and one convex. Their algorithm is based on finding a solution to the support problem and then converting it into a solution to the intersection. The support problem consists in finding at least one point for each edge-face intersection that occurs between the two polyhedra (this problem was already the basis in Hertel et al. (1984) for finding the intersections of convex polyhedra). After the support problem is solved, its solution is used to find the intersection of the two polyhedra. Each point of the solution of the support problem belongs to the intersection of one edge $e$ from one polyhedron with a face $f$ from the other polyhedron. The edge $e$ is adjacent to some faces, namely $f_1^e, f_2^e, \dots$ By finding the intersections of the faces $f_1^e, f_2^e, \dots$ with the face $f$, one can start finding the edges and vertices that will be in the intersection of the boundaries of the two. Using this information, and analysing the edges of the original polyhedra, the intersection can be fully constructed. This algorithm runs in $O((n + m + s) \log (n + m + s))$ time, where $n$ and $m$ are the numbers of edges of the intersecting polyhedra and $s$ is the number of edges of the resulting intersection.

If both polyhedra are non-convex, the easiest option is to decompose one or both of them into convex polyhedra and check their pairwise overlaps. To directly check the overlap between a pair of non-convex polyhedra, we refer to the implementation by Thomas & Torras (1994). This work proposes an efficient implementation (within the same complexity, $O(nm)$) of the overlap detection that, in many practical situations, performs better than the worst-case bound.

3.1.3 Discrete representations

Discrete representations work by approximating the original pieces by a discrete set of simple shapes. We distinguish between two major approaches, depending on whether the container is also discretised or not. If the container is discretised, we talk about raster methods. However, if the discretisation affects only the pieces, we call it shape decomposition.

Raster methods

In two dimensions these representations are called pixel or raster representations. The most common shapes used as building blocks are squares (pixels). If all the squares are equal, the pieces can be represented simply by a matrix with binary values, as we see in Figure 3.6.

Figure 3.6: Irregular piece represented as a binary matrix. (Extracted from Bennell & Oliveira (2008))

In Segenreich & Faria Braga (1986) they take this representation a step further and define some pixels to be the boundary of the polygon. In the matrix, they represent these by a 1, while the interior of the piece is represented by a 3 and empty spots by a 0. See Figure 3.7 for an example.

Figure 3.7: Raster representation with boundary definition. (Extracted from Segenreich & Faria Braga (1986))

While this is an interesting perspective, it remains unclear how to determine which pixels form the contour of a piece as, in general, the intersection of a pixel with a polygon could contain all three: the interior, the contour and the exterior of the piece.

Some researchers go even further and also replace the simplicity of the binary code by an integer code to include more information relevant for packing. In Ramesh Babu & Ramesh Babu (2001), the authors use values to indicate the number of pixels a piece has to be moved to its right to resolve the overlap in sheets with defects.

Figure 3.8: Pixel representation with information about overlap. (Extracted from Ramesh Babu & Ramesh Babu (2001))

An example of this is shown in Figure 3.8, where a container with defects (holes) is displayed with the overlap information included. Note that this is a very similar concept to the penetration depth reviewed for polygonal representations.

The three-dimensional versions of pixels, called voxels, are common in computer graphics. They are a natural extension of pixel representations and the same properties hold, but with three-dimensional matrices. To our knowledge, there are no extensions to three dimensions of the more sophisticated raster representations with packing information, such as the one from Ramesh Babu & Ramesh Babu (2001), probably due to the high memory costs this would entail. Nevertheless, those methods would be straightforward to generalise to higher dimensions.

In the literature, researchers have tried to avoid the high memory cost of fine approximations by using tree structures. These structures (Baert et al., 2013; Schwarz & Seidel, 2010), often called octrees, follow a hierarchical order and only go into detail in the small parts of objects.

These representations, as we see in Figure 3.9, use different resolutions, only refining the representation when necessary, potentially leading to a very good representation quality for a fraction of the memory needed.
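A minimal sketch of the recursive idea behind such a structure, assuming pieces are given as boolean voxel grids with cubic, power-of-two sides; the class layout is our own illustration, not taken from the cited works.

```python
# A sketch of building an octree over a boolean voxel grid (side = 2^k).
import numpy as np

class OctreeNode:
    def __init__(self, grid: np.ndarray):
        if grid.all() or not grid.any():
            # Uniform region: store a single leaf, no further subdivision.
            self.leaf, self.full, self.children = True, bool(grid.all()), None
        else:
            # Mixed region: split into 8 equal octants and recurse.
            self.leaf, self.full = False, None
            h = grid.shape[0] // 2
            self.children = [
                OctreeNode(grid[i*h:(i+1)*h, j*h:(j+1)*h, k*h:(k+1)*h])
                for i in range(2) for j in range(2) for k in range(2)
            ]

voxels = np.zeros((8, 8, 8), dtype=bool)
voxels[:4, :4, :4] = True      # one solid corner block
root = OctreeNode(voxels)
print(root.leaf)               # False: the grid is mixed at the top level
```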

Avoiding overlap in pixel/voxel representations can be done in two steps. The first one is to check if the bounding boxes intersect. If they do, the second step is to identify the parts of the matrices corresponding to this intersection and compare them element-wise. The pieces overlap only if they both have a pixel/voxel in the same place. This same test, or similar ones, can be used with all the representations that have an underlying grid, such as octrees.
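A hedged sketch of this two-step test for voxelised pieces, assuming each piece stores its binary matrix together with the integer offset of that matrix within the container grid (the function name is ours):

```python
# A sketch of the two-step raster overlap test in three dimensions.
import numpy as np

def voxels_overlap(a, off_a, b, off_b) -> bool:
    """a, b: boolean 3D arrays; off_a, off_b: (x, y, z) grid offsets."""
    off_a, off_b = np.asarray(off_a), np.asarray(off_b)
    lo = np.maximum(off_a, off_b)                          # bounding-box
    hi = np.minimum(off_a + np.array(a.shape),
                    off_b + np.array(b.shape))             # intersection
    if np.any(lo >= hi):
        return False     # step 1: bounding boxes do not intersect
    # Step 2: compare the overlapping sub-matrices element-wise.
    sa = tuple(slice(l - o, h - o) for l, h, o in zip(lo, hi, off_a))
    sb = tuple(slice(l - o, h - o) for l, h, o in zip(lo, hi, off_b))
    return bool(np.any(a[sa] & b[sb]))

p1 = np.ones((2, 2, 2), dtype=bool)
p2 = np.ones((2, 2, 2), dtype=bool)
print(voxels_overlap(p1, (0, 0, 0), p2, (1, 1, 1)))   # True: cells clash
print(voxels_overlap(p1, (0, 0, 0), p2, (2, 0, 0)))   # False: disjoint
```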

Figure 3.9: Example of a model represented by a voxel octree. (Extracted from Schwarz & Seidel (2010))

A contribution of this work is the no-fit voxel, a tool similar to the no-fit polygon, adapted to three-dimensional raster representations. We give a precise definition and examples of its usage in Chapter 6.
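Without anticipating the precise definition given in Chapter 6, one plausible way to compute such a structure for two voxelised pieces is a cross-correlation of their binary matrices: each entry of the correlation counts the clashing voxels for one relative displacement, so the non-zero entries mark the overlapping placements, in analogy with the interior of a no-fit polygon. A hedged sketch of this idea only:

```python
# A sketch of a no-fit-voxel-style structure via binary cross-correlation.
# This is our illustration of the idea, not the construction of Chapter 6.
import numpy as np
from scipy.signal import correlate

a = np.zeros((4, 4, 4)); a[:2, :2, :2] = 1   # two small voxelised pieces
b = np.zeros((4, 4, 4)); b[:1, :3, :1] = 1

clashes = correlate(a, b, mode='full')  # one entry per relative displacement
nfv = clashes > 0.5                     # True where the pieces would overlap
print(nfv.shape)                        # (7, 7, 7): all relative placements
```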

Shape approximations

Approximation of shapes by collections of simpler ones is a common technique to tackle geometrical problems and, therefore, it is not surprising that it has also been used for packing problems. If the approximations overestimate the piece, they are sometimes called coverings. It is relatively common to find decompositions and coverings of shapes into circles or spheres, especially if the aim is to include rotations. An example of circle coverings of polygonal shapes is presented in Rocha et al. (2014). A similar idea, not aiming to represent the pieces themselves but to serve as an intermediate step, is presented in Jones (2013). In this work, the pieces are represented by coarse circle approximations that do not cover the original shapes in full. If, because of this scarce covering, the solution yields any infeasibility, the representations are made more accurate to avoid it and the problem is solved again. In three dimensions, there is an example in the work of Edelkamp & Wichern (2015), which uses a sphere-tree covering in order to solve a packing problem with rotations.

3.2 Optimisation

In this section we focus on the optimisation aspect of the thesis and review a few methodologies that are appropriate for our purposes. Before digging into the solution methods and algorithms available to solve the problems proposed in our introduction, it is worth taking a moment to consider their complexity, which we do in Section 3.2.1. A review of the solution techniques follows in Sections 3.2.2 to 3.2.6.

3.2.1 Complexity

Complexity theory is based on complexity classes for decision problems. If a problem can be solved in a polynomial number of steps, i.e. the number of steps it takes to solve it is bounded by a polynomial function of the input size, it belongs to the class P and we say it can be solved in polynomial time. If, given a solution for a decision problem, its correctness can be verified in polynomial time, the problem belongs to the complexity class NP (non-deterministic polynomial). Whether P is equal to NP or not is still one of the most famous open problems in this area.

A further complexity class is NP-hard. Unlike P and NP, the NP-hard class contains search problems (not only decision problems) and, in particular, optimisation problems. The NP-hard problems are informally defined as being at least as hard as the hardest problems in NP. This means that if one could find an algorithm to solve any NP-hard problem in polynomial time, then one could solve all NP problems in polynomial time, just by using this algorithm together with a polynomial time algorithm to transform the input from one problem to the other. We must clarify that not all problems in NP are NP-hard (unless P = NP) and not all NP-hard problems are in NP (for example, optimisation problems); if a problem is both in NP and NP-hard, it is labelled NP-complete.

The optimisation problems that we consider in this thesis are, in essence, a bin packing problem (Chapter 4), a problem related to set covering in the plane (Chapter 5) and a three-dimensional strip packing problem (Chapter 6). All three problems have been shown to be NP-hard before. Let us outline the reasons why they belong to this problem class:

• Bin packing problem: The partition problem can be reduced to the decision problem associated with classical bin packing, which is therefore NP-complete (Garey & Johnson, 1979); thus the associated optimisation problem is NP-hard.

• Set covering in the plane: Fowler et al. (1981) define the planar geometric covering problem as the decision problem that determines whether a given set of geometric objects can be located in the plane in such a way that they completely cover another (possibly disconnected) set of points. In the same work, the authors prove that this is an NP-complete problem. This decision problem is a special case of our problem in Chapter 5, making it, therefore, NP-hard.

• Strip packing problem: The decision problem associated with strip packing – whether a given set of pieces can be placed in a given container or not – is NP-hard (Fowler et al., 1981) and, therefore, so is the strip packing problem.

It must be noted that NP-hardness does not directly imply that these problems cannot be solved in a reasonable time for reasonable instance sizes. This is perhaps best illustrated by Pisinger's article (Pisinger, 2005), which argues that all instances in the literature for knapsack problems are “easy” for current techniques and that there is a need to propose more complicated instances. This reasoning makes a case for analysing exact methods first, in relation to the instance sizes that need to be solved. For this reason, when possible, we will first construct mathematical models for our problems before proposing heuristic solutions.

3.2.2 Integer Linear Programming Models

Most of the problems in the field of cutting and packing have a strong combinatorial optimisation basis. This is evident in the classical problems, such as bin packing or the knapsack problem, which can usually be formulated as integer linear programs (ILP). In higher dimensions, certain shape representations allow us to continue using such models. This is the case for polygons and polyhedra. These representations are bounded by lines or planes and allow us to write linear constraints. It is also possible to use raster representations, since they can be represented by binary matrices. The key is to have a preprocessing step that can linearise the intrinsically non-linear part of finding overlap (edge intersections, etc.). In two dimensions this can be achieved by the no-fit polygon (Fischetti & Luzzi, 2009; Alvarez-Valdes et al., 2013). Another option is to discretise the space and use decision variables to place the pieces on the grid. This can work either with polygons, as in Toledo et al. (2013), or with discrete representations such as the one we present in Chapter 6. An important observation is that while these models based on discrete representations can be, at least theoretically, solved to optimality, they do not necessarily return an optimal solution to the original problem. This is the case both in Toledo et al. (2013) and in Chapter 6, where the solution remains optimal only for the grid resolution being used, and coarse resolutions might yield, in fact, poor solutions for the original problem.

Formulating a problem as a linear program or as an integer linear program is very convenient, since there is a significant body of literature devoted to their solution methods. Linear programs can be solved in polynomial time (by the ellipsoid method (Khachiyan, 1979), for instance). Unfortunately, this result does not extend to the case where one imposes integrality constraints on all or some of the variables. Not surprisingly, there are well developed techniques to solve such problems. In the following paragraphs we summarise a few of the key techniques. For a more detailed read, we point to the excellent historical review of Cook (2010).

Intuitively, the most direct approach is to enumerate all the possible solutions, which is possible, but notably impractical. The next step is to take advantage of the solution of the linear relaxation of the problem. If an integer variable has a fractional value, two new problems are generated, constraining the variable to be larger (smaller) than or equal to the closest larger (smaller) integer. This is called branching and is the basis for a range of different solving techniques. Each of the subproblems generated is called a node. The optimal solution will be in (at least) one of these nodes, so we can apply branching again to generate further nodes and, subsequently, a tree structure.

The most popular technique for exploring this tree is branch-and-bound. This technique, originally developed for the travelling salesman problem (Eastman, 1958; Little et al., 1963), consists in generating a branching tree, whose size is then reduced by applying a ‘bounding’ procedure in the different branches. It uses bounds for the problem in each of the tree branches in order to avoid exploring some of the nodes in the tree.
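As a small illustration of these ideas (our own example, not taken from the cited works), the following sketch applies branch-and-bound to the 0/1 knapsack problem, pruning nodes with the bound given by the greedy solution of the linear (fractional) relaxation.

```python
def knapsack_bb(values, weights, capacity):
    """Tiny branch-and-bound for the 0/1 knapsack problem."""
    # Sort items by value density so the fractional relaxation is greedy.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(k, cap, val):
        # Greedy fractional completion gives a valid upper bound.
        for i in range(k, len(v)):
            if w[i] <= cap:
                cap -= w[i]; val += v[i]
            else:
                return val + v[i] * cap / w[i]
        return val

    def branch(k, cap, val):
        nonlocal best
        if k == len(v):
            best = max(best, val)
            return
        if bound(k, cap, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[k] <= cap:           # branch: include item k ...
            branch(k + 1, cap - w[k], val + v[k])
        branch(k + 1, cap, val)   # ... or exclude it

    branch(0, capacity, 0)
    return best
```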

Another possibility to find the solution of an ILP is to solve its linear relaxation and, if the solution is fractional, add a constraint (called a cut) that is still satisfied by the optimal (integer) solution, but not by the fractional solution found. This approach was first developed by Dantzig et al. (1954), again in an attempt to solve the travelling salesman problem. In a later work, Gomory (1958) described a full method that iteratively adds maximally violated cuts to the original problem until an integer solution is obtained, and proved that this can be achieved in a finite number of steps.

Branch-and-bound and the cutting plane method can be combined in what is called branch-and-cut. This idea, first introduced by Markowitz & Manne (1957), has proven to be a very successful approach and has in fact been responsible for major breakthroughs in solving the travelling salesman problem, such as the one published by Crowder & Padberg (1980).

Nowadays, all these techniques are implemented in state-of-the-art integer programming solvers such as IBM's CPLEX, and we take advantage of these implementations to solve our mathematical models in Chapter 4 and Chapter 6.

Nevertheless, as we have stated in Section 3.2.1, the problems we tackle in this thesis are all NP-hard; unless someone finds an algorithm that proves that P = NP, our current exact methods will fail sooner or later as instance sizes grow. As we will show in the following chapters, in our problems this happens with small instances, and therefore we will require heuristics and metaheuristics to find ‘good enough’ solutions in a reasonable amount of time.

3.2.3 Heuristics

A heuristic algorithm is a procedure to find a solution for an optimisation problem without the guarantee that the solution obtained is optimal. While heuristics can vary largely in their design and sophistication, they are usually characterised by being quick, and they often draw inspiration from how we would solve the problem manually.

They are a suitable approach when exact methods cannot find a feasible solution, or when the problem is too complex to be modelled exactly. Furthermore, they are also useful to provide initial approximations or upper bounds, or as a method to tackle problems where time is more important than quality. Since they are problem specific, it is difficult to give a full review of them. Instead, we have categorised them and give some examples relevant to the problems we consider in this thesis.

3.2.3.1 Constructive algorithms

A constructive algorithm is a process to build a feasible solution for a given problem from scratch. Some of the classical algorithms we have reviewed in Chapter 2, such as the first-fit decreasing algorithm for bin packing problems or the bottom-left-corner heuristic for strip packing problems, fall into this category. They are the first step in more sophisticated metaheuristics, either as an initial solution (in single-solution algorithms) or as a means to generate an initial population (in evolutionary algorithms). Often, their intermediate steps, which are partial solutions, are also of interest, as they can be used as building blocks for metaheuristics and hyper-heuristics.

3.2.3.2 Local search

Local search has an elusive definition, but it entails the intuitive idea of starting at a given solution and, by applying small perturbations, “moving” to a solution in its neighbourhood that has better quality. Eventually, this process leads to a solution whose neighbours are all of worse quality. This kind of solution is labelled a local optimum.

Given an initial solution L, we can define a neighbourhood as a set of solutions N(L) that is constructed by applying a perturbation, which can be defined as one wishes, to L. A move is then defined as replacing the current solution with one belonging to its neighbourhood. If we only accept improving solutions, the algorithm is called hill climbing and can be performed in a number of different ways. Probably the simplest of all would be to evaluate the objective function on all of the neighbours and move to the one with the best value. This technique is called steepest ascent (for maximisation problems; otherwise it would be steepest descent) in the literature. Another technique would be to start exploring a neighbourhood (possibly in a random order) and move to the first solution that provides an improvement with respect to the current one; this is usually called greedy ascent (or descent, when minimising). It is easy to see that both of these techniques end up in a local optimum. We define a solution L∗ as a local optimum in the neighbourhood N(L∗) for the objective function f if the following inequality holds in a maximisation problem

f(L∗) ≥ f(L), ∀L ∈ N(L∗) (3.6)

See Figure 3.10 for a schematic illustration of the optimisation process on an irregular solution landscape, where we assume that the solutions are sorted along the horizontal axis according to their proximity in terms of the neighbourhood used.

Figure 3.10: Steps of a hill climbing algorithm. It starts from the initial solution L0 and iteratively explores the neighbourhoods of the incumbent solution, N1, . . . , N4, until it arrives at the local optimum L∗.
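The following generic sketch captures both variants described above; the solution representation, neighbourhood function and objective are left abstract and the names are our own illustration.

```python
import random

def hill_climb(initial, neighbours, f, steepest=True):
    """Generic hill climbing for maximisation. `neighbours(s)` returns the
    neighbourhood N(s) and `f` is the objective. With steepest=True the
    best neighbour is chosen (steepest ascent); otherwise the first
    improving neighbour found in random order (greedy ascent)."""
    current = initial
    while True:
        candidates = list(neighbours(current))
        if steepest:
            best = max(candidates, key=f, default=current)
            if f(best) <= f(current):
                return current  # local optimum reached
            current = best
        else:
            random.shuffle(candidates)
            for c in candidates:
                if f(c) > f(current):
                    current = c
                    break
            else:
                return current  # no improving neighbour: local optimum
```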

Local search is often a building block in metaheuristic algorithms, referring to a step where a solution is improved from its current status to a better solution or a local optimum, as we will see in Section 3.2.4.

3.2.3.3 Approximation algorithms

Approximation algorithms can be informally defined as heuristic algorithms that guarantee a certain solution quality. For certain algorithms, it is possible to analyse the worst case performance as a ratio relative to the value of the optimal solution. There exist a number of such algorithms for one-dimensional packing (we refer the reader to Section 2.2, where we have reviewed some of the work done in this area); however, to the best of our knowledge, there are no approximation algorithms dealing with irregular shapes in two or three dimensions.

3.2.4 Metaheuristics

Despite their popularity, there is no ubiquitous definition of metaheuristics. In fact, even the spelling (metaheuristics or meta-heuristics) does not seem to have a universally accepted form in the literature. For the definition we follow the idea of Sörensen & Glover (2013), which presents metaheuristics as a problem-independent general framework for solving problems; when implemented in practice, the implementation will be a heuristic algorithm itself. For the spelling, we use metaheuristics, as it seems to be the form used by the experts in this field, such as the EURO working group on Metaheuristics or the Metaheuristics International Conference (MIC).

We identify two main types of metaheuristics: those operating on a single solution and those operating on a group of solutions (usually called a population). Single-solution algorithms improve the solution iteratively in order to find an optimum. Usually, some parts of the algorithm will allow the solution quality to become worse, with the aim of exploring different parts of the solution space and finding other (better) optima. On the other hand, population-based algorithms work with a group of solutions. This is more expensive in memory, but it provides an advantage in many situations, for example, when solutions can be thought of as having different parts that are worth sharing. Another situation is when there is a reason for obtaining more than one answer, for example when dealing with multi-objective problems. In the remainder of the section we review a few main metaheuristics, including the ones used in this thesis.

3.2.4.1 Iterated local search

Iterated local search (ILS) is a simple metaheuristic algorithm that consists of two steps: a local search and a disruption of the solution, sometimes called a shake, kick or perturbation. The local search part advances from the current solution to a local optimum for the neighbourhood being considered. The shake is a more dramatic change in the solution, intended to move to a new, different area of the solution space, where local search is applied again.

Both the local search and the shake need to be based on problem-specific information for the algorithm to be successful. The strength of the shake is an important aspect to be considered. If it simply moves to a completely different solution at random, the algorithm effectively becomes a multistart algorithm. However, if the perturbation is too small, the local search might end up in the same previous local optimum. A detailed review with these and other considerations about ILS is available in Lourenço et al. (2010).
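A minimal ILS skeleton, with the local search, shake and objective left as problem-specific callbacks; the 'better or equal' acceptance shown here is one common choice among several.

```python
def iterated_local_search(initial, local_search, shake, f, iterations=100):
    """Skeleton of ILS for maximisation: alternate a local search with a
    problem-specific shake, keeping the best local optimum found."""
    current = local_search(initial)
    best = current
    for _ in range(iterations):
        candidate = local_search(shake(current))
        if f(candidate) >= f(current):
            current = candidate  # 'better or equal' acceptance criterion
        if f(current) > f(best):
            best = current
    return best
```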

3.2.4.2 Simulated annealing

Simulated annealing was first introduced by Kirkpatrick et al. (1983) to solve combinatorial optimisation problems, inspired by the behaviour of atoms depending on their temperature. The key idea is that, at higher temperatures, particles are more likely to move to non-optimal positions and, at lower temperatures, they settle in stable configurations. Following this idea, simulated annealing defines a temperature parameter that starts with a non-zero value and decreases over time. In each iteration, the algorithm explores nearby solutions and moves to them with a certain probability. This probability depends on both the temperature and the difference in the objective function value. If the objective function increases (in a maximisation problem) then the solution is always accepted, but if it decreases by ∆, the solution is accepted with probability

P_i = e^{-\Delta / t_i}, \qquad (3.7)

where t_i denotes the temperature at the i-th iteration. An important component in simulated annealing is the cooling schedule, the part that decides how the temperature decreases over the iterations. In the original work, Kirkpatrick chooses the temperature of the i-th iteration to be

t_i = 0.9^i \, t_0, \qquad (3.8)

where t_0 is the initial temperature. But other researchers have come up with different cooling schedules, including schemes that decrease and increase the temperature depending on the performance of the algorithm, such as the packing algorithm in Dowsland (1993).
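A minimal simulated annealing sketch for a maximisation problem, using the acceptance probability (3.7) and the geometric cooling schedule (3.8); the neighbour function and the parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(initial, neighbour, f, t0=100.0, alpha=0.9,
                        iterations=1000):
    """Simulated annealing (maximisation) with the geometric cooling
    schedule t_i = alpha**i * t0 of equation (3.8)."""
    current = best = initial
    t = t0
    for _ in range(iterations):
        candidate = neighbour(current)
        delta = f(current) - f(candidate)  # positive when the move worsens
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate  # improving, or worsening accepted by (3.7)
        if f(current) > f(best):
            best = current
        t *= alpha  # cool down
    return best
```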

3.2.4.3 Tabu search

Tabu search is a widely used metaheuristic introduced by Glover (1989). Unlike simulated annealing, where neighbours are evaluated one at a time, in tabu search usually the whole neighbourhood is explored before advancing to the best neighbouring solution. This move is performed regardless of whether it improves or worsens the current objective function. By accepting worsening moves, the search is able to escape local optima and eventually arrive at better objective values.

Since non-improving solutions are accepted, there is the risk of cycling, i.e. moving back and forth between the same solutions. To prevent this, tabu search incorporates a memory element, which helps to guide the search. While there are many implementation variants, in its basic form this memory element is a list of fixed length, called the tabu list. The purpose of the tabu list is to maintain a record of recently visited solutions and forbid the algorithm from revisiting them. Usually, rather than full solutions, the tabu list contains just their features. After a move is performed, its features are added to the beginning of the tabu list and, to keep its length constant, the last element is removed.

Effectively, the tabu list dynamically modifies the neighbourhood used in each iteration, potentially reducing its size. As a consequence, some good neighbouring solutions might be missed because they are part of the tabu list. To avoid this situation, an aspiration criterion is usually defined, which allows the search to move to tabu solutions if the criterion is fulfilled (for example, if the tabu solution is the best found so far).

Tabu search can also include medium-term memory (a list of good solutions, whose attributes are encouraged) and long-term memory (information about visited solutions, with the aim of moving to unvisited areas of the solution space). Furthermore, it can have mechanisms such as strategic oscillation, which allows infeasible solutions during parts of the search, as in our implementation in Section 6.6.2. A comprehensive list of implementation options for tabu search is given in Talbi (2009).
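A sketch of the basic form described above, assuming the solution features are hashable and taking 'better than the best found so far' as the aspiration criterion; all names are our own illustration.

```python
from collections import deque

def tabu_search(initial, neighbours, f, feature, tabu_len=10, iterations=200):
    """Basic tabu search (maximisation). `feature(s)` maps a solution to
    the attribute stored in the tabu list; the aspiration criterion lets
    a tabu move through if it beats the best solution found so far."""
    current = best = initial
    tabu = deque(maxlen=tabu_len)  # fixed-length tabu list
    for _ in range(iterations):
        candidates = [s for s in neighbours(current)
                      if feature(s) not in tabu or f(s) > f(best)]
        if not candidates:
            break  # the whole neighbourhood is tabu
        current = max(candidates, key=f)  # best move, even if worsening
        tabu.append(feature(current))     # oldest feature drops off
        if f(current) > f(best):
            best = current
    return best
```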

3.2.4.4 Variable neighbourhood search

Variable neighbourhood search (VNS) was introduced by Mladenović & Hansen (1997). Its main idea is to have a collection of neighbourhoods; this way, if there are no improving solutions in one, another (possibly larger) neighbourhood can be explored. We implement this technique for the three-dimensional irregular packing problem and review it in detail in Section 6.6.3.

3.2.4.5 Genetic algorithms

Genetic algorithms, often shortened to GA, are certainly among the most popular evolutionary algorithms, a type of metaheuristic that draws a parallel between the evolution of species and the optimisation of a solution.

In a nutshell, they work with a shortened, problem-specific representation of the solution called a chromosome. A group of chromosomes is generated at an initial stage and they form the so-called population. A chromosome can be decoded into a solution; the decoding is specific to each problem, though there are some standard encodings such as random keys (Bean, 1994). Two operations are defined on chromosomes: crossover and mutation. Crossover takes a number of chromosomes (called parents) and creates new ones (called offspring) inheriting some of the parents' properties. Mutation is an operation that typically adds some random modification to a single chromosome to produce a similar but different result. In each iteration, the population evolves by retaining its best chromosomes, generating some by crossover and adding some more by mutation.
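The following skeleton illustrates this loop on binary chromosomes with one-point crossover and bit-flip mutation; the encoding, operators and parameter values are illustrative choices rather than the ones used in our research chapters.

```python
import random

def genetic_algorithm(pop_size, length, decode, fitness, generations=100,
                      crossover_rate=0.8, mutation_rate=0.1):
    """GA skeleton on binary chromosomes; `decode` turns a chromosome
    into a solution and `fitness` evaluates it (maximisation).
    Assumes pop_size >= 4 and length >= 2."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda c: fitness(decode(c)),
                        reverse=True)
        elite = scored[:pop_size // 2]  # retain the best chromosomes
        offspring = []
        while len(elite) + len(offspring) < pop_size:
            pa, pb = random.sample(elite, 2)  # pick two parents
            child = pa[:]
            if random.random() < crossover_rate:
                cut = random.randrange(1, length)  # one-point crossover
                child = pa[:cut] + pb[cut:]
            for i in range(length):  # bit-flip mutation
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            offspring.append(child)
        population = elite + offspring
    return max(population, key=lambda c: fitness(decode(c)))
```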

Being population-based, GAs are especially well suited to tackle multi-objective problems where a combination of Pareto efficient solutions is of interest (Konak et al., 2006); we take advantage of this in Chapter 4. In Chapter 5 we also use a GA, in this case because it is a methodology that lends itself well to sequencing problems (Bean, 1994).

3.2.4.6 Other metaheuristics

In this section we have reviewed some of the classical metaheuristics and the ones relevant to this thesis. However, there are plenty of others available in the literature and it would be unrealistic to review all of them. Some propose interesting and innovative ideas, but many just add slight variations on the same concepts: exploring neighbourhoods and sharing information between solutions.

In this sense, we would like to mention the work by Sörensen (2015), which gives an interesting review of the status of the field. It criticises the use of excessive metaphors and obscure language that disguise some recent non-innovative algorithms, and points out avenues for future developments in metaheuristics, including matheuristics and hyper-heuristics, which we review next.

3.2.5 Matheuristics

Matheuristics combine both heuristic and exact procedures in one algorithm. The result is a heuristic algorithm since, more often than not, these algorithms cannot prove optimality. However, applying exact methods to some parts can dramatically improve performance. There are a number of ways in which the combination can be done, and we point the reader to the review by Dumitrescu & Stützle (2003) or the taxonomy of Jourdan et al. (2009).

One of the most common approaches is, as identified by Dumitrescu & Stützle (2003), to use an exact method to explore large neighbourhoods within a local search algorithm. Another common approach is to develop a constructive algorithm that makes optimal decisions at every step; this approach was used in a constructive algorithm for a 2D irregular strip packing problem by Martinez-Sykora et al. (2015). In a later work, Martinez-Sykora et al. (2016) propose another matheuristic, in this case for 2D irregular bin packing. They solve a one-dimensional bin packing problem exactly to determine the assignment of pieces to bins, and then the feasibility of the assignment is checked with the previously mentioned constructive algorithm. If the assignment is not feasible, a constraint is added to the initial one-dimensional model and the process starts again.

In general, all the heuristic or metaheuristic algorithms for packing that include a compaction model – such as the one we use in Chapter 6 – would fall into the category of matheuristics as well.

3.2.6 Hyper-heuristics

In a nutshell, a hyper-heuristic is an algorithm designed to solve a certain problem by searching among heuristic algorithms for that problem. We find a more precise definition in Burke et al. (2010b):

“A hyper-heuristic is an automated methodology for selecting or generating heuristics to solve hard computational search problems”

In their definition, two clear categories arise, depending on whether the hyper-heuristic selects or generates heuristic algorithms. The first category was introduced by Cowling et al. (2001) and demonstrated in an implementation of a hyper-heuristic for a sales summit scheduling problem. The key idea was to select among a series of local search neighbourhoods, thus being very similar to variable neighbourhood search. The innovative part comes from the introduction of a mechanism that chooses which heuristic to apply next based on its recent and past performance on the problem. This mechanism can also consider the potential impact of applying two heuristics one after the other.

The other branch of hyper-heuristics is devoted to generating computer programs that solve a problem from pre-existing building blocks typically used in heuristic algorithms. For example, genetic programming (Koza, 1994) is based on evolving a population of computer programs by combining their elementary parts in a genetic algorithm fashion. We find an example of this in Burke et al. (2006b), where this methodology is used to devise a hyper-heuristic algorithm for the online one-dimensional bin packing problem. The aim of the hyper-heuristic is to devise a rule that decides to which bin the next item should be allocated. Interestingly, in the majority of cases the system evolved computer programs that mimic the behaviour of the well-known first-fit heuristic.
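To make the selection mechanism concrete, the following sketch keeps a score per low-level heuristic and picks heuristics with probability proportional to their recent success; the scoring rule is a simplified stand-in for the mechanisms in the cited works, and all names are our own.

```python
import random

def selection_hyper_heuristic(solution, heuristics, f, iterations=500):
    """Minimal selection hyper-heuristic (maximisation): each low-level
    heuristic is a callable mapping a solution to a new solution; those
    that recently improved the incumbent are chosen more often."""
    scores = [1.0] * len(heuristics)
    for _ in range(iterations):
        idx = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[idx](solution)
        if f(candidate) > f(solution):
            solution = candidate
            scores[idx] += 1.0  # reward recent success
        else:
            scores[idx] = max(0.1, scores[idx] * 0.9)  # mild penalty
    return solution
```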

3.3 Conclusion

In this chapter we have covered geometry, divided into three approaches: polygonal representations, phi-objects and discrete representations. In this thesis we use two of them: polygons in Chapter 5 and a discrete representation in Chapter 6. The optimisation methods that can be used are strongly influenced by the geometry chosen. For example, polygonal representations can be used to formulate integer linear constraints, while phi-objects and phi-functions result in non-linear programming models. Except for one-dimensional packing, where exact and approximation methods can solve many of the standard instances in the literature, most of the literature uses metaheuristics to solve irregular cutting and packing problems in practical settings. This thesis is no exception and, while we provide exact models when possible, we also develop metaheuristic algorithms in our three research chapters.

Chapter 4

Efficient management of heterogeneous helicopter fleets

Abstract The management of military helicopter fleets includes the evaluation of the capabilities of existing fleets and the planning of future helicopter acquisitions, based on the aircraft necessary to perform a specific set of missions. In this work, we present a mathematical description of mission planning in this context, which is mainly based on bin packing and load balancing. This is a problem new to the literature, which includes multi-objective vector bin packing with heterogeneous bins, multiple choice positioning and conflict constraints.

We present three different algorithms: a heuristic, a metaheuristic and a mixed integer linear program. Our heuristic approach is a generalisation of the well-known First Fit algorithm and the metaheuristic is a genetic algorithm; both are coupled with a load balancing heuristic. The MILP is able to solve the packing and the load balancing together. All three are implemented in such a way that they are able to find the solutions on the Pareto frontier. We present results for a range of instances, including data drawn from practical applications.

Keywords: Cutting and packing, Heuristics, Integer programming, Military, Multi-objective, Multiple choice, Optimization, Strategic investment planning

4.1 Introduction

In this chapter we study a problem that arises in the planning of investment into military assets, in this case helicopters. While we focus on this specific example, the approaches can be generalised to a range of strategic resource planning scenarios with multiple capacity constraints. The problem was proposed by the Defence Science and Technology Laboratory (Dstl), which is part of the Ministry of Defence in the United Kingdom. The aim is to assist in the strategic planning of helicopter fleets by evaluating the fleet capacity required to execute likely future missions. We develop optimisation models for loading the minimum number of helicopters with the necessary items and personnel for a particular mission. By considering various pools of missions that may need to be delivered in the future, we identify a set of Pareto efficient helicopter fleets containing multiple helicopter types.

Each mission requires the transportation of a set of items with a certain set of capacity requirements. The items can be transported by different helicopter types, and each helicopter type has a different travel distance profile that is directly affected by the weight it is carrying. Our approach models the problem as a bin packing problem with some additional constraints.

The bin packing problem, and the closely related cutting stock problem, are NP-hard (Garey & Johnson, 1979). There are numerous industry examples of these problems, where the items to be packed or cut may be one-, two- or three-dimensional objects. They consist of assigning a set of small items with known weight, or size and shape, to the minimum number of bins. Bins may be homogeneous, and therefore have the same capacity, or heterogeneous. For the problem tackled here, items have more than one capacity requirement; for example, a passenger in a helicopter will require a certain weight capacity and a seat to be available. In this case, the problem is denoted vector bin packing or multi-capacity bin packing and is well known in the literature, not only for cutting and packing problems, but also for job scheduling (Leinberger et al., 1999) and other applications.

Vector bin packing problems often consider a multiple choice constraint. This constraint models the situation where an item can be placed in a bin in more than one way. In this case the item will have various incarnations, which are different vectors of requirements, of which only one has to be met. For example, in our helicopter loading application, most items can either be placed underneath the helicopter hanging by hooks, known as underslung, or placed inside the fuselage. If they are underslung, they will require hooks, whereas if they are placed inside the cabin they may reduce the number of available seats due to the physical space needed for the item. Furthermore, certain types of items have special placement rules. For instance, troops cannot be underslung and chemicals cannot share the same space as food supplies. These constraints allow us to model realistic situations, where the solutions provide an accurate starting point for operational planning.

Since the model intends to inform strategic investment in a helicopter fleet, it is necessary to find a set of solutions that includes all efficient combinations of types of aircraft.

While there may be a solution that gives the minimum number of helicopters for a certain mission, when taking into account the full pool of missions an alternative combination of helicopters may give a better solution for the entire fleet planning problem. As a result, our model identifies all the possible configurations of helicopters on the Pareto frontier. Finally, a key determinant of the distance a helicopter can fly is the weight it is carrying. With this in mind, it is important to maximise the flight range of the fleet by evenly distributing the cargo between aircraft. These two properties of the problem suggest a multi-objective formulation, arising from minimising across multiple bin types, for which all the non-dominated solutions are needed, and maximising the distance. According to Dstl, minimising the number of helicopters is their primary objective, therefore distance is set as a secondary objective.

This is a new problem which, to the best of our knowledge, has not been studied in the literature before. In this work, we present three solution approaches: an exact model, a simple construction heuristic and a metaheuristic. Our exact approach is a mixed integer linear program which is able to find the optimal solution in reasonable time for small instances. For large instances, we propose a very fast heuristic algorithm, adapted from First Fit Decreasing, that constructs a single solution, and a genetic algorithm that uses the construction heuristic and is able to explore the solution space more widely.

There is a significant body of literature on cutting and packing, but no paper that addresses this specific problem. While we consider multiple capacity requirements, we do not consider the physical dimensions of the items. As a result, we classify the problem as a one-dimensional bin packing problem. Classical one-dimensional bin packing problems are widely studied in the literature. They have been addressed using exact methods, where for example Martello & Toth (1990a) propose a branch-and-bound technique and de Carvalho (1999) presents an efficient branch-and-price. Furthermore, heuristics have also been successfully applied to bin packing. For example, Coffman et al. (1984) propose approximation algorithms and Fleszar & Hindi (2002) solve the problem using metaheuristics. The natural extension of one-dimensional problems is multi-capacity and vector bin packing, which has also attracted a lot of attention from researchers, since it is a common problem in industry. Exact approaches to solve the problem with two capacities, known as two-dimensional vector bin packing or 2-DVBP, are given by Spieksma (1994) and Caprara & Toth (2001). In addition, Kellerer & Kotov (2003) and Shachnai & Tamir (2012) developed approximation algorithms for this problem. However, due to its complexity, this problem is often solved by metaheuristics, see for example Dahmani et al. (2014) and Dahmani et al. (2013). One interesting version of this problem that generated a lot of research interest was proposed by Google in the ROADEF / EURO 2012 challenge, where a machine reassignment problem was modelled as a vector bin packing problem with various special constraints, including conflict constraints for jobs. The best results in this challenge were obtained by the sophisticated local search of Gavranović & Buljubašić (2014) and Gavranović et al.

(2012). Other techniques producing competitive results include MILP-based metaheuristics (Jaśkowski et al., 2015) and hyper-heuristics (Hoffmann et al., 2015).

Multiple choice is a well known constraint in multidimensional bin packing problems. It has recently been applied to vector bin packing problems by Patt-Shamir & Rawitz (2012). Before that, it had been thoroughly investigated for the closely related multi-dimensional knapsack problem. Examples of this include Mostofa Akbar et al. (2006), where the problem is solved by a convex hull based heuristic approach, and Sbihi (2006), who presents a branch-and-bound algorithm able to find the optimal solution. In Cherfi & Hifi (2008) a column generation procedure is applied to the problem, which is able to find optimal solutions for some instances in the literature in reasonable computational times.

The usual objective of bin packing is to minimise the number of bins used; however, the need to include an additional objective often arises in practice. In particular, when considering transportation applications, balancing the load improves fuel efficiency and can also extend the autonomy of the vehicles. This is taken into account in Liu et al. (2008), where a particle swarm optimisation algorithm is proposed. The concept of Pareto efficiency is also frequently studied when the two objectives are in conflict and many solutions should be explored. An example of this is the two-dimensional vector bin packing problem studied in Dahmani et al. (2013), which considers a hard placement constraint and a soft placement constraint, where the objectives are to minimise the bins used while fulfilling the soft constraint as much as possible. Zhou et al. (2011) point out in their state-of-the-art review that evolutionary algorithms have the advantage of being able to approximate the Pareto frontier in only one run.

The remainder of the chapter is organised as follows. In Section 4.2 we provide a formal problem description. Section 4.2.2 describes the MILP, and Sections 4.2.3 and 4.2.6 describe the heuristic and metaheuristic approaches respectively. Section 4.2.7 describes an iterative approach to find all possible configurations of bin types in a solution. Section 4.3 provides details of the implementation of the algorithms, the results and analysis, and finally Section 4.4 summarises the contributions of this work.

4.2 Problem description

As described in the introduction, the problem at hand is to decide how many of each type of helicopter should make up a given fleet. An important input into this decision is the minimum number of helicopters needed to successfully complete a range of missions. Our focus is solving this component of the problem, which we model as a bin packing problem. This section provides specific details of the bin packing problem.

Bin packing The problem we are solving is a multi-objective vector bin packing problem, with multiple choice constraints, special placement constraints and multiple capacity constraints. According to the typology of cutting and packing problems of Wäscher et al. (2007), the bin packing problem itself is a multiple bin size bin packing problem (MBSBPP). The MBSBPP consists of assigning a collection of n small items to a set of bins while respecting the placement and capacity constraints. There are multiple bin types with different capacities, and several bins of each type may be selected in a solution. The objective is to minimise the number of bins. Let B be the set of bin types available; we assume that there is an infinite number of each type, allowing solutions that may use only one bin type. The problem studied in this chapter considers three capacities for each bin type j ∈ B. These are the weight capacity, Wj, the number of seats available, Sj, and the number of hooks for underslung items, Hj.

Let I be the set of small item types for a given mission. Each item type i ∈ I has a specific demand di, and a requirement of wi units of weight, si seats and hi hooks. To guarantee that the problem can be solved with any type of bin, we impose the restriction wi ≤ Wj, si ≤ Sj and hi ≤ Hj for all i ∈ I and j ∈ B, i.e., each item can fit in any bin. Note that some items can be placed without using any seats or hooks, since the helicopters include some cargo space.

According to Dstl, the weight is the critical constraint and it is rare to carry cargo that will fill the physical space in the helicopter without exceeding the weight. Moreover, our investigation identified that most instances are limited by the weight rather than the number of seats or the number of hooks. This fact is exploited by the genetic algorithm described in Section 4.2.6, and allows us to obtain more accurate lower bounds.

Placement constraints Items can be placed inside the aircraft, in which case no hooks are required, or underneath, in which case no seats are required. Depending on the item type, the item may be restricted to be inside the aircraft or outside. If items can be placed either inside or underneath, this leads to multiple choice capacity requirements. In particular, we classify the items into three types:

• Inside only - These are items such as passengers which cannot be underslung. We denote these items by I^1 ⊆ I.

• Outside only - Typically large items, such as vehicles, that are not suitable to go inside the aircraft. We denote these items by I^2 ⊆ I.

• Double placement items - Medium / large size items, such as quad bikes or large boxes, that if placed inside may diminish the seat capacity and if underslung will use one or several hooks. These items can be modelled as two different items

with different requirements, while including a constraint that only allows one of them to be assigned to an aircraft in the final solution. We denote these items by I^0 ⊆ I.

Note that I = I^0 ∪ I^1 ∪ I^2. The double placement items (I^0) introduce a decision element to the problem, which makes it substantially different from the MBSBPP.

Furthermore, some missions might require the transportation of diverse cargo that cannot be placed in the same space. This could be for health and safety reasons or for strategic reasons. This type of constraint is called a conflict constraint and ensures that certain item types do not share the same bin, regardless of their position (inside or underslung). For each item i ∈ I we define a set, Î_i, that contains the item types that cannot be placed in the same space as item i.

Objectives The usual objective in bin packing problems is to minimise the total cost of the bins, which is calculated as the sum of the number of bins multiplied by a weight for each bin type. Typically the weight is associated with the bin's size. However, given the nature of our application, the value of the bins cannot be determined in advance due to external factors, such as the location of the mission, the weather, or the demands from other activities. Therefore, it is necessary to search the Pareto frontier and find all the non-dominated solutions; this way, when the bin cost is known, it is possible to retrieve the best configuration for the mission.

A feasible solution is given by the number of each bin type in the solution, u = (u_1, . . . , u_{|B|}), and a set of bins b_{jk} = {n_{1jk}, . . . , n_{|I|jk}, n̄_{1jk}, . . . , n̄_{|I|jk}}, j ∈ B, u_j > 0, k ∈ {1, . . . , u_j}, where n_{ijk} (n̄_{ijk}) represents the number of items of type i ∈ I assigned to the inside (underneath) of bin k of type j.

The dominance criterion between solutions is established as follows. Let u and v be two feasible solutions. We say that u dominates v if the following conditions are satisfied (a direct implementation of this test is sketched after the list).

• u_j ≤ v_j, ∀j ∈ B

• There is one bin type j′ ∈ B such that u_{j′} < v_{j′}
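This criterion translates directly into code; a minimal sketch, assuming the solutions are given as vectors of bin counts indexed by bin type:

```python
def dominates(u, v):
    """Pareto dominance between two fleet vectors u and v (number of bins
    of each type): u dominates v if it uses no more bins of any type and
    strictly fewer bins of at least one type."""
    return (all(uj <= vj for uj, vj in zip(u, v))
            and any(uj < vj for uj, vj in zip(u, v)))
```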

Once the minimal set of bins u is found for each non-dominated solution, the secondary objective is to find a placement of the cargo which ensures the maximal operational capability of the fleet. The operational capability of a fleet is defined by the minimum of the maximum flight distances of the helicopters in the fleet. The

distance is calculated as a function of the weight ω loaded into the aircraft, f_j(ω), j ∈ B. This is a piecewise linear function, given for each helicopter type j by

f_j(\omega) = \begin{cases} \alpha_j^1 \omega + \beta_j^1, & \omega \le W_j - F_j \\ \alpha_j^2 \omega + \beta_j^2, & \omega > W_j - F_j \end{cases}

where the coefficients α_j^1, β_j^1, α_j^2 and β_j^2 are given by the helicopter type and F_j denotes the maximum weight of fuel that can be loaded into a helicopter of type j, which is always less than half of the total weight capacity. The performance function for three typical types of aircraft is shown in Figure 4.1.
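The function f_j is straightforward to evaluate; a direct transcription follows, with all coefficients passed in as parameters since the real values are given by the helicopter type:

```python
def flight_range(weight, alpha1, beta1, alpha2, beta2, W, F):
    """Piecewise linear flight range f_j(w): the second, steeper segment
    applies once the load forces a reduced fuel load (w > W - F)."""
    if weight <= W - F:
        return alpha1 * weight + beta1
    return alpha2 * weight + beta2
```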

Figure 4.1: Performance graph (flight range in km against loaded weight in kg) for three typical types of aircraft: Puma, Chinook and Merlin.

The distance the helicopter can fly decreases as more weight is loaded into it. Once the loaded weight reaches the threshold beyond which the helicopter cannot be fully loaded with fuel, the flight range decreases more quickly. This is a simplified model of the flight range, which does not take into account local atmospheric conditions or other factors relevant at the time of flying, but it captures the most important information available at the time of designing the strategic plan.

In the following sections, we describe three different approaches to solve the helicopter loading problem: one is exact and directly solves the mathematical formulation of the problem, the second is a simple single-pass constructive algorithm (CA), and the third employs a genetic algorithm (GA). Recall that our aim is to find the Pareto set of solutions across multiple bin types. While the GA can achieve this directly, the MILP and CA require a different approach. For these approaches we solve a more constrained version of the problem designed to identify one of the non-dominated solutions, and we iteratively solve the problem varying the constraints between iterations. In Section 4.2.7 we show that solving this problem iteratively is sufficient to find all non-dominated solutions of the problem.

For all approaches we sort the bins in size order from largest to smallest and fix the number of bins for all types other than bin type one. These fixed values are denoted by u_2, . . . , u_B. Based on these values, the approaches will try to minimise the number of bins of type one (u_1).

4.2.1 Bounds

In order to set the upper bound for the exact procedure we use the CA described in Section 4.2.3.

We propose two lower bound procedures:

• Weight linear relaxation - The first lower bound is trivial. It is given by the weight of the items. First, we calculate the weight of the items that helicopters

u2, . . . , uB can carry and subtract this value from the total weight of all the items. The remaining weight is the weight that needs to be assigned to bins of type one. Hence, the bound is simply this quantity divided by the capacity of bin one. The formula for the bound is as follows:

u_1^1 = \max\left\{0, \left\lceil \frac{\sum_{i\in I} w_i d_i - \sum_{j\in B\setminus\{1\}} u_j W_j}{W_1} \right\rceil\right\} \qquad (4.1)

• Hooks and seats relaxation - The availability of hooks and seats may constrain the number of helicopters before the weight constraint is met. Since many items can be placed inside or underneath the helicopter, the lower bound for hooks and seats is slightly more complex. To calculate a tight lower bound we evaluate the minimum number of seats or hooks needed in helicopter type one. In order to do this we compute the maximum number of seats that can be saved by placing items underneath bins of type j ∈ {2, . . . , |B|} by solving the following knapsack problem,

\max \sum_{i\in I^0} s_i x_i \qquad (4.2)

\text{s.t.} \quad \sum_{i\in I^0} h_i x_i \le \sum_{j\in B\setminus\{1\}} u_j H_j \qquad (4.3)

where x_i takes the value 1 if item i ∈ I^0 is placed underneath any of the helicopters. We denote by S^0 the optimal solution value of this problem. Similarly, we can compute H^0 as the maximum number of hooks we can save by placing items inside the fixed set of helicopters. These two problems can be solved to optimality in pseudo-polynomial time by dynamic programming. For bin type one, which has no fixed number, we compute the maximum number of seats (hooks) we can save by placing items underneath (inside) a single bin, denoted by S̃_1 (H̃_1). These values can be calculated by solving a knapsack problem similar to (4.2)-(4.3), where the right hand side of (4.3) becomes H_1 (S_1). The lower bound for seats (hooks) is the maximum number of seats (hooks) needed to pack I^1 and I^0 items inside (underneath), minus the total seats available in the fixed helicopters, minus

the seats saved by packing I^0 items outside (inside), as follows:

u_1^2 = \max\left\{0, \left\lceil \frac{\sum_{i\in I^0\cup I^1} s_i d_i - \sum_{j\in B\setminus\{1\}} u_j S_j - S^0}{S_1 + \tilde{S}_1} \right\rceil, \left\lceil \frac{\sum_{i\in I^0\cup I^2} h_i d_i - \sum_{j\in B\setminus\{1\}} u_j H_j - H^0}{H_1 + \tilde{H}_1} \right\rceil\right\} \qquad (4.4)

The lower bound used in the model is the maximum of these two, u_1 = \max\{u_1^1, u_1^2\}.
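The weight bound (4.1) is immediate to compute; a sketch, with the instance data passed in as dictionaries (the names are illustrative):

```python
import math

def weight_lower_bound(items, demand, weight, bins, fixed, W):
    """Weight-relaxation bound of equation (4.1): the total item weight
    not coverable by the fixed bins, divided by the capacity of bin type
    one. `demand[i]` and `weight[i]` describe item type i; `fixed[j]` is
    u_j for bin types j > 1; `W[j]` is the weight capacity of type j."""
    total = sum(weight[i] * demand[i] for i in items)
    covered = sum(fixed[j] * W[j] for j in bins if j != 1)
    return max(0, math.ceil((total - covered) / W[1]))
```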

4.2.2 Exact method

In this section we formulate the problem described in Section 4.2 as a mixed integer linear program. There are three groups of variables used in the model: bin usage, item placement and flight distance.

Bin usage variables - Given the upper bound vector ū = (ū_1, ū_2, . . . , ū_{|B|}), we create the following binary variables: y_{11}, . . . , y_{1ū_1}, y_{21}, . . . , y_{2ū_2}, . . . , y_{B1}, . . . , y_{Bū_B}, where y_{jk} = 1 if the k-th bin of type j ∈ B is used in the solution and 0 otherwise. Since the solution must use all u_j bins of type j when j > 1, we fix y_{jk} = 1 for j > 1 and k ≤ u_j, and y_{jk} = 0 for j > 1 and k > u_j. For our problem, we could remove these fixed variables from the objective function. However, we include them in the formulation because this makes the model more general, covering for example the traditional vector bin packing problem where the bins are assigned different costs.

Item placement variables - For each y_{jk}, we add 2|I| integer variables, namely q_{ijk} and r_{ijk}, whose values indicate how many items of type i ∈ I are placed inside (q_{ijk}) or underneath (r_{ijk}) bin k of type j ∈ B. For item types i ∈ I that cannot be placed in the same bin as at least one other item type, given in the set Î_i, we define a binary variable p_{ijk} which takes the value 1 if any item of type i is placed inside or underneath bin k of type j, and 0 otherwise.

Distance related variables - The flight range of a solution is the minimum of the maximum distance the helicopters in the fleet can travel, represented in the model by the variable δ ∈ R. The maximum distance any helicopter in a fleet could travel with no load is denoted by D,

D = \max_{j\in B} f_j(0) \qquad (4.5)

Recall that the distance function for each helicopter type is piecewise linear (f_j). In order to model f_j, for each variable y_{jk} we add a binary variable z_{jk}, which takes value 1 if the weight loaded in the bin indicated by y_{jk} is larger than W_j − F_j and 0 otherwise.

The helicopter loading problem can be formulated as follows:

\text{minimise} \quad \sum_{j\in B} \sum_{k=1}^{u_j} y_{jk} - \frac{\delta}{D} \qquad (4.6)

s.t.

\delta \le \alpha_j^1 \sum_{i\in I} (q_{ijk} + r_{ijk}) w_i + \beta_j^1 + z_{jk} M + (1 - y_{jk}) M, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.7)

\delta \le \alpha_j^2 \sum_{i\in I} (q_{ijk} + r_{ijk}) w_i + \beta_j^2 + (1 - z_{jk}) M + (1 - y_{jk}) M, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.8)

z_{jk} \le \frac{\sum_{i\in I} (q_{ijk} + r_{ijk}) w_i}{W_j - F_j}, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.9)

z_{jk} \ge \frac{\sum_{i\in I} (q_{ijk} + r_{ijk}) w_i}{W_j - F_j} - 1, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.10)

\sum_{i\in I} (q_{ijk} + r_{ijk}) w_i \le W_j, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.11)

\sum_{i\in I} q_{ijk} s_i \le S_j, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.12)

\sum_{i\in I} r_{ijk} h_i \le H_j, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.13)

\sum_{j\in B} \sum_{k=1}^{u_j} (q_{ijk} + r_{ijk}) = d_i, \quad i \in I \qquad (4.14)

y_{jk} - \frac{q_{ijk} + r_{ijk}}{d_i} \ge 0, \quad j \in B,\; k = 1, \dots, u_j,\; i \in I \qquad (4.15)

\sum_{i\in I} (q_{ijk} + r_{ijk}) w_i - \sum_{i\in I} (q_{ij(k+1)} + r_{ij(k+1)}) w_i \le 0, \quad j \in B,\; k = 1, \dots, u_j - 1 \qquad (4.16)

q_{ijk} = 0, \quad i \in I^2,\; j \in B,\; k = 1, \dots, u_j \qquad (4.17)

r_{ijk} = 0, \quad i \in I^1,\; j \in B,\; k = 1, \dots, u_j \qquad (4.18)

p_{ijk} + p_{i'jk} \le 1, \quad i \in I,\; i' \in \hat{I}_i,\; j \in B,\; k = 1, \dots, u_j \qquad (4.19)

p_{ijk} \ge \frac{q_{ijk} + r_{ijk}}{d_i}, \quad i \in I,\; j \in B,\; k = 1, \dots, u_j \qquad (4.20)

q_{ijk}, r_{ijk} \in \mathbb{N}, \quad i \in I,\; j \in B,\; k = 1, \dots, u_j \qquad (4.21)

p_{ijk} \in \{0, 1\}, \quad i \in I,\; j \in B,\; k = 1, \dots, u_j \qquad (4.22)

z_{jk}, y_{jk} \in \{0, 1\}, \quad j \in B,\; k = 1, \dots, u_j \qquad (4.23)

\delta \ge 0, \quad \delta \in \mathbb{R} \qquad (4.24)

The first part of the objective function, \sum_{j\in B} \sum_{k=1}^{u_j} y_{jk}, in equation (4.6) minimises the number of helicopters loaded with items, while the second part, −δ/D, is at its minimum when the flight distance δ is maximised. By definition, δ < D and therefore δ/D < 1. Since the first part is integer, this results in a lexicographic objective function: it will always prefer a solution with fewer bins and, among the solutions with the same number of bins, it will prefer the solution that maximises the distance.

Constraints (4.7) and (4.8) use big-M constants to activate one of these two inequalities and deactivate the other. These reflect the two parts of the flight distance graph f_j that bound the total distance the fleet can travel according to the weight on each helicopter. Since these constraints are related to distance, a valid and tight choice is M = D, which is what we use in our computational experiments. Inequalities (4.9) and (4.10) set

the value of the binary variable z_{jk} so that it reflects the correct segment of the function f_j. Constraints (4.11), (4.12) and (4.13) restrict the weight, seats and hooks, respectively, assigned to each helicopter to be within capacity. Equalities (4.14) force the solution

to meet the demand for all item types. Constraints (4.15) set y_{jk} = 1 if any item is assigned to helicopter jk, thereby counting that helicopter in the objective function.

If no items are assigned to helicopter jk, y_{jk} will be set to zero in order to minimise the objective function. In order to avoid symmetric solutions we use inequalities (4.16), which order the use of bins of a given type by weight. Equalities (4.17) and (4.18) eliminate the variables associated with items that cannot be placed inside or underneath the bins. Finally, inequalities (4.19) and (4.20) prevent the placement of

item i in the same bin as any item in its incompatible set Î_i.

The complexity of this model increases with the number of item types present in the instance and with the number of bins in the solution. However, if we provide the solver with a reasonably tight upper bound, it is possible to solve medium size instances to optimality using the CPLEX solver. See Section 4.3 for detailed results.
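For illustration, the following sketch builds the core packing part of the model — capacities (4.11)-(4.13), demand (4.14) and the bin-activation link (4.15) — for bins of type one only, omitting the distance, conflict and symmetry-breaking constraints; it assumes the open-source PuLP library rather than CPLEX, and all names are our own.

```python
import pulp

def core_model(I, d, w, s, h, W1, S1, H1, n_bins):
    """Core packing constraints for `n_bins` candidate bins of type one.
    `I` is a list of item type identifiers; `d`, `w`, `s`, `h` map each
    type to its demand, weight, seats and hooks."""
    m = pulp.LpProblem("helicopter_core", pulp.LpMinimize)
    K = range(n_bins)
    q = pulp.LpVariable.dicts("q", (I, K), lowBound=0, cat="Integer")  # inside
    r = pulp.LpVariable.dicts("r", (I, K), lowBound=0, cat="Integer")  # underslung
    y = pulp.LpVariable.dicts("y", K, cat="Binary")                    # bin used
    m += pulp.lpSum(y[k] for k in K)  # minimise the helicopters used
    for k in K:
        m += pulp.lpSum((q[i][k] + r[i][k]) * w[i] for i in I) <= W1  # (4.11)
        m += pulp.lpSum(q[i][k] * s[i] for i in I) <= S1              # (4.12)
        m += pulp.lpSum(r[i][k] * h[i] for i in I) <= H1              # (4.13)
        for i in I:
            m += q[i][k] + r[i][k] <= d[i] * y[k]  # activation link (4.15)
    for i in I:
        m += pulp.lpSum(q[i][k] + r[i][k] for k in K) == d[i]  # demand (4.14)
    m.solve()
    return m
```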

4.2.3 Constructive

The constructive algorithm (CA) is an adaptation of the well-known First Fit algorithm. The input of the algorithm is the number of bins available of each type except bin type one, i.e. u_j for j ∈ B \ {1}. The algorithm also requires as input a permutation of these \sum_{j\in B\setminus\{1\}} u_j bins, a permutation of the items and a placement rule for items in I^0, which are the items that can be placed inside or underneath. In order to obtain good diversification when building solutions we use multiple random permutations of the items, and we sort the bins by non-increasing weight capacity.

The algorithm works as follows. Taking items in permutation order, iteratively add each item to the first bin in the permutation that can accommodate it while respecting all the capacity constraints. If the item does not fit in any of the bins in the permutation, then add a new bin of type one to the permutation. Note that all bins stay open until all items are packed. Since we assume that all the items can fit in any bin type, we can always build a feasible solution. The decision of where an item should be placed (inside or underneath) depends on the placement rules described next.
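The first-fit skeleton of the CA can be written compactly, with the capacity test, the placement rule and the creation of new type-one bins abstracted as callbacks (all hypothetical names):

```python
def construct(items, bins, fits, place, new_bin):
    """First-fit skeleton of the CA: take items in permutation order and
    put each into the first open bin that can accommodate it (the `place`
    rule decides inside vs. underneath), opening a new bin of type one
    when no open bin fits."""
    for item in items:
        for b in bins:
            if fits(item, b):
                place(item, b)
                break
        else:  # no open bin fits: open a new bin of type one
            b = new_bin()
            bins.append(b)
            place(item, b)
    return bins
```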

4.2.4 Placement rules

For items in I^0, if the first bin that can accommodate an item has capacity to place it either underneath or inside, the algorithm decides the position according to one of the following rules.

• Random - Randomly assign the item to either position with the same probability.

• Fixed inside (underneath) - Consistently assign items inside (underneath).

• Strict fixed inside (underneath) - Assign all I^0 items inside (underneath), ignoring all feasible placements underneath (inside). Despite the fact that this assignment is likely to produce worse solutions, this procedure increases the diversity of solutions. For example, some items infrequently find feasible placements either inside or underneath due to the greediness of the other placement rules.

• Bin-biased - This rule looks at the relative usage of the seats and hooks of the bin in which item i can be placed, if it were placed inside or underneath. The relative seat utilisation of bin b is given by:

U_s = \frac{\tilde{S}_b}{S_b} \qquad (4.25)

where S̃_b denotes the number of seats being used after placing i inside bin b and S_b represents the total number of seats in b. Similarly, the relative utilisation of hooks is given by:

U_h = \frac{\tilde{H}_b}{H_b} \qquad (4.26)

where H̃_b denotes the number of hooks being used after placing i underneath bin b and H_b represents the total number of hooks in b. We place item i inside if U_s ≤ U_h; otherwise it is placed underneath.

• Solution-biased - This rule considers the current partial solution and the list of unplaced items. Let I_u^0 be the set of unplaced items that can be placed either inside or underneath, I_u^1 the unplaced items that can be placed inside only, and I_u^2 the unplaced items that can be placed underneath only. Let S̃_{jk} (H̃_{jk}) be the number of seats (hooks) available in the current bin permutation. We compute an estimate of the number of seats needed to place all the items, assuming that items in I_u^0 are always placed inside the bins, as follows:

S_e = \frac{\sum_{i\in I_u^0} s_i + \sum_{j\in I_u^1} s_j}{2} - \sum_{k\in B} \sum_{l=1}^{u_k} \tilde{S}_{kl}. \qquad (4.27)

Similarly, we compute an estimate of the number of hooks needed, assuming that items in I_u^0 are always placed underneath the bins,

H_e = \frac{\sum_{i\in I_u^0} h_i + \sum_{j\in I_u^2} h_j}{2} - \sum_{k\in B} \sum_{l=1}^{u_k} \tilde{H}_{kl}. \qquad (4.28)

Then, we place the item inside the bin if the following condition is satisfied, and otherwise we place it underneath, where S_1 and H_1 are the number of seats and hooks in bin type one:

\frac{S_e}{S_1} \le \frac{H_e}{H_1} \qquad (4.29)

Dividing the lower bounds on seats and hooks by S_1 and H_1 respectively provides an estimate of the number of helicopters of type one needed. Note that when there is sufficient capacity in the fixed helicopters, these numbers may become negative. However, this condition still provides a sensible decision rule, since we will place the item in such a way that it is less likely to need an extra bin in the current solution. Note that this rule can also be used to decide which item to place next. If the estimate of the helicopters needed in terms of seats is the largest, an item with the largest requirement of seats will be placed next and, if possible, it will be placed outside. A similar estimate can be made for weight, and, with the three estimates, one can alternate between lists ordered by seats, hooks and weight to decide the next items.

4.2.5 Distance balance heuristics

Due to the greedy nature of the CA, the resulting solutions typically have some bins heavily loaded and some bins almost empty. The flight range of this kind of solution is very poor, as heavily loaded bins have a short flight range, limiting the range of the whole fleet. To overcome this, we use a simple distance balance heuristic, which is a version of the well-known load balance heuristic. The algorithm sorts the bins by flight range and aims to move items from the bin with the shortest flight range to the bin with the longest flight range. Considering items in the first bin, starting from the heaviest, it attempts to move an item to the last bin (or swap it with a lighter item). The first move or swap that improves the overall flight range is accepted. Once all the possible moves from the first bin to the last have been tried, the algorithm re-sorts the bins by flight range and repeats the process until no more moves are possible. Note that in order to perform these moves, double placement items can change their position to accommodate new items. A sketch of the balancing loop follows.
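A minimal sketch of this balancing loop is given below, under simplifying assumptions: bins are plain lists of item weights, the range(load) function is an illustrative stand-in for the real flight range model, and capacity checks and the repositioning of double placement items are omitted. All names are assumptions.

#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative flight range model: range decreases with carried weight.
double range(double load) { return 1000.0 - 0.05 * load; }

double load(const std::vector<double>& bin) {
    double s = 0.0;
    for (double w : bin) s += w;
    return s;
}

// Repeatedly try to move the heaviest items from the shortest-range bin
// to the longest-range bin, accepting the first improving move.
void distanceBalance(std::vector<std::vector<double>>& bins) {
    bool improved = true;
    while (improved && bins.size() >= 2) {
        improved = false;
        // Shortest flight range (largest load) first.
        std::sort(bins.begin(), bins.end(),
                  [](const std::vector<double>& a, const std::vector<double>& b) {
                      return load(a) > load(b);
                  });
        std::vector<double>& worst = bins.front();
        std::vector<double>& best  = bins.back();
        std::sort(worst.begin(), worst.end(), std::greater<double>());
        for (std::size_t i = 0; i < worst.size(); ++i) {
            double before = std::min(range(load(worst)), range(load(best)));
            double after  = std::min(range(load(worst) - worst[i]),
                                     range(load(best) + worst[i]));
            if (after > before) {                 // first improving move
                best.push_back(worst[i]);
                worst.erase(worst.begin() + static_cast<std::ptrdiff_t>(i));
                improved = true;
                break;
            }
        }
    }
}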

While the constructive algorithm requires little computational effort, the quality of its solutions is highly variable, as the item permutation plays a crucial role in the final solution. Furthermore, regardless of the placement rule for I0 items, the items are placed in a greedy manner, which is often detrimental to the final solution. Due to the large number of combinations of rules and permutations that can be generated for a relatively large list of items, and the need to search a wider solution space, we have developed a genetic algorithm (GA), which is presented in the next section.

4.2.6 Genetic Algorithm

In this section we describe a genetic algorithm (GA) for the helicopter loading problem. While the MILP model presented in Section 4.2.2 can solve some problem instances to optimality, as the number of items and helicopter types grows this approach becomes less practical. Moreover, the constructive algorithm (CA) is unlikely to provide consistently good quality solutions. As a result, it makes sense to consider a metaheuristic with more effective search capability. GAs are a well established optimisation approach and have proved effective across a wide variety of problem domains. For a detailed tutorial on GAs see Whitley (1994). GAs are a type of evolutionary algorithm. Due to their versatility, they have been used for a variety of complex packing problems; see for example Bennell et al. (2013), who develop a multi-crossover GA for 2D packing with due dates, Ikonen et al. (1997) for an early 3D packing application, or Gonçalves & Resende (2013) for an example of a biased random key GA applied to 2D and 3D bin packing. According to Konak et al. (2006), an advantage of GAs is that they are especially suitable for approximating the Pareto frontier in one run.

In the following sections we go through the design details of the GA. First we describe the chromosome representation in Section 4.2.6.1, followed by the crossover and mutation operators, including the selection approach, in Sections 4.2.6.2 and 4.2.6.3 respectively. Section 4.2.6.4 gives details of the initial population and Section 4.2.6.5 defines the fitness function. Finally, Sections 4.2.6.6 and 4.2.6.7 provide full details of the algorithm and the parameters we use in our experiments. Note that the parameters arose from initial experimentation and that the distance balance heuristic is the same as the approach described in Section 4.2.5.

4.2.6.1 Chromosome representation

The chromosome does not follow a traditional binary or permutation structure that lends itself to the many common crossover operators. Instead, a chromosome C contains a list of bins and their types, alongside an assignment of the items to the interior or exterior of each bin. This assignment is always feasible. The chromosome also includes one of the placement rules defined in Section 4.2.4, which dictates how I0 items are placed in the solution. The placement rule is passed on to offspring through crossovers. A sketch of this representation is given below.
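One possible C++ encoding of this chromosome is sketched below; the structure and all names are assumptions for illustration rather than the exact thesis implementation.

#include <vector>

// The placement rules of Section 4.2.4.
enum class PlacementRule { Random, FixedInside, FixedUnderneath,
                           StrictInside, StrictUnderneath,
                           BinBiased, SolutionBiased };

// One gene per bin: its type plus the items assigned to its interior
// (consuming seats) and exterior (consuming hooks).
struct BinGene {
    int type;
    std::vector<int> inside;
    std::vector<int> underneath;
};

// A chromosome is a feasible list of bins plus the placement rule that
// dictates how I0 items are placed; the rule is inherited by offspring.
struct Chromosome {
    std::vector<BinGene> bins;
    PlacementRule rule;
    double fitness = 0.0;   // solution-wise fitness, equation (4.34)
};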

4.2.6.2 Crossover operator

The crossover is a binary operator that combines two chromosome solutions to create one or more offspring. The population is divided into non-dominated and dominated solutions, and sorted according to their fitness (see Section 4.2.6.5). The crossover chooses the parents such that all the fittest non-dominated chromosomes take part in the crossover. The number of offspring is the same as the number of bin types in the instance. Each offspring maintains the same number of bins of each type as the non-dominated parent, except for one: the type that the operator tries to reduce.

We propose a non-standard crossover based on a two-level fitness. On the one hand we have the standard solution-wise fitness; on the other hand we have a bin-wise fitness, which is used to determine which bins of a chromosome are more efficiently packed and should be reused in subsequent generations. A formal definition of bin fitness is given in equation (4.30).

In the crossover operator, the generated offspring inherit bins iteratively from each of the parents, where bins from the fittest parent have a slightly higher probability of being chosen (δe) and the inherited bin is the fittest of that parent's bins. During the process, the operator ensures that the required configuration of bins in the offspring is maintained, by forbidding some additions. After each bin is added to the offspring from one of the parents, the items in that bin are removed from the bins of the other parent, thereby ensuring that both parents have the same set of items remaining. If these items appear in more than one of the remaining bins, they are always removed from the one with the worst fitness.

Clearly, as bins are removed from each parent, the quality of the remaining bins decreases. Hence, after both parents have only a certain number of bins left, say nb, the process stops and the remaining items are placed using a local search heuristic followed by the CA described in section 4.2.3. The local search tries to improve the utilisation of the bins passed to the offspring by attempting to swap the unallocated items with lighter items currently contained in the bins of the offspring. Once no more swaps are possible, we use the CA to place any remaining unallocated items using the placement rule from one of the parents. If the CA needs to add new bins to the offspring, their type is the one being minimised in that particular offspring.

The bin fitness for each bin in each chromosome is intended to measure the impact that particular bin has on the total placement requirements of the complete instance.

The fitness of bin k of type j ∈ B, k ∈ {1, . . . , uj}, of chromosome C is given by:

\[ \phi(j, k, C) = \mu_w \sum_{i \in I} \frac{(q_{ijk} + r_{ijk})\, w_i}{W_j} + \mu_s \sum_{i \in I} \frac{(q_{ijk} + r_{ijk})\, s_i}{S_j} + \mu_h \sum_{i \in I} \frac{(q_{ijk} + r_{ijk})\, h_i}{H_j} \qquad (4.30) \]

The coefficients µw, µs and µh determine the importance of each of the capacities in this instance and are given by:

\[ \mu_w = 1 - \frac{W_j}{T_w} \qquad (4.31) \]
\[ \mu_s = 1 - \frac{S_j}{T_s} \qquad (4.32) \]
\[ \mu_h = 1 - \frac{H_j}{T_h} \qquad (4.33) \]

where $T_w$, $T_s$ and $T_h$ are respectively the total amount of weight, seats and hooks available in the solution. This bin fitness ensures that the algorithm promotes bins carrying items that have a greater impact on the overall solution and encourages them to be placed in the best possible position. Note that the seat and hook requirements of I0 items are both counted, regardless of whether the items are placed inside or underneath. A minimal sketch of this computation follows.
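Equations (4.30)-(4.33) translate directly into code. The sketch below assumes the per-bin loads have already been aggregated over the items; all names are illustrative.

// Sketch of the bin fitness (4.30)-(4.33). BinType holds the capacities
// of type j; loadedW/S/H aggregate (q_ijk + r_ijk) * w_i / s_i / h_i over
// the items of bin k of type j. All names are illustrative.
struct BinType { double W, S, H; };

double binFitness(const BinType& t,
                  double totalW, double totalS, double totalH,   // T_w, T_s, T_h
                  double loadedW, double loadedS, double loadedH) {
    double muW = 1.0 - t.W / totalW;   // (4.31)
    double muS = 1.0 - t.S / totalS;   // (4.32)
    double muH = 1.0 - t.H / totalH;   // (4.33)
    return muW * loadedW / t.W         // (4.30)
         + muS * loadedS / t.S
         + muH * loadedH / t.H;
}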

Figure 4.2 illustrates the core part of the crossover process where nb = 2. Here there is an elite and a non-elite parent, the elite chromosomes being those with the highest fitness in the population. In the first step the elite parent is selected, its best bin is passed to the offspring, and the corresponding items are removed from the non-elite parent. In the second step the best bin from the non-elite parent is passed to the offspring and its items are removed from the elite parent. In the third step the elite parent is selected and, of its three bins each containing one item, one of the two identical bins with the larger item is passed to the offspring. Removing this item from the non-elite parent leaves both parents with nb bins. In this case local search cannot improve the allocation. The remaining items are placed in the third bin using the CA.

Figure 4.2: Crossover example where rows represent the bins and the shaded blocks are the items. The elite parent uses placement rule 'Fixed inside', the non-elite parent uses 'Biased', and the offspring inherits 'Fixed inside'; the last items of the offspring are placed by the constructive algorithm.

The full process is described in Algorithm 1.

Algorithm 1 Crossover

1: Given two chromosomes (parents), Ce and Cm:
2: Generate an empty offspring, Co
3: while at least nb bins remain in each parent do
4:     With probability δe select Ce as the giving parent
5:     With probability 1 − δe select Cm as the giving parent
6:     B: best bin from the giving parent
7:     Add B to Co
8:     Remove B from the selected parent
9:     Remove the items contained in B from the worst bins of the other parent
10:        (if more than one choice is available, remove from where more relative bin capacity is released)
11: end while
12:
13: Let I be the set of items to be placed                  ▷ Start local search
14: for i ∈ I do
15:     for j = 1, . . . , |B| do
16:         for k = 1, . . . , uj do
17:             Swap i with one lighter item of bin k of type j, accept if feasible
18:         end for
19:     end for
20: end for
21:
22:                                                          ▷ Place last items
23: With probability δe, pass to Co the placement rule from Ce, otherwise from Cm
24: Sort the remaining items in I by decreasing weight and add them to Co using the constructive algorithm with the placement rule of Co

4.2.6.3 Mutation

It is common to use a mutation operator to increase the diversity of the population. In our case, we do not mutate existing solutions. Instead we generate new chromosomes using the constructive algorithm with random item orders, placement rules and bin types, and add these to the next generation. This way we ensure that new placement rules and layouts are constantly being added, regardless of their quality.

4.2.6.4 Initial population

The initial population of the GA has two parts. The first part contains an approximation of the Pareto frontier given by the CA run with a variety of placement rules. This allows us to add as many non-dominated solutions as possible to the population, which will then be improved by the genetic algorithm. The remaining chromosomes are also generated by the CA, with a random placement rule, random assortments of bins and a random permutation of the item order. This provides greater diversity and supports the GA in exploring different parts of the solution space.

4.2.6.5 Fitness function

The objective of the MILP and the CA is to minimise the number of type one bins given a set of fixed bins. The GA is designed to find all the non-dominated solutions across all bin types; hence here the objective is to make the most efficient use of a combination of bin types. In GA terminology, the fitness of a chromosome is a measure of its solution quality and is the means of identifying the elite solutions. We use the following fitness function, where higher fitness is better, i.e. we maximise fitness:

\[ \Phi(C) = \frac{1}{b+2} \left[ \sum_{j \in B} \sum_{k=1}^{u_j} \left( \frac{\sum_{i \in I_{jk}} w_i}{W_j} \right)^{\!2} + \frac{\sum_{i \in I^1} d_i s_i}{\sum_{j \in B} u_j S_j} + \frac{\sum_{i \in I^2} d_i h_i}{\sum_{j \in B} u_j H_j} \right] \qquad (4.34) \]

Since weight is most often the dominant constraint, the fitness of a chromosome is largely based on the loaded weight of the bins. Note that the first term, $\sum_{j \in B} \sum_{k=1}^{u_j} \bigl( \sum_{i \in I_{jk}} w_i / W_j \bigr)^2$, has a maximal value of b, the total number of bins: at its maximum, all helicopters are carrying their full weight capacity. The second and third terms, $\sum_{i \in I^1} d_i s_i / \sum_{j \in B} u_j S_j$ and $\sum_{i \in I^2} d_i h_i / \sum_{j \in B} u_j H_j$ respectively, are bounded by 1. For example, in the case of seats, this value can be achieved only if the I1 items use all the seats available in all the bins of the current solution and, therefore, no item in I0 is placed inside. Note that these two terms help identify good placements for the items in I0 with respect to where there is spare capacity. Finally, we divide the sum of these terms by b + 2 so all the values are bounded between 0 and 1. Note that the value 1 can be achieved only if all the helicopters are fully used in terms of weight, seats and hooks and I0 = ∅. A sketch of this computation is given below.
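For illustration, (4.34) can be evaluated from pre-aggregated terms, as in the following sketch; the decomposition into arguments is an assumption made for readability.

#include <vector>

// Sketch of the chromosome fitness (4.34). weightRatios holds, for every
// bin, the loaded weight divided by the bin capacity; seatTerm and
// hookTerm are the pre-aggregated second and third terms. Illustrative.
double chromosomeFitness(const std::vector<double>& weightRatios,
                         double seatTerm, double hookTerm) {
    double sumSq = 0.0;
    for (double r : weightRatios) sumSq += r * r;   // squaring rewards full bins
    std::size_t b = weightRatios.size();            // total number of bins
    return (sumSq + seatTerm + hookTerm) / (b + 2); // bounded in [0, 1]
}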

4.2.6.6 Algorithm description

In this section we provide an overview of the GA. For each generation, the algorithm divides the population into a set of elite solutions and non-elite solutions. Based on this, the GA creates one offspring by performing crossover between an elite solution and a randomly chosen individual from the population. The next generation is made up of all elite individuals, all the newly created offspring, any non-dominated solution not included in the elite set and a number of new chromosomes generated by mutation. The key steps of the GA are detailed in the pseudo-code of Algorithm 2:

Algorithm 2 Genetic algorithm
1: Generate initial population P
2: for it = 1, . . . , Imax do
3:     Create an empty population Pnext (next generation)
4:     Pe ⊂ P: the elite parents, the best ⌊ρe · Np⌋ chromosomes of P
5:     Add Pe to Pnext
6:     for each elite and each non-dominated chromosome Ce do
7:         Cm randomly selected from P
8:         Co = crossover(δe, Ce, Cm)
9:         Add the offspring Co to Pnext
10:    end for
11:    Check P for chromosomes non-dominated by others in Pnext and add them to Pnext
12:    Add random feasible solutions to Pnext until it has Np elements (mutation)
13:    P = Pnext
14:    if stop criteria fulfilled then
15:        break
16:    end if
17: end for

Stopping criteria - The algorithm stops if the best chromosome of the population has not improved its fitness after a certain number of iterations, or if a maximum number of iterations (Imax) or a maximum elapsed time is reached.

4.2.6.7 Parameters

The GA described above requires a number of parameters that need to be calibrated depending on the instance we aim to solve. The parameters below have been identified through experimentation. Our results show that they are reasonably robust across the instances.

• Population size, Np - A large population ensures that more solutions are explored and therefore the results can be better. However, too large a population can slow down the algorithm and requires a large amount of memory. For our experiments, we set a population minimum of 100 chromosomes, which is adjusted for instances with more than 10 item types to Np = 10B.

• Proportion of elite individuals, ρe ∈ [0, 1] - This represents the proportion of chromosomes of the population that are considered elite and always pass on to the next generation. Furthermore, elite individuals are more likely to pass their features on to the next generation of offspring. In our experiments we have used the value ρe = 0.2, in order to avoid keeping too many chromosomes from one generation to the next, which would reduce the ability of the search to diversify. Note that non-dominated solutions are always passed to the next generation, retaining solutions on the Pareto frontier.

• Proportion of crossover individuals, ρc ∈ [0, 1] - This is the proportion of individuals of the next generation that are created by crossover. Too large a value of ρc will cause the next generation to be very similar to the previous one, since we are just combining existing solutions, but a very low value will lose the potential of the crossover operator to generate better solutions from two existing chromosomes. In our experiments we have found that ρc = 0.5 provides a reasonable compromise.

• Proportion of mutation individuals, ρm ∈ [0, 1] - This parameter is determined by the previous two parameters, given that ρe + ρc + ρm = 1. The chromosomes generated by mutation might in principle lead to poor quality solutions, but they help to explore different areas of the solution space which can potentially hold better solutions.

• Probability of elite inheritance, δe ∈ [0, 1] - In the crossover operator, one offspring is created combining features of an elite parent and another parent (which may or may not be elite); δe is the probability of inheriting features from the fitter one. This is typically higher than 0.5, as we want to favour elite features, but not too high, as we want to maintain diversity in the population. For our experiments, we have set δe = 0.7.

• Convergence parameter, C - The convergence of the algorithm is measured as the number of iterations for which the best fitness value of the population is unchanged. If this number exceeds the parameter C, the algorithm stops.

• Maximum number of iterations, Imax - The maximum number of generations the algorithm is allowed to perform.

• Maximum time allowed, Tmax - We set a time limit on the algorithm; it might therefore stop before it converges or before the maximum number of iterations is performed.

4.2.7 Heterogeneous bins

For the MILP model and the constructive algorithm, we assume the existence of a vector u = (ū1, u2, . . . , u|B|) providing a feasible bin configuration, where uj is a fixed value for j > 1 and the objective is to minimise the number of bins of type one. In this section, we propose an algorithm that iteratively modifies the u vector in order to find all the non-dominated bin configurations on the Pareto frontier. For each vector, we minimise the value of u1 using the MILP or the CA.

We sort the helicopter types in ascending order of capacity using a lexicographic order (first weight, then seats, then hooks), so the first helicopter type is the smallest. The packing algorithm is called with the vector u = (ū1, u2, . . . , u|B|) as an input parameter. It produces an assignment of items to helicopters and, in particular, a new value u1*, which is optimal if the exact algorithm solves the problem successfully or locally optimal if using the CA. Whether the solution value u1* is zero or not determines how the values of uj, j > 1, are modified for the next iteration.

The algorithm starts with a non-dominated solution with configuration (u1*, 0, . . . , 0) and iteratively adds more to the frontier. The number of bins is increased in such a way that the smallest amount is added to the total capacity of the fleet and the bin configuration is not dominated by a solution already on the Pareto frontier. The following example with three bin types illustrates the algorithm:

– Fixed bins (0, 0): the algorithm finds u1* = 3; store non-dominated solution (3, 0, 0)

– Fixed bins (1, 0): the algorithm finds u1* = 3; discard dominated solution (3, 1, 0)

– Fixed bins (2, 0): the algorithm finds u1* = 0; store non-dominated solution (0, 2, 0)

– Fixed bins (0, 1): the algorithm finds u1* = 1; store non-dominated solution (1, 0, 1)

– Fixed bins (1, 1): the algorithm finds u1* = 0; store non-dominated solution (0, 1, 1)

– Fixed bins (0, 2): the algorithm finds u1* = 0; store non-dominated solution (0, 0, 2)

– All bin types but the last are 0: finished.

The process is described in more detail by the pseudo-code of Algorithm 3.

Algorithm 3 Finding the Pareto Front

1: Let u2 = · · · = u|B| = 0
2: while True do
3:     ū1 = constructive(u2, . . . , u|B|)
4:     u1* = solve_packing((ū1, u2, . . . , u|B|))
5:     if (u1*, u2, . . . , u|B|) is non-dominated by previous configurations then
6:         Store the solution
7:     end if
8:     if u1* = 0 then
9:         k = min{j : j ∈ B \ {1}, uj ≠ 0}
10:        if k = |B| then
11:            finished
12:        else
13:            uk = 0
14:            uk+1 = uk+1 + 1
15:        end if
16:    else
17:        u2 = u2 + 1
18:    end if
19: end while

Ordering the bin types by capacity speeds up the algorithm, as it is easier to enumerate larger bins: fewer of them are required to place the same set of items. Note that even though the genetic algorithm does not use this procedure directly to approximate the Pareto frontier, it is used by the constructive approach during the creation of the initial population. This ensures that all points of the Pareto frontier are investigated, even the extreme ones.

4.3 Computational experiments

This section is devoted to investigating the performance of the aforementioned methods. The algorithms have been implemented in C++ and were run on one node of the IRIDIS HPC facility with 16 cores at 2.6 GHz and 64 GB of memory. To ensure a fair comparison, the constructive algorithm is executed for the same time as the genetic algorithm on each instance, with random permutations of the input items; in the tables we denote it by CA. The MILP from Section 4.2.2 is solved with IBM ILOG CPLEX 12.5.0.0.

In order to test the performance of the algorithms, we use two different types of datasets. The first is a collection of instances whose item characteristics are generated randomly, following the patterns we identify in real data. Since real data is very sensitive in the military sector, we did not have access to Dstl data. Instead, we generated a realistic instance to test the suitability of the algorithms for industrial problems. We based this instance on collections of items typically used in military missions, as available from the Federation of American Scientists website (https://fas.org/man/dod-101/army/unit/toe/). Since the volume and weight of items is not available on the website, we performed a manual online search for each item, which was later assessed for correctness by Dstl experts.

4.3.1 Randomly generated instances

To generate the random instances we identified a number of features that are common in real data of this kind. We identified six basic types of objects, listed in Table 4.1. Negative values in the table indicate that a placement is not possible (e.g. passengers cannot go outside the helicopter).

Table 4.1: Description of the items from the randomly generated instances

Category                     Weight range (kg)   Seats range   Hooks range
Small cargo                  [10, 100]           0             0
Medium cargo                 [100, 150]          [1, 5]        0
Heavy cargo                  [500, 1000]         [−1, 10]      1
Light vehicle / equipment    [100, 500]          0             1
Heavy vehicle / equipment    [500, 1000]         0             1
Passenger                    [10, 100]           1             −1

Based on these basic item descriptions, we have created two families of instances, homogeneous and heterogeneous. The homogeneous instances contain only six item types (a random realisation of each of the basic types), but each item is repeated a random number of times, depending on the size of the instance:

• Small homogeneous instance: between 5 and 50 items of each type.

• Medium homogeneous instance: between 51 and 100 items of each type.

• Large homogeneous instance: between 101 and 200 items of each type.

For the heterogeneous instances, the items are repeated randomly up to only five times, but they are all different. The total number of item types depends on the size of the instance:

• Small heterogeneous instance: between 5 and 50 item types.

• Medium heterogeneous instance: between 51 and 100 item types.

• Large heterogeneous instance: between 101 and 200 item types.

For each family, we have generated 50 instances of each size, giving 300 random instances in total. A sketch of the generator for the homogeneous family is given below.
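As an illustration, the generator can be sketched directly from Table 4.1. Only the ranges and repetition counts come from the text; the structure and all names below are assumptions.

#include <random>
#include <vector>

// One generated item type: a realisation of a row of Table 4.1 plus the
// number of copies in the instance. Names are illustrative.
struct ItemType { double weight; int seats; int hooks; int copies; };

std::vector<ItemType> smallHomogeneousInstance(std::mt19937& rng) {
    auto real = [&rng](double lo, double hi) {
        return std::uniform_real_distribution<double>(lo, hi)(rng);
    };
    auto integer = [&rng](int lo, int hi) {
        return std::uniform_int_distribution<int>(lo, hi)(rng);
    };
    // Negative values mean a placement is not possible, as in Table 4.1.
    std::vector<ItemType> items = {
        { real(10, 100),   0,               0, 0 },  // small cargo
        { real(100, 150),  integer(1, 5),   0, 0 },  // medium cargo
        { real(500, 1000), integer(-1, 10), 1, 0 },  // heavy cargo
        { real(100, 500),  0,               1, 0 },  // light vehicle / equipment
        { real(500, 1000), 0,               1, 0 },  // heavy vehicle / equipment
        { real(10, 100),   1,              -1, 0 },  // passenger
    };
    for (ItemType& t : items)
        t.copies = integer(5, 50);   // small instance: 5-50 copies per type
    return items;
}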

The bins used are three helicopters currently in use by many air forces around the world: the Aérospatiale Puma, the Boeing CH-47 Chinook and the AgustaWestland AW101 (Merlin). Table 4.2 describes their characteristics.

Table 4.2: Description of the bins used in the randomly generated instances

                         Chinook   Puma   Merlin
Weight capacity (kg)     12065     3634   5630
Seat capacity            35        18     24
Hook capacity            3         1      1
Max. fuel weight (kg)    3065      1816   2814
Range (full load, km)    520       842    1038
Range (no load, km)      650       900    1320

In Table 4.3 we compare the genetic algorithm (GA) and the multistart constructive (CA), each allowed to run for 10 minutes. In the table we report the percentage of instances in which each algorithm obtained the larger number of non-dominated points, as well as how often they tied.

Table 4.3: Multistart and genetic algorithm results comparison for random instances

                          CA (10 min)   GA (10 min)   Ties
Heterogeneous   Small     2.0%          4.0%          94.0%
                Medium    12.0%         50.0%         38.0%
                Large     16.0%         80.0%         4.0%
Homogeneous     Small     4.0%          44.0%         52.0%
                Medium    6.0%          78.0%         16.0%
                Large     8.7%          91.3%         0.0%
TOTAL                     8.11%         57.43%        34.46%

While for small instances ties are frequent and we can assume the algorithms probably achieve optimal solutions, as instances get larger the gap between the algorithms increases, and for large instances the genetic algorithm performs much better, in both the heterogeneous and the homogeneous case.

For most cases, we have found that solving the ILP model in a reasonable time was not possible. Nevertheless, to compare the solution quality in terms of bins and, especially, distance, we report in Table 4.4 the full results of solving one of the small homogeneous random instances (with 6 item types and 91 items in total) with the three methods. We adjusted the aircraft data so they can travel 500 km; what is reported in the table is the extra distance they could fly. Both the genetic algorithm and the constructive algorithm were allowed 1 minute, whereas the MILP model took 1539 s in total. We set the absolute precision of the objective function to 10^-7, and our initial experiments showed that, even for this small instance, solving some points of the Pareto front to optimality was impractical. For these points we therefore report the best solution found after 5 minutes and give the gap in brackets. These small gaps, which do not imply a change of units in the objective function, only slightly affect the distance.

Table 4.4: Comparison of the MILP model, the genetic algorithm and the multistart constructive for one homogeneous instance

Frontier point         MILP (1539 s)              GA (60 s)          CA (60 s)
# Chinook   # Puma     # Merlin   Dist.           # Merlin   Dist.   # Merlin   Dist.
0           0          3          13.06           3          13.00   3          12.97
0           1          2          4.03* (0.13%)   2          3.94    2          1.36
0           3          1          5.21* (0.09%)   1          5.18    1          0.94
0           5          0          19.06* (0.79%)  0          7.55    0          7.55
1           0          2          3.74            2          3.68    2          1.02
1           2          1          4.72            1          4.56    1          4.34
1           4          0          17.17* (0.02%)  0          12.53   0          7.55
2           1          1          4.18            1          3.90    1          3.36
2           3          0          14.72           0          11.70   0          0.94
3           0          1          3.63            1          3.37    1          2.83
3           2          0          11.51* (0.01%)  0          7.37    0          1.47
4           1          0          7.36            0          3.68    0          2.58
5           0          0          1.47            0          0.74    0          0.37

* Best solution found after 5 minutes; optimality gap in brackets.

For this instance, both the genetic algorithm and the multistart constructive can find the optimal number of aircraft in less than one minute; however, they struggle to find the optimal distance. In general, the genetic algorithm finds a better distance than the constructive. The reason is that, although both of them use the same load balancing heuristic on their solutions, the genetic algorithm, being population-based, has more points at which the distance can be assessed.

4.3.2 Realistic instance

In this part we solve a realistic instance, which corresponds to the lifting of the elements of a rifle company by two types of aircraft. The bins are two of the aircraft most used by modern armies, the Aérospatiale Puma and the Boeing CH-47 Chinook. These helicopters are a good example of the need to find all feasible solutions, since they provide very different capabilities. We have assumed a minimum flight range of 500 km for the two aircraft, and the distance reported in the tables corresponds to the extra distance over which the fleet would be able to operate.

Table 4.5: Comparison of the algorithms' performance for a realistic mission simulating a rifle company lifted by different configurations of Aérospatiale Pumas and Boeing CH-47 Chinooks with a minimum flight range of 500 km. The genetic algorithm was executed with a population of 100 individuals.

Frontier point   Lower bound   GA (300 s)            CA (300 s)
# Chinooks       # Pumas       # Pumas   Distance    # Pumas   Distance
0                41            41        7.64        41        9.81
1                38            38        0.04        39        9.81
2                36            36        0.01        37        0.12
3                34            35        0.04        35        0.12
4                32            33        0.04        33        0.12
5                30            30        0.12        31        0.12
6                28            28        0.12        29        0.12
7                26            26        0.12        27        0.12
8                24            24        0.04        25        0.12
9                22            22        0.12        23        0.12
10               20            20        0.10        22        0.12
11               18            18        0.12        20        0.12
12               16            16        0.12        18        0.12
13               13            13        0.01        16        0.12
14               11            11        0.01        14        0.12
15               9             9         0.01        11        0.12
16               7             7         0.01        9         0.12
17               5             5         0.01        7         0.12
18               3             3         0.12        5         0.12
19               1             1         0.01        4         0.12
20               0             0         0.12        2         0.12
21               0             –         –           0         0.12

This instance contains around 800 items of 60 different types, and we found that CPLEX was unable to find any feasible solution within the one-hour time limit we set, even for a single point of the Pareto frontier. Nevertheless, the genetic algorithm is able to find the optimal number of bins (the lower bound) for most of the points, outperforming the constructive.

4.4 Conclusions

In this chapter, we describe a new multi-objective, one-dimensional, multiple bin size bin packing problem with multiple constraints. This is a real industry problem proposed by Dstl, which aims to aid the strategic management of helicopter fleets. The problem is modelled as a mixed integer linear program. A simple iterative procedure allows us to find all the Pareto-efficient combinations of bins in the optimal solutions. The well-known first fit decreasing algorithm is adapted to tackle the packing aspect of the problem. The secondary objective, maximising the flight range of the fleet, is addressed by means of the distance balance heuristic, a version of the well-known load balance algorithm. A genetic algorithm is developed to improve the performance of the heuristic approach while being able to handle large instances. The results suggest that the genetic algorithm offers a good compromise between the accuracy of the MILP model and the speed of the constructive algorithm. Furthermore, it is able to approximate the whole Pareto frontier in a single run, providing a major saving of computational effort over the MILP model.

Chapter 5

An archaeological irregular packing problem: Packing to decipher an ancient Aztec codex

Abstract The Codex Vergara is a document dating from the 16th century that was written by the Acolhúa people, natives of the Valley of Mexico. It contains a detailed census of the population, accompanied by a detailed register of the agricultural land they owned. Previous studies of this document were able to identify the area where these terrains were located and measure it with the help of satellite imagery, as well as decipher the arithmetic used to describe the measurements and areas of the terrains. In this work, we continue these studies and propose to use cutting and packing methodology to find a range of possible reconstructions of the actual layout in which the terrains were located. To deal with the uncertainties arising during the deciphering, our algorithm has the flexibility to select shapes from a set of different possibilities. Furthermore, we allow overlap in the final layouts to acknowledge the inaccuracies in the data provided. In order to find these layouts, we provide a novel constructive algorithm, a local search procedure and a genetic algorithm that allow the pieces to rotate freely. With these tools, we are able to provide a range of high quality layouts for the original problem. To validate our method, we also successfully solve two standard problems from the 2D irregular packing literature whose solutions are known to have full utilisation.

Keywords: 2D Irregular packing, Metaheuristics, Archaeology

5.1 Introduction

In this chapter we consider a cutting and packing problem that arises during the deciphering of an ancient Aztec codex. The codex, dating from the 16th century, contains information about the households and agricultural terrains of a Mesoamerican society located in present-day Mexico. While its contents and arithmetic are now well understood, there are still open questions regarding the specific location of the terrains within the geographical area that contained them. In other words, we have information about the shape of the terrains and about the shape of the place where they were, but not about what the actual map of the area looked like. We aim to use cutting and packing methodology to investigate what these positions might have been.

In the remainder of this section we provide a brief account of the historical context and the arithmetic used in the codex, as well as a formal description of the problem and related work in the literature. In Section 5.2 we present a metaheuristic approach to find solutions to the problem. We discuss the implementation details and computational results in Section 5.3 and give our concluding remarks in Section 5.4.

5.1.1 Historical context

In the mid 16th century the Valley of Mexico was under the rule of the Spanish, who demanded that the locals pay tribute to the local lords, the encomenderos. A few kilometres north-east of Mexico City, near the current city of Texcoco, the local people, the Acolhúa, were especially pressured by the encomendero Gonzalo de Salazar. As can be understood from the records of that time, including the pictorial Códice de Tepetlaoztoc (or Codex Kingsborough)¹, the Acolhúa had started litigation against him to relieve the tax burden. As part of this litigation, they produced documents describing the census of the population and a registry of their land (Williams & Hicks, 2011). Two of these documents have survived to our days, the Codex of Vergara and the Códice de Santa María Asunción. Located at the Bibliothèque Nationale de Paris and the Biblioteca Nacional de México respectively, they are reasonably well preserved and illustrate the arithmetic knowledge of the Acolhúa. They have been analysed in detail (for example, in the commented facsimile edition of the Codex of Vergara, published by Williams & Hicks (2011)), and their arithmetic is well understood. In this work, we are concerned specifically with the Codex of Vergara. It is composed of three complementary parts. In the first part, tlacatlacuiloli, we find a description of the census of the families in each household. In the second part, milcocoli, there are drawings describing the terrains of land owned by each household, including side measurements. In the third part, tlahuelmantli, the same

terrains of land are depicted, but this time including a calculation of the area of each terrain (Harvey & Williams, 1980).

¹http://www.britishmuseum.org/research/collection_online/collection_object_details.aspx?assetId=260929001&objectId=662793&partId=1

5.1.1.1 Acolhúa arithmetic and geometry

In the codices, the shapes have been drawn featuring mostly right angles and ignoring the proportions of the sides. This poses a problem for reconstructing the actual shapes of the terrains, as there is no angle information other than what can be interpreted from the drawings. We illustrate one example of this issue in Figure 5.1.

Figure 5.1: Example of one terrain from the codex. The original drawing (left) and the same drawing with sides to scale (right). The measurements of the sides are in Tlalcuahuitl, equivalent to 2.5 m. Sticks represent 1 unit, dots 20 units and arrows 0.5 units. Groups of 5 sticks are connected at the top (Williams & Jorge y Jorge, 2008).

Luckily, the presence of the area calculations in the tlahuelmantli section makes it possible to reconstruct the shapes, sometimes exactly. Using the information on the sides and the reported area in the codex, Williams & Jorge y Jorge (2008) analysed the potential area calculation algorithms that might have been used by the Acolhúa. In a later work, the deciphering and the possible errors made in the original calculations were revisited. The authors were able to develop tools to construct one or more shapes for each terrain, keeping the original length of the sides and preserving the area of the shape, as well as its concavities and convexities (Jorge y Jorge et al., 2011). For example, for the terrain in Figure 5.1 they found two possible representations, which we show in Figure 5.2.

Note that, even though these shapes might look like mirror images of each other, they are not, as that would alter the side order from the original representation in the codex. For this work, we were kindly provided with the shapes obtained with this method by the authors of Jorge y Jorge et al. (2011), and these are the ones we use in our algorithm.

Figure 5.2: The two possible shapes of the terrain from Figure 5.1 when the area information is taken into account.

5.1.1.2 Geographical location of the terrains

The codex contains loose descriptions of where the terrains and households might have been located. One of the locations, Topotitla, has been identified as corresponding to the modern territory of El Topote (Williams & Harvey, 1988). Its current borders have been measured using Google Earth in Jorge y Jorge et al. (2011). In that work, the authors found that the total area of the agricultural terrains recorded in the Codex of Vergara corresponding to the location of Topotitla is 9.5% greater than the area measured for modern El Topote. Possible causes range from simple measurement errors (both in the measurements from Jorge y Jorge et al. (2011) and in the ones from the codex) to errors in the calculation of areas due to inaccurate algorithms (some of which were also identified in the same work) or to possible changes in the geographical landscape. In Figure 5.3 we show an aerial image of the site.

El Topote has a triangular shape, bounded by a path, a river and an ancient wall. Both the path and the river might also have suffered small changes over the years, reshaping the location and introducing further uncertainty.

5.1.2 Problem description

The motivation of this work is to find a number of potential layouts depicting the specific position of the agricultural terrains shown in the codex within the location of Topotitla. This entails determining which of the deciphered shapes might have been the original terrain and determining their possible rotation and geographical position. However, if we formulate it as a puzzle-solving problem, the problem is infeasible because of the area mismatch between the terrains assigned to Topotitla and the size of El Topote. To overcome this, we have chosen not to modify the deciphered shapes and to locate them within the designated area allowing some overlap. This still allows us to understand how the terrains were located and their relative positions, despite some inaccuracies regarding the original shape. We argue that these inaccuracies are not an obstacle to understanding the big picture of how the area was used and that attempting to modify the shapes themselves would not bring a major gain in this regard.

Figure 5.3: Aerial image of El Topote. Map data: INEGI, Google, DigitalGlobe 2018.

Before explaining the details of our methodology, let us introduce some formal notation. The data for our problem consist of a geographical location (the container, as determined by satellite imagery) and a set of n terrains (as determined from the codex), which are represented by m distinct shapes, m ≥ n (generated during the deciphering). The shapes, $s_1, \ldots, s_m$, are represented by simple polygons. The set of shape indices, $S = \{1, \ldots, m\}$, is partitioned into n subsets, one for each terrain, which we denote by $P_1, \ldots, P_n$. These sets contain the indices of the various shapes that represent one terrain. For example, if $P_1 = \{1, 2\}$, terrain 1 can be represented by either the shape $s_1$ or the shape $s_2$.

To represent a feasible layout, we need to place one shape from each subset somewhere on the plane. Placing a shape $s_i$ means giving it a rotation and a location, that is, a rotation angle $\theta_i$ and a location point $(x_i, y_i)$. To do so, the original polygon $s_i$ is first rotated by $\theta_i$ and then displaced such that its reference point coincides with $(x_i, y_i)$. In our implementation, we set the reference point as one of the vertices of the piece, but this choice is irrelevant as long as it is fixed throughout the algorithm. We use the notation $s_i(x_i, y_i, \theta_i)$ to represent the polygon resulting from the placement of $s_i$ with rotation $\theta_i$ and location $(x_i, y_i)$ or, alternatively, we say that the polygon $s_i$ is placed at $(x_i, y_i, \theta_i)$. A minimal sketch of this placement operation follows.
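In the sketch below, the Point structure and the degree convention are illustrative assumptions; only the rotate-then-translate rule comes from the text.

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Place polygon s at (x, y, theta): rotate all vertices by theta degrees
// about the origin, then translate so that the reference point (here
// vertex 0, an arbitrary but fixed choice) lands exactly on (x, y).
std::vector<Point> place(const std::vector<Point>& s,
                         double x, double y, double thetaDeg) {
    const double pi = std::acos(-1.0);
    double t = thetaDeg * pi / 180.0;
    double c = std::cos(t), sn = std::sin(t);
    Point ref{ s[0].x * c - s[0].y * sn,    // rotated reference point
               s[0].x * sn + s[0].y * c };
    std::vector<Point> placed;
    placed.reserve(s.size());
    for (const Point& p : s)
        placed.push_back({ p.x * c - p.y * sn + (x - ref.x),
                           p.x * sn + p.y * c + (y - ref.y) });
    return placed;
}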

Finally, if we introduce a set of variables $q_i$ that are 1 if shape i is used and 0 otherwise, we can describe a feasible layout by the values of the variables $q_i$, $x_i$, $y_i$ and $\theta_i$, $i = 1, \ldots, m$. The layout is feasible if the following conditions hold:

\[ \sum_{i \in P_j} q_i = 1 \qquad \forall j \in \{1, \ldots, n\} \qquad (5.1) \]
\[ q_i \in \{0, 1\} \qquad \forall i \in S \qquad (5.2) \]
\[ x_i, y_i \in \mathbb{R} \qquad \forall i \in S \qquad (5.3) \]
\[ \theta_i \in [0, 360) \qquad \forall i \in S \qquad (5.4) \]

Note that these equations do not constitute an integer programming model to solve the problem. Furthermore, they do not include any restrictions on overlap or containment, as feasible layouts can contain overlap or pieces that are not completely within the container. Nevertheless, 'high quality' solutions must cover most of the container and have little overlap between pieces. Both the covering of the container and the overlap between pieces can be quantified for a given placement. They are combined in the following formula:

\[ f(q_1, x_1, y_1, \theta_1, \ldots, q_m, x_m, y_m, \theta_m) = \sum_{i \in S} q_i\, \mathrm{area}\bigl(s_i(x_i, y_i, \theta_i) \cap C\bigr) - \sum_{i \in S} \sum_{\substack{j \in S \\ j > i}} q_i q_j\, \mathrm{area}\bigl(s_i(x_i, y_i, \theta_i) \cap s_j(x_j, y_j, \theta_j)\bigr) \qquad (5.5) \]

The first term of the function in equation (5.5) quantifies the overlap of the pieces with the container (hence, the containment). The second term accounts for the overlap between pairs of pieces, penalising the overall overlap in the solution, whether it happens inside or outside the container.

Note that we are not necessarily interested in the global maximum of (5.5), just in a range of layouts with a high value for it (note that we can determine what a high value is, since this function is bounded by the area of the container). For this reason, in Section 5.2 we develop a metaheuristic algorithm that is able to find different solutions in different runs, each with a high value of (5.5), subject to the conditions (5.2)-(5.4).

5.1.3 Related work

To the best of our knowledge, the problem we are tackling is new to the literature, as it relaxes the two main constraints that define packing problems: non-overlapping and containment. In fact, this problem type is not captured by Wäscher's typology (Wäscher et al., 2007), since the objective does not fall into either category, input minimisation or output maximisation. Furthermore, most publications in irregular 2D packing do not consider free rotation of the pieces, which is usually restricted to a single rotation or a finite set. We refer to Section 2.3.2 for a review of the state-of-the-art 2D irregular packing literature. The publications most relevant to this work are those that allow unrestricted rotation or, at least, a large number of rotations. The algorithm HAPE from Liu & Ye (2011) falls into the latter category. It uses the principle of minimising potential energy to construct solutions, and takes advantage of a discretised space to be able to test many rotations at each point.

One research avenue for dealing with free rotation is phi-objects. This approach supports curved objects and formulates the packing as a non-linear program. See for example Stoyan et al. (2016c) for a recent example of packing irregular shapes in circular and rectangular containers.

If we restrict ourselves to polygonal representations, Nielsen (2007) developed a local search algorithm based on translations of the pieces over an initial layout. This work can be seen as an extension of Egeblad et al. (2007), where simple translations along the axes are used, but Nielsen (2007) allows the pieces to rotate and move in different directions. The rotations and directions are chosen based on the position of other pieces, aiming to keep their edges aligned.

Liao et al. (2016) propose a simulation-based algorithm, where pieces are moved and rotated simulating the forces they would suffer if they were enclosed by a rubber band. More recently, Martinez-Sykora et al. (2017) proposed a matheuristic algorithm for bin packing problems involving irregular polygons. The packing part is dealt with by a MIP-based constructive algorithm that places pieces one at a time. The rotations are chosen from a finite set generated for each piece. This set contains a number of 'promising' rotations that are calculated by analysing the current layout and finding edge matches.

Finally, Abeysooriya et al. (2018) developed an algorithm for the two-dimensional bin packing problem with irregular pieces and free orientation. Their approach consists of two parts, a constructive algorithm and a local search. The constructive algorithm places pieces considering first a finite set of rotations, and then examines some edges of a partial solution to find the best orientation. The local search is an adapted version of the Jostle procedure (originally from Dowsland et al. (1998)), which retrieves the positions of the pieces in the resulting layout and packs them in reverse order, aiming to further improve the solution.

5.2 Solution methods

In this section we present the methods we have developed to find layouts as described in Section 5.1.2. We first describe a constructive algorithm that can generate feasible layouts based only on a piece sequence and a parameter α, which influences its objective function. We also describe a local search that can improve these solutions by applying rotations and movements to the pieces that might not be reachable using the constructive algorithm alone. Finally, we describe a genetic algorithm that combines these two elements.

5.2.1 Constructive algorithm

We developed a constructive algorithm that places shapes one at a time, based on a piece sequence. We denote this sequence by a vector o with m elements. This vector can be any permutation of (1, . . . , m) and its choice has a great impact on the resulting layout. However, for the purposes of the constructive algorithm, we are not yet concerned about how to choose this sequence; this matter will be discussed in Section 5.2.3. Recall that we have m shapes, but only need to place n of them because of the alternative constructions we have for each terrain. We perform the selection of shapes using the sequence o as well, simply by placing only the first shape that appears in the sequence for each terrain and ignoring the subsequent ones.

Let us now consider the actual placement of a shape $s_i$, $i \in o$. By placement we mean finding a rotation and a position for the piece within the incumbent layout, i.e., finding suitable values for $x_i$, $y_i$ and $\theta_i$ such that at the end of the process we have found a solution that yields a good value for the objective described in equation (5.5). To find a promising placement, the algorithm tests a finite number of possible positions and rotations within the incumbent layout, and chooses the most suitable one. Therefore, there are two matters to consider: finding the finite set of positions and rotations where the shape will be tested, and determining how to evaluate the suitability of each placement. Note that we cannot use the objective function from (5.5) for the latter, as not all the shapes have been assigned a position during the intermediate steps of the algorithm. We investigate these two matters in the following paragraphs.

Finding potential placements. Due to the nature of the problem, if we had no uncertainty about the shapes of the terrains or the shape of the geographical location, the reconstructed layout should look like, or be very close to, a perfect-fit, full-utilisation solution. Unfortunately, this is not the case. However, we can still use some of the properties of this kind of layout to guide our search during the construction of a solution. In a layout with a perfect fit and 100% utilisation, every vertex of a piece must coincide with at least one vertex of another piece or of the container. Furthermore, the pieces must have a rotation such that all their edges are aligned with at least one edge of another piece or the container. We use these two rules to generate a set of potential placements to test for each piece.

Let us consider the placement of the first shape, $s_1$, in the empty container, C. For each vertex v of the shape (and, likewise, of the container), we can identify two adjacent edges, which we denote the preceding and the succeeding edges. The preceding edge is the only edge whose ending vertex is v and the succeeding edge is the only edge whose starting vertex is v. This is illustrated in Figure 5.4 (a).

For each pair of vertices, v from $s_1$ and w from C, we consider two test placements. In the first one, $s_1$ is rotated such that the preceding edge of v matches, in angle and orientation, the preceding edge of w; in the second one, $s_1$ is rotated such that the succeeding edge of v matches the succeeding edge of w. The reference point is chosen in both placements such that v = w after the piece is rotated. Figure 5.4 (b) and (c) illustrate the two placements we consider for a vertex pair; a sketch of the angle computation is given after the figure.

Figure 5.4: Example of the two test placements for a pair of vertices v from a shape and w from the container
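The rotation of each test placement can be computed from the directions of the two matched edges; the sketch below is illustrative and its names are assumptions.

#include <cmath>

struct Vec { double x, y; };

// Rotation, in degrees, that aligns a piece edge with a container edge
// in angle and orientation; pieceEdge and containerEdge are the
// direction vectors of the matched (preceding or succeeding) edges.
double alignmentAngle(Vec pieceEdge, Vec containerEdge) {
    const double pi = std::acos(-1.0);
    double rad = std::atan2(containerEdge.y, containerEdge.x)
               - std::atan2(pieceEdge.y, pieceEdge.x);
    double deg = rad * 180.0 / pi;
    while (deg < 0.0)    deg += 360.0;   // normalise into [0, 360)
    while (deg >= 360.0) deg -= 360.0;
    return deg;
}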

Each test placement is determined by a position, $(\tilde{x}, \tilde{y})$, and an angle, $\tilde{\theta}$. The algorithm ranks all the possible placements according to a fitness function, $g(s_1, C, \tilde{x}, \tilde{y}, \tilde{\theta})$. For the moment, let us say that this function evaluates the placement based on the container area and perimeter resulting after the piece is placed; we discuss how this works in more detail in Section 5.2.1.1. The best position according to g is used to place the shape, i.e. we set $q_1$ to 1 and set $x_1$, $y_1$ and $\theta_1$ to the values determined by the chosen placement.

Of course, the placement of a piece introduces new points and edges where the subsequent pieces have to be tested. To take this into consideration, after the placement of a shape $s_i$ is fixed, we substitute the container polygon C by C′, defined as the geometric difference of the current container and the placed piece, $C' = \mathrm{cl}(C \setminus s_i(x_i, y_i, \theta_i))$. Note that we need to consider the closure of the difference, so as to avoid working with an open set, which would not be a polygon.

In Figure 5.5 we illustrate the placement of the first piece in the original container C and of the second piece in the updated container C′.

Figure 5.5: The status of the layout after placing the first piece (left) and after placing the second piece on a vertex that is new in the updated container (right)

After the new container is determined, the algorithm continues placing shapes and updating the container until one shape for each terrain has been placed; at this point the layout is complete. We describe the full process in Algorithm 4.

An important consideration is that the result of the geometric difference (line 16 of Algorithm 4) might lead to a disjoint polygon (or, rather, a set of polygons) for C′. This would have happened, for example, in Figure 5.5 if shape $s_1$ were slightly longer, effectively breaking the container in two parts. For simplicity, we use C to represent either a polygon (as in the first iteration) or a set of polygons (as might happen in later steps of the algorithm). This difference, however, does not change how the algorithm works. The only difference is that the polygons of the set need to be considered one at a time when calculating the test positions. The area (perimeter) of the container is calculated as the sum of the areas (perimeters) of the polygons in the set.

Algorithm 4 Constructive algorithm
1: o = (1, . . . , m)                                 ▷ or any other permutation of this set
2: q1 = q2 = · · · = qm = 0
3: xi, yi, θi = 0, i ∈ {1, . . . , m}
4: for all i ∈ o do
5:     Let P ∈ {P1, . . . , Pn} be the set such that i ∈ P
6:     if Σ_{j∈P} qj = 0 then                         ▷ The terrain has not been placed yet
7:         bestObj = −∞
8:         for all test placements (x̃, ỹ, θ̃) between si and C do
9:             currentObj = g(si, C, x̃, ỹ, θ̃)
10:            if currentObj > bestObj then           ▷ Place si
11:                bestObj = currentObj
12:                qi = 1
13:                (xi, yi, θi) = (x̃, ỹ, θ̃)
14:            end if
15:        end for
16:        C = cl(C \ si(xi, yi, θi))                 ▷ Update the shape of the container
17:    end if
18: end for

5.2.1.1 Alternative objective function

In equation (5.5) we described a function that, if maximised, minimises the overlap area between pieces and maximises the covering of the container. This could be our objective function for the algorithm; however, to evaluate it properly we would need to know the final positions of all the pieces, which is not the case while we are constructing the solution. If we concern ourselves only with placing the current piece minimising its overlap with previously placed pieces and with the outside of the container, we run the risk of placing pieces in an excessively greedy manner, not leaving any space for future pieces. To overcome this issue, we propose an alternative objective function that determines the quality of a piece position by examining the area and the perimeter of the updated container that would be generated if this position were accepted. Let C be the container at a certain point of the algorithm and s the shape we are considering placing at $(\tilde{x}, \tilde{y}, \tilde{\theta})$; we define the alternative objective function as:

\[ g(s, C, \tilde{x}, \tilde{y}, \tilde{\theta}) = -(1 - \alpha)\, \mathrm{perimeter}(C') - \alpha\, \mathrm{area}(C') \qquad (5.6) \]

where $C' = \mathrm{cl}(C \setminus s(\tilde{x}, \tilde{y}, \tilde{\theta}))$. In other words, we calculate the shape C′ that would be the next updated container if we chose this placement, and look at its area and perimeter with the aim of minimising both. We weight these two criteria with a parameter α ∈ [0, 1] to be able to balance the objective depending on the instance. In Figure 5.6 we illustrate different layouts generated by the algorithm, with pieces sorted in decreasing order, using the same shapes and different α values.

Figure 5.6: Layouts generated by the constructive algorithm for different α values. Top row, from left to right: α = 0.25, Ut = 42.4%; α = 0.50, Ut = 67.1%; α = 0.75, Ut = 85.7%. Bottom row, from left to right: α = 0.85, Ut = 96%; α = 0.95, Ut = 99.1%; α = 1, Ut = 97.3%.

Introducing an alternative measure to evaluate the placement of a piece, other than the overlap area, also helps break ties between positions that might yield equal amounts of overlap but have a different impact on the perimeter of the remaining free container.

The outcome of the constructive procedure depends strongly on the ordering of the shapes provided. This ordering determines not only which shapes are selected for each terrain (the ones that appear first on the list) but also the final layout. Equally, the value of α can have a dramatic impact on the layouts, even for the same piece orderings, as we have seen in Figure 5.6. For this reason, finding sequences and α values that maximise the original objective function (5.5) is a major part of the heuristic search and the focus of the genetic algorithm that we describe in Section 5.2.3.

5.2.2 Local search

The solutions generated by the constructive algorithm have vertex and edge matchings that, while providing a useful guideline for the search, might not be the best placements in a final solution. In order to move away from this property in the final layouts, we propose a simple first-improvement local search that aims to enhance the quality of the solution by moving and rotating the pieces within the layout. The algorithm randomly selects an overlapping piece and moves and rotates it by a small random amount, accepting the movement only if it improves the overall quality of the solution. Here, the quality of the solution is measured by the value of (5.5) for the current layout. This process is repeated until a certain number of non-improving iterations is reached. We describe the process with the pseudo-code of Algorithm 5.

Algorithm 5 Local search
 1: Let qi, xi, yi and θi, i = 1, . . . , m be a feasible solution
 2: Fitness = Σi∈S qi area(si(xi, yi, θi) ∩ C) − Σi∈S Σj∈S, j>i qi qj area(si(xi, yi, θi) ∩ sj(xj, yj, θj))
 3: iter = 0
 4: niter = 0
 5: while iter < Max iterations and niter < Max non-improving iterations do
 6:     iter = iter + 1
 7:     Randomly choose i from {1, . . . , m}
 8:     if qi ≠ 0 and si(xi, yi, θi) is in an overlapping position then
 9:         Randomly choose ∆x ∈ U[−a, a], ∆y ∈ U[−a, a] and ∆θ ∈ U[−b, b]
10:         (x̃, ỹ, θ̃) = (xi + ∆x, yi + ∆y, θi + ∆θ)
11:         pastContr = area(si(xi, yi, θi) ∩ C) − Σj∈S, j≠i qj area(si(xi, yi, θi) ∩ sj(xj, yj, θj))
12:         newContr = area(si(x̃, ỹ, θ̃) ∩ C) − Σj∈S, j≠i qj area(si(x̃, ỹ, θ̃) ∩ sj(xj, yj, θj))
13:         propFitness = Fitness − pastContr + newContr
14:         if propFitness > Fitness then       ▷ Accept movement
15:             (xi, yi, θi) = (x̃, ỹ, θ̃)
16:             Fitness = propFitness
17:             niter = 0
18:         else
19:             niter = niter + 1
20:         end if
21:     end if
22: end while

Note that the calculation of propFitness in line 13 yields the same measure of fitness as equation (5.5), but does so more efficiently: we reuse the previously calculated summation terms, which do not change because they do not involve the piece that has been moved. The pieces are rotated within an interval of [−b, b] degrees and moved within a square with sides of size a. In our experiments, we found that the best results were obtained with b = a = 5. The bounding box of the geographic area is 612 × 452 units, so the local search moves the pieces within a relatively small box. Together with a maximum of 500 iterations without improvement, this lets the pieces settle into positions with enhanced values of the objective function.
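To illustrate this delta evaluation, the sketch below recomputes only the contribution of the moved piece, mirroring lines 11-13 of Algorithm 5. It reuses the illustrative Boost.Geometry aliases of the earlier snippet and is an assumption about the implementation, not a copy of it.

    #include <vector>

    // Area of the intersection of two polygons via Boost.Geometry.
    double intersection_area(const Polygon& a, const Polygon& b)
    {
        MultiPolygon out;
        bg::intersection(a, b, out);
        return bg::area(out);
    }

    // Contribution of piece i at a given (already transformed) pose:
    // area(s_i ∩ C) minus its pairwise overlaps with the placed pieces.
    double contribution(const Polygon& cand, const Polygon& container,
                        const std::vector<Polygon>& placed, std::size_t i)
    {
        double c = intersection_area(cand, container);
        for (std::size_t j = 0; j < placed.size(); ++j)
            if (j != i) c -= intersection_area(cand, placed[j]);
        return c;
    }

    // propFitness = Fitness - pastContr + newContr (line 13):
    //   fitness - contribution(placed[i], container, placed, i)
    //           + contribution(candidate,  container, placed, i);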

It is important to note that the purpose of the local search is not to produce a broad search, where piece orientations, shapes and relative positions are reconsidered. Instead, it performs a narrow search: the shapes of the pieces do not change (they were selected earlier in the sequence) and they stay close to their original positions, but their positions may no longer match vertices, and their orientations may no longer match edges, of other pieces, overall improving the quality of the solution. In Figure 5.7 we show an example of a solution before and after applying the local search.

Figure 5.7: Layout generated by the constructive algorithm on the left (Ut = 97.3%), and the resulting layout after applying local search on the right (Ut = 98.5%). Local search stopped after 1000 non-improving iterations and used values a = b = 5.

5.2.3 Genetic algorithm

In order to take advantage of the constructive algorithm and the local search, we implement a biased random-key genetic algorithm (BRKGA). Genetic algorithms are a well-established metaheuristic that has been applied to many combinatorial optimisation problems, and they lend themselves particularly well to sequencing problems (Bean, 1994). We take advantage of this feature to find good packing sequences for our constructive algorithm, and we also incorporate our local search into the procedure.

In brief, a genetic algorithm starts by generating an initial population made up of a set of chromosomes. Chromosomes are representations of a solution encoded in a certain manner, a random key in our case, as we explain in the next sections. The chromosomes are evaluated and sorted according to a fitness function. After they have been sorted, the top individuals, called the elite, are combined with others from the population to generate new chromosomes that inherit some of their properties. This process is called crossover. The next generation is composed of the elite chromosomes, some generated by crossover and some more generated by mutation or, as in our case, randomly. The algorithm performs this search for a number of iterations and ends up with an optimised population. In the following sections we detail how we implement the encoding and decoding (Section 5.2.3.1), fitness function (Section 5.2.3.2), crossover (Section 5.2.3.3) and mutation (Section 5.2.3.4).

5.2.3.1 Encoding & decoding

The chromosomes contain, in essence, the input data for the constructive algorithm. This means that we need to represent a shape order (which includes the selection of shapes, as we noted in Section 5.2.1) and the parameter α. Representing a piece order is not new in the packing literature; it is in fact one of the two main strategies used in irregular packing problems (Bennell & Oliveira, 2009). Examples of this technique can be found, for example, in Gomes & Oliveira (2002), Bennell & Song (2010) or Gonçalves & Resende (2013). For our algorithm we have chosen a random-key encoding very similar to the one used in Gonçalves & Resende (2013), since it provides standard crossover and mutation operators, as well as ensuring that the solutions generated are always feasible.

For our algorithm, the encoding is a random key: a vector k = (k1, . . . , km+1) whose elements lie between zero and one, i.e., ki ∈ [0, 1], i = 1, . . . , m + 1. The decoding uses these values to derive the parameters for the constructive algorithm as follows:

• The first m values are used to generate the packing sequence. They are sorted in ascending order and the resulting sorted indices are used to order the pieces, in such a way that if ki is assigned to position j, the shape si will be considered by the constructive algorithm in the jth position.

• The last element is used to decode the parameter α. We define an interval [α−, α+] where α can take values and decode it as:

α = km+1 (α+ − α−) + α−        (5.7)

In our experiments, we chose this interval to be [0.85, 1], as we found that values under 0.85 yielded poor-quality solutions (recall, for instance, the layout generated by α = 0.75 in Figure 5.6).
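As an illustration of the decoding, the sketch below turns a random key into a piece sequence and an α value as in equation (5.7); the names are illustrative and not taken from our code.

    #include <algorithm>
    #include <numeric>
    #include <vector>

    struct Decoded { std::vector<int> sequence; double alpha; };

    // Decode a random key k = (k_1, ..., k_{m+1}): the first m values
    // give the piece order, the last one gives alpha via equation (5.7).
    Decoded decode(const std::vector<double>& key,
                   double alpha_min, double alpha_max)   // e.g. 0.85 and 1
    {
        const int m = static_cast<int>(key.size()) - 1;
        Decoded d;
        d.sequence.resize(m);
        std::iota(d.sequence.begin(), d.sequence.end(), 0);
        // Sorting the indices by their key values in ascending order
        // yields the packing sequence for the constructive algorithm.
        std::sort(d.sequence.begin(), d.sequence.end(),
                  [&](int a, int b) { return key[a] < key[b]; });
        d.alpha = key[m] * (alpha_max - alpha_min) + alpha_min;   // (5.7)
        return d;
    }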

With the value of α and a sequence of pieces ready, the decoding obtains initial values for qi, xi, yi and θi, i = 1, . . . , m using the constructive algorithm. In the last step of the decoding, we apply the local search from Section 5.2.2 to enhance the fitness. Note that these modified values influence the decoded solution and its fitness value. However, the final placements of the pieces are not reflected in the encoded values and therefore will not be transferred during crossover. Still, incorporating this information in the fitness is useful, since we evaluate the potential of a certain encoding to produce a high-quality solution upon decoding.

5.2.3.2 Fitness

A decoded solution is a full solution, so we are able to evaluate the fitness with the measure of overlapping area from equation (5.5). The fitness function allows us to sort the chromosomes by quality and to divide the population into elite and non-elite individuals. In our experiments, the elite is formed by approximately the top 25% of individuals in the population.

5.2.3.3 Crossover

The crossover always takes place between one elite chromosome and another chromosome selected randomly from the rest of the population (so it might be elite as well). When the selection in a random-key genetic algorithm ensures that one of the parents is an elite chromosome, the algorithm is called a biased random-key genetic algorithm (Gonçalves & Resende, 2011). Once two chromosomes have been selected, with keys ke (elite) and kn (non-elite), the random key of the offspring, ko, is generated by copying each element from ke with probability ρe or from kn with probability 1 − ρe.

Once the key ko has been completed, the chromosome can be decoded and its fitness evaluated in the usual manner. It is possible that the crossover of two chromosomes generates an offspring that is identical to one of its parents. This can happen either if the value of ρe is very high or if the parents are very similar. This situation is not beneficial, as the population might start converging towards one where all the chromosomes are very similar, gradually slowing down the search for new solutions. When it happens, we substitute a random element of the key ko with a random number drawn from U[0, 1] and decode the offspring again.
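A minimal sketch of this parametrised uniform crossover, including the duplicate-offspring fix; the function signature is illustrative.

    #include <random>
    #include <vector>

    // BRKGA crossover: each element of the offspring is copied from the
    // elite parent with probability rho_e, otherwise from the other parent.
    std::vector<double> crossover(const std::vector<double>& elite,
                                  const std::vector<double>& other,
                                  double rho_e, std::mt19937& rng)
    {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<double> child(elite.size());
        for (std::size_t i = 0; i < child.size(); ++i)
            child[i] = (u(rng) < rho_e) ? elite[i] : other[i];
        // Duplicate fix: if the offspring equals a parent, perturb one
        // random element so the population does not collapse prematurely.
        if (child == elite || child == other) {
            std::uniform_int_distribution<std::size_t> pos(0, child.size() - 1);
            child[pos(rng)] = u(rng);
        }
        return child;
    }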

5.2.3.4 Mutation

For the mutation we generate chromosomes randomly. Our encoding facilitates this, as we only need to generate a random vector of m + 1 values in U[0, 1], which is then decoded to produce a random solution.

5.3 Implementation and Computational results

In this section we show the results obtained by our methods. First, we test the capabilities of our algorithm by solving some standard nesting instances (Section 5.3.1) for which the optimal solution is known. Then, in Section 5.3.2, we present the results obtained for the original codex problem. All of our algorithms have been implemented in C++ and compiled with the Intel compiler ICC 17.0.0. Our experiments run on one core at 2.6 GHz with 4 GB of memory on the IRIDIS HPC facility. For the geometric algorithms (intersections, area calculations, etc.) we used the geometry package of the Boost C++ libraries (http://www.boost.org/). A more detailed description of how the geometric algorithms work is given in Section 3.1.2.1.

5.3.1 Irregular strip packing instances

In order to test our algorithm, we identified five instances from the two-dimensional irregular packing literature whose optimal solutions are known to have 100% utilisation. These instances were designed to have a perfect match for the given orientation of the pieces, and there is no uncertainty in the shapes. With this in mind, we removed the local search step from our genetic algorithm when solving them, as it was designed precisely to tackle such uncertainty by finding rotation angles that are not edge matchings, something that is not necessary for these instances.

The instances we solved in this category are Dighe1 and Dighe2, originally presented in Dighe & Jakiela (1995), and the instances glass1, glass2 and glass3, from Fischetti & Luzzi (2009). We solved these instances with a population of 50 individuals (12 elite, 25 from crossover and 13 from mutation) and ρe = 0.8. We report the results in Table 5.1 and the layouts in Figures 5.8 and 5.9.

Table 5.1: Results obtained for instances of 2D irregular packing literature

Instance   Container   Pieces   Iterations   Overlap
Dighe 1    100 × 100   16       141          < 0.01%
Dighe 2    100 × 100   10       2            < 0.01%
Glass 1    45 × 45     5        1            < 0.01%
Glass 2    45 × 45     7        1            < 0.01%
Glass 3    100 × 100   9        1            < 0.01%

The algorithm is able to retrieve the full-utilisation solutions in a few iterations. Furthermore, different runs of the algorithm can result in different full-utilisation solutions, rotated by 90 degrees.

5.3.2 Real instance

The instance for the location of Topotitla contains 38 distinct terrains, totalling 54 shapes, which range from 4 to 12 vertices. We used the same parameters for the BRKGA as in the previous section, but now include the local search step. The full parameter list is reported in Table 5.2.

While the genetic algorithm is population-based, after some generations the population tends to converge towards a particular high-quality solution and contains mostly


Figure 5.8: The resulting layout after running the BRKGA for the instances Dighe1 (left) and Dighe2 (right)

Figure 5.9: The resulting layout after running the BRKGA for the instances Glass1 (left), Glass2 (centre) and Glass3 (right)

Table 5.2: Parameters used for solving the Topotitla instance

Genetic algorithm
  Population size                              50
  Elite individuals                            12
  Crossover individuals                        25
  Probability of inheriting from elite parent  0.8
  Number of generations                        100

Local search
  Max. non-improving iterations                500
  Max. number of iterations                    50000
  Max. displacement                            3
  Max. rotation angle                          5

variations of it. Since our aim is to find a range of different high-quality solutions, we performed 100 independent runs of 100 generations each, all starting from a different random seed. The best of these runs achieved a utilisation of 99.59% and the worst 99.13%. In Figure 5.10 we show a box plot that summarises the utilisations found.

Figure 5.10: Summary of the utilisations found by the GA across 100 different runs with 100 generations each

Despite all of the solutions sharing a very high utilisation, they can still differ significantly in their layouts. In Figures 5.11 to 5.15 we show the layouts obtained by the five runs with the highest utilisations.

The results seem satisfactory in terms of achieving high utilisation of the container. However, further analysis would require input from anthropologists or archaeologists, who might be able to rule out or favour some solutions according to their knowledge of the ancient societies or their explorations on site.

5.4 Conclusion and further work

In this work we have used existing and new cutting and packing tools to tackle an open question within the archaeology community. We devised a new type of cutting and packing problem where the two main constraints, containment and overlap, are relaxed.

We proposed a constructive algorithm with a novel objective function, followed by a local search and a biased random-key genetic algorithm that, combined, were able

Figure 5.11: Solution found by the GA with utilisation of 99.5958%

Figure 5.12: Solution found by the GA with utilisation of 99.5911%
Figure 5.13: Solution found by the GA with utilisation of 99.5734%

Figure 5.14: Solution found by the GA with utilisation of 99.5536%
Figure 5.15: Solution found by the GA with utilisation of 99.5355%

to provide successful results for some standard packing problems where the optimal solutions are known. For the location of Topotitla and the 38 terrains we were dealing with, we have found a range of possible layouts with over 99% utilisation. We believe that these findings provide a basis for discussion for archaeologists and anthropologists and are key to informing future research, on site or otherwise, for this settlement. Furthermore, these layouts also serve as further evidence of the correctness of the shape deciphering and location identification carried out in Jorge y Jorge et al. (2011), as the shapes identified in those works can be arranged in layouts that cover the geographical area with small errors.

Chapter 6

Voxel-Based 3D Irregular Packing

Abstract

In this work we address the 3D irregular packing problem where the aim is to place a set of irregular shapes in a container, while minimising the height of the container. This problem and its closely related variants, such as bin packing or knapsack problems, are of great practical importance, especially with the growth of additive manufacturing techniques in recent years; however, they have received little attention in the literature.

We focus on providing an approach that is able to deal efficiently with arbitrary objects with concavities and holes, as often appear in practice. We represent objects using voxels, the three-dimensional equivalent of pixels. In this discretised space we extend the concept of the no-fit polygon to three dimensions. This enables us to provide an integer linear programming formulation for the problem.

In the second part of the work, we investigate metaheuristic approaches that can be used in a practical setting. To this end, we develop constructive and local search approaches and explore different neighbourhoods, some of which allow overlap. These neighbourhoods are the building blocks of Tabu Search and Variable Neighbourhood Search algorithms that prove to be very competitive with the previous literature. We test these approaches on a new benchmark set of instances: some are randomly generated, some are taken from the literature and some represent realistic models from the additive manufacturing area. Our results show that our metaheuristic techniques can solve a wide variety of problems with reasonable quality in efficient times.

Keywords: 3D Irregular packing, Open dimension problem, Voxel, Heuristics


6.1 Introduction

The three-dimensional irregular packing problem consists of the efficient placement of arbitrary three-dimensional objects within a designated volume without overlapping. Efficient placement can entail different objectives, such as maximising the value of the items packed or minimising the volume needed to pack them. As in the lower-dimensional cases, there are many closely related variants of the problem, depending on the objective and the constraints imposed.

In this chapter we examine the strip packing problem. We consider a single box-shaped container, with a fixed width and length but undetermined height, and a collection of smaller irregular items with fixed orientations. The objective is to place all the small items within the container in non-overlapping positions, while minimising its height. According to the typology in Wäscher et al. (2007), this is an irregular three-dimensional open dimension problem.

The chapter is organised as follows. In Section 6.2 we review the relevant literature for this problem. In Section 6.3 we introduce the problem formally and discuss our geometric approach. In Section 6.4 we propose an integer programming model for the problem. Since this model is too complex for practical use, we also investigate metaheuristic approaches. In Section 6.5 we provide the building blocks for developing metaheuristic algorithms, including a constructive algorithm and different local search neighbourhoods. We explore two different approaches, working on packing sequences and working on the layouts from complete solutions. Based on these building blocks, we develop three different packing algorithms that are presented in Section 6.6. Finally, we describe our benchmark instances and discuss the results of our computational experiments in Section 6.7 and conclude in Section 6.8.

6.2 Literature review

Some of the earliest works on three-dimensional irregular shapes were motivated by the application of additive manufacturing. Additive manufacturing is the process of generating physical objects from computer designs, usually by adding layers of a certain material. It includes, among other techniques, 3D printing. If more than one object is printed, packing software is key to ensuring, among other considerations, that the layout generated to print the objects is feasible (i.e., there is no overlap between pieces, and they fit into the printing area). Furthermore, the resulting shape of the packing layout plays an important role in the estimation of the cost and time required for additive manufacturing (Baumers et al., 2013), and therefore a good packing algorithm can result in more cost-effective printing. Related problems such as bin packing are also relevant: for example, in industrial settings where many items have to be printed in batches, including more objects in the same builds might reduce the number of batches needed to print them and hence the overall cost.

With this application in mind, Ikonen et al. (1997) propose a genetic algorithm to pack arbitrary pieces. Later, Dickinson & Knopf (1998) develop a constructive algorithm for the same application that maximises the compactness of the packing at each step. While additive manufacturing is a very common application for irregular packing, there is a wide range of other applications. For example, Egeblad et al. (2010) present a set of heuristics for a specific knapsack problem from the furniture industry, de Korte & Brouwers (2013) look at particle packing and Cagan et al. (1998) show applications in component layout optimisation. Teng et al. (2001) decompose the problem into two stages in order to consider dynamic equilibrium constraints in satellite design optimisation.

While we refer to Chapter 2, and in particular Section 2.4.2, for a review of the state-of-the-art methods for three-dimensional irregular packing, let us recall the four works tackling the open dimension problem that are most relevant to this chapter. The first two (Stoyan et al., 2004, 2005) make use of phi-objects and phi-functions in order to formulate a mathematical model. Despite using phi-objects, these works are restricted to convex (Stoyan et al., 2005) and non-convex (Stoyan et al., 2004) polyhedra. The models constructed cannot be solved to optimality, so approaches to find local optima are proposed.

The other two (Egeblad et al., 2009; Liu et al., 2015) use polygonal meshes and share the peculiarity of being extensions of algorithms for two-dimensional packing. Egeblad et al. (2009) propose a guided local search, where local moves are translations in axis-aligned directions. They provide a procedure to quantify the overlap volume between polytopes, which is at the core of their metaheuristic. Finally, Liu et al. (2015) propose a constructive algorithm built around the idea of minimising the potential energy of the packed pieces. This is achieved by an advance-or-retreat method that allows pieces to move and rotate by a finite set of angles. The authors hybridise this constructive algorithm with simulated annealing to further improve packing quality.

6.3 Voxelised three-dimensional packing

In this section we introduce formally the optimisation problem we are solving, based on a discrete voxel space. We also discuss geometric considerations and introduce tools for handling the non-overlap constraints.

6.3.1 Problem description

We have a set of n small items I = {p1, p2, . . . , pn} (the pieces) that need to be placed inside a large item C (the container). The container has a rectangular base and a variable height (z-coordinate), and our objective is to find a set of n placements L = {l1, l2, . . . , ln} ⊂ Z³ for all the pieces in I, ensuring that no two pieces overlap and that all pieces lie within the container, while minimising the container height. The orientation of the pieces is fixed, i.e. rotation is not allowed. As mentioned earlier, this problem is classified in the typology of Wäscher et al. (2007) as a three-dimensional irregular open dimension problem. The equivalent two-dimensional problem is usually called the Irregular Strip Packing Problem, or Nesting, in the literature.

6.3.2 Voxel representation

In order to represent the irregular pieces from I, we discretise their bounding boxes into small cubes called voxels and use a binary code to distinguish which ones are part of the piece and which ones are not. Having a rectangular shape (the bounding box) as a support enables us to encode this information as a binary matrix. For each piece, the voxel in the corner of its bounding box corresponding to the origin of its local coordinate system is called the reference point. To locate the pieces in the container, it is sufficient to determine the positions of their reference points. In Figure 6.1 we show an example of a piece and its reference point.
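A minimal sketch of such a voxelised piece, assuming a dense binary occupancy grid over the piece's bounding box; the type and field names are illustrative, not the thesis implementation.

    #include <cstdint>
    #include <vector>

    struct VoxelPiece {
        int nx, ny, nz;                 // bounding-box size in voxels
        std::vector<std::uint8_t> occ;  // occ[(k*ny + j)*nx + i] == 1 if solid

        // The reference voxel is (0, 0, 0) of this local coordinate system.
        bool at(int i, int j, int k) const {
            return occ[(static_cast<std::size_t>(k) * ny + j) * nx + i] != 0;
        }
    };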

Figure 6.1: Example of an irregular piece represented by voxels with its reference voxel highlighted

6.3.3 Constraint handling

There are two main constraints for the strip packing problem: containment and overlap. In this case, we also add the constraint of fixed piece orientation. Since the containers we use are rectangular, the containment constraints are trivial. The reference point of each piece has maximum permitted x and y positions and also a maximum z, since we limit the open dimension within the algorithm. These maximum values correspond to the size of the container in that dimension minus the length of the piece along that dimension. Due to its similarity with the inner-fit rectangle of Moura & Oliveira (2003), we denote the positions that lie inside the container for the piece pi ∈ I as the set IFVpi,C, the inner-fit voxel of the piece pi and the container C.

To test the overlap between pieces, we define a similar tool: the no-fit voxel. In general, the discretised approach allows us to perform very quick intersection tests to identify overlap between pieces. Before detailing the no-fit voxel, let us first describe a simple test.

Given two pieces p, q ∈ I, with their reference points located at the points (pi, pj, pk) and (qi, qj, qk) respectively, we say that they are in a non-overlapping position if either:

• Their bounding boxes do not intersect, or

• in the intersection of their bounding boxes, no two voxels from the pieces coincide in the same position.

The second condition can be tested simply by checking the corresponding elements of the matrices that represent the voxels of the two pieces, stopping at the first coincidence. Testing overlap in this order leads to an efficient implementation: if the pieces are far apart from each other (which is true for many pairs of pieces in a typical packing layout), the first condition, which is very simple to test, is sufficient to rule out overlap between them.
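The two-stage test can be written compactly; the sketch below reuses the illustrative VoxelPiece type from the previous snippet and is an assumption about the implementation rather than a copy of it.

    #include <algorithm>

    // Two-stage overlap test: bounding boxes first, then voxel-by-voxel
    // over the intersection of the boxes, stopping at the first coincidence.
    bool overlaps(const VoxelPiece& p, int px, int py, int pz,
                  const VoxelPiece& q, int qx, int qy, int qz)
    {
        // Intersection of the two bounding boxes in container coordinates.
        int x0 = std::max(px, qx), x1 = std::min(px + p.nx, qx + q.nx);
        int y0 = std::max(py, qy), y1 = std::min(py + p.ny, qy + q.ny);
        int z0 = std::max(pz, qz), z1 = std::min(pz + p.nz, qz + q.nz);
        if (x0 >= x1 || y0 >= y1 || z0 >= z1) return false;  // boxes apart

        for (int k = z0; k < z1; ++k)
            for (int j = y0; j < y1; ++j)
                for (int i = x0; i < x1; ++i)
                    if (p.at(i - px, j - py, k - pz) &&
                        q.at(i - qx, j - qy, k - qz))
                        return true;   // first coincidence is enough
        return false;
    }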

Building on this idea, it is trivial to see that if (pi, pj, pk) and (qi, qj, qk) lead to feasible positions for p and q, then for any (αi, αj, αk) ∈ Z³, (pi + αi, pj + αj, pk + αk) and (qi + αi, qj + αj, qk + αk) also lead to feasible positions for p and q. In other words, a non-overlapping position is maintained as long as the relative position of the pieces does not change. Following this idea, we define the no-fit voxel for two pieces p and q as the set of overlapping relative positions of p and q. More formally, if we denote the position of p by lp and the position of q by lq,

Definition 6.1. The no-fit voxel of p and q is a set NFVp,q ⊂ Z³ with the property that, if lp = (0, 0, 0) and lq ∈ NFVp,q, then p and q intersect.

In Figure 6.2 we illustrate the no-fit voxel of two irregular pieces.

If the no-fit voxel of two pieces is known, it can be used to determine whether the two pieces overlap. However, in Definition 6.1 we required the location of p to be fixed at the origin. Let us now consider the case where p is located at an arbitrary point (px, py, pz)

Figure 6.2: Two arbitrary pieces, p and q and their no-fit voxel NFVp,q

and q is located at (qx, qy, qz). We can still use the no-fit voxel to test their overlap: they intersect if and only if (qx − px, qy − py, qz − pz) ∈ NFVp,q.

For simplicity, we introduce an alternative notation for this case. We write NFVp,q(px, py, pz) for the set of points where q would overlap with p if p were located at (px, py, pz); that is, NFVp,q translated by (px, py, pz). With this notation, p and q overlap if and only if (qx, qy, qz) ∈ NFVp,q(px, py, pz).

Note that, if NFVp,q is known, it is trivial to calculate NFVq,p using the following property:

NFVq,p = {−a : a ∈ NFVp,q}        (6.1)
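As a small illustration of property (6.1), the no-fit voxel of (q, p) is obtained by negating every stored point of NFVp,q; the container types are illustrative.

    #include <array>
    #include <vector>

    using Point3 = std::array<int, 3>;

    // Equation (6.1): NFV_{q,p} = { -a : a in NFV_{p,q} }, so only one
    // no-fit voxel per pair of piece types needs to be computed and stored.
    std::vector<Point3> mirror_nfv(const std::vector<Point3>& nfv_pq)
    {
        std::vector<Point3> nfv_qp;
        nfv_qp.reserve(nfv_pq.size());
        for (const auto& a : nfv_pq)
            nfv_qp.push_back({-a[0], -a[1], -a[2]});
        return nfv_qp;
    }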

The no-fit voxel can be seen as an adaptation of the no-fit polygon to a discretised three-dimensional space. The no-fit polygon (Art, 1966) is a concept that has been used consistently in 2D irregular packing problems since it was first developed. Informally, the no-fit polygon can be described as the set of points that would lead to overlap if one piece were located at the origin and the other at one of the points of the no-fit polygon; alternatively, it is the polygon generated by sliding one piece around the other, always in contact but never overlapping. A more in-depth definition and a review of various algorithms for its calculation can be found in Bennell & Oliveira (2008). It is worth noting that no-fit polygons have a close relationship with Minkowski sums: for two arbitrary polygons A and B, it can be shown that A ⊕ −B = NFPA,B (see for example Milenkovic et al. (1992) or Bennell et al. (2000)).

The no-fit voxels of an instance can be pre-calculated before the packing algorithm runs, checking the positions in which the bounding boxes of each pair of pieces intersect. To compute the no-fit voxel of two pieces p and q, NFVp,q, we simply keep p fixed at the origin and perform an overlap test between p and q in all of their possible overlapping positions (i.e. the positions where the bounding boxes of p and q overlap). When a test is positive, the point is added to NFVp,q, and the process continues until all points have been tested. Thanks to the property in equation (6.1), the calculation only needs to be done once for each pair of piece types. Once the no-fit voxels are available, an overlap check is merely a matter of checking whether a reference point belongs to a set, as long as the orientation of the pieces remains unchanged.
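A minimal sketch of this pre-computation, reusing the illustrative VoxelPiece and overlaps() from the earlier snippets: p stays at the origin while q's reference point sweeps every relative position at which the two bounding boxes can intersect.

    // Pre-computation of NFV_{p,q} by exhaustive voxel overlap tests.
    std::vector<Point3> build_nfv(const VoxelPiece& p, const VoxelPiece& q)
    {
        std::vector<Point3> nfv;
        for (int k = -q.nz + 1; k < p.nz; ++k)
            for (int j = -q.ny + 1; j < p.ny; ++j)
                for (int i = -q.nx + 1; i < p.nx; ++i)
                    if (overlaps(p, 0, 0, 0, q, i, j, k))
                        nfv.push_back({i, j, k});
        return nfv;   // in practice stored in a hash set for O(1) look-ups
    }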

6.4 ILP formulation

In this section we describe an integer linear programming model for the problem stated in Section 6.3.1. The model uses binary variables to determine the position of the pieces and defines constraints based on the no-fit voxel information to ensure non-overlapping.

For each piece p, we define a region R(p) ⊆ IFVp,C where it can be placed. This region is a discrete set of voxels that are identified in the model by the following binary variables:

xpijk = 1 if p is placed at (i, j, k), and xpijk = 0 otherwise        (6.2)

We mean ‘placed’ in the sense that, if xpijk is set to 1, the reference point of the piece p is located at (i, j, k). As mentioned earlier, the reference point is chosen to be the corner of the bounding box with the smallest coordinates.

The full model is as follows:

minimise H, (6.3)

subject to

Σ(i,j,k)∈R(p) xpijk (hp + k) ≤ H        ∀ p ∈ I        (6.4)

Σ(i,j,k)∈R(p) xpijk = 1        ∀ p ∈ I        (6.5)

xpijk + Σ(l,m,n)∈NFVp,q(i,j,k)∩R(q) xqlmn ≤ 1        ∀ p, q ∈ I, p ≠ q, ∀ (i, j, k) ∈ R(p)        (6.6)

H ∈ N (6.7)

xpijk ∈ {0, 1} ∀p ∈ I, ∀(i, j, k) ∈ R(p) (6.8)

By hp we denote the height of piece p. The first constraint (6.4) defines the value of the variable H, which is the total height of the container in voxels. Equation (6.5) makes sure that all pieces are placed. Finally, constraint (6.6) uses the no-fit voxel information of each pair of pieces to ensure that they do not overlap. The key idea is that, for each position (i, j, k) where p can be placed, the terms in the sum represent all the points where pieces p and q would overlap. In principle, these would be the points of NFVp,q(i, j, k), but we concern ourselves only with those that are also part of R(q), as q cannot be placed outside of it. Importantly, the summation term is itself bounded by one (because of constraints (6.5)), and hence the constraint is valid also in the case that xpijk = 0. Note that we do not need to add constraints ensuring the containment of the pieces, as we define the regions R(p) in a way that prevents any violation.

This model is an extension of the Dotted Board Model from Toledo et al. (2013) for the two-dimensional problem and, to the best of our knowledge, it is the first integer linear programming model available for the discretised irregular strip packing problem in three dimensions.

If we define the regions R(p) to be R(p) = IFVp,C for all pieces, the optimal solution of the model is the optimal solution of the discretised packing problem. Unfortunately, solving such a model requires great computational effort and is impractical for reasonably sized instances.

However, we can solve a relaxation of the model by reducing the number of variables. We do this by defining smaller regions within which each piece may move (perturbations of a partial solution, for example). This technique, called compaction or separation depending on the objective, has been used successfully in the 2D irregular packing literature in conjunction with metaheuristics (see for example Bennell & Dowsland (2001) or Gomes & Oliveira (2006)).

With this model, compaction consists in, given an initial solution, defining a region R(p) for each piece p that is substantially smaller than IFVp,C. There is a parallelism between the choice of size and shape of these boxes and the neighbourhoods of metaheuristic algorithms; we refer to Section 6.5.3 for a discussion of this. With the boxes defined, the next step is to solve the model (6.3)-(6.8) to optimality. If the height is reduced, the procedure has been successful. At this point, new regions R(p) can be defined, with the aim of further reducing the overall height H.

When used as a separation procedure, the initial solution must be infeasible (contain overlap). After defining the regions R(p), the model can be altered by removing constraints (6.4) and the objective function, as the goal is simply to find a feasible solution, disregarding the final height. This separation procedure is used as one of our neighbourhoods in the Variable Neighbourhood Search presented in Section 6.6.3.

6.5 Building blocks of the 3D packing heuristics

In this section we examine a number of components that are the building blocks of the metaheuristic algorithms presented in Section 6.6. We first describe a constructive procedure to generate initial solutions. Since this algorithm works with both placement rules and piece sequences, we introduce sequence-based neighbourhoods to exploit this fact. In the remainder of the section we cover the approaches needed to develop algorithms that work with complete solutions, potentially handling overlap. We describe neighbourhoods that work on the layout of complete solutions, a strategic oscillation technique and a new overlap-related objective function. The two approaches that we investigate align with the strategies presented in Bennell & Oliveira (2009) for 2D irregular packing: working with partial solutions (sequences) and with complete solutions (layouts).

6.5.1 Constructive algorithm

We propose a constructive algorithm (CA) based on a bottom-left-back strategy. The idea is to place the pieces one at a time, ensuring each placement minimises first the z coordinate, then x and then y. For a packing problem with a collection of items I = {p1, p2, . . . , pn} and a container C, let S = {s1, s2, . . . , sn} be an ordered sequence of the pieces to be placed. This sequence could be a random permutation of I, or a specific sorting of the pieces, for example by decreasing volume.

The algorithm starts by placing the first piece in the sequence, s1, at location l1 = (0, 0, 0). This placement is always feasible because of our definition of the reference points and the coordinate system of the container. Then, the ith piece of the sequence, si, is placed at li = (x, y, z) such that the piece lies within the container, i.e. li ∈ IFVsi,C, and does not overlap with any previously placed piece, i.e. li ∉ NFVsj,si(lj), ∀j < i. To find the placement li, the algorithm tests points from IFVsi,C until it reaches a valid placement. If a piece of the same type has been placed earlier, this information can be used to skip the points of IFVsi,C where that piece had already been tested, as they cannot yield a valid position either.

The points are evaluated starting from the lowest z values, followed by the lowest x and the lowest y, so the first valid position found is the bottom-left-back-most possible position. This step can easily be modified so that the x or y positions are sought in reverse order (i.e. from highest value to lowest); in that case the rule would be called ‘right’ instead of ‘left’, or ‘front’ instead of ‘back’. In fact, each piece in the sequence can be packed using a different rule. To acknowledge this, we introduce the notation R = {ri} for a set of rules, where ri represents one of the following: ‘left-back’, ‘left-front’, ‘right-back’ or ‘right-front’. A solution of the algorithm is then given by L = CA(S, R), where the pieces have been placed in the order determined by the sequence S and with the corresponding rules from R.
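A sketch of the bottom-left-back rule under the same assumptions as the earlier voxel snippets; for brevity it tests overlap voxel-wise with overlaps() rather than via the pre-computed no-fit voxels that the text describes.

    #include <array>
    #include <optional>
    #include <utility>
    #include <vector>

    // 'placed' pairs each already-placed piece with its reference point;
    // W and D are the container width and depth, maxH the current height cap.
    std::optional<std::array<int, 3>> place_blb(
        const VoxelPiece& pc,
        const std::vector<std::pair<const VoxelPiece*,
                                    std::array<int, 3>>>& placed,
        int W, int D, int maxH)
    {
        // Scan z first, then x, then y, over the inner-fit voxel of the
        // piece: the first feasible point is the bottom-left-back-most one.
        for (int z = 0; z + pc.nz <= maxH; ++z)
            for (int x = 0; x + pc.nx <= W; ++x)
                for (int y = 0; y + pc.ny <= D; ++y) {
                    bool ok = true;
                    for (const auto& [other, at] : placed)
                        if (overlaps(pc, x, y, z, *other, at[0], at[1], at[2])) {
                            ok = false;
                            break;
                        }
                    if (ok) return std::array<int, 3>{x, y, z};
                }
        return std::nullopt;   // no feasible position under the height cap
    }

The ‘right’ and ‘front’ variants of the rule correspond to reversing the scan direction of the x or y loop.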

While the algorithm is deterministic, different sequences S and different placement rules R can result in different packing layouts. This opens the possibility of searching over the sequence of pieces to be placed. We explore this possibility in the next section with the sequence-based neighbourhoods.

6.5.2 Sequence-based neighbourhoods

Searching over the sequence of pieces has been a common topic of research in two-dimensional irregular packing problems; see for example Gomes & Oliveira (2002) for a 2-exchange heuristic searching over the sequence of pieces given to the constructive procedure, or Dowsland et al. (1998) and Abeysooriya et al. (2018) for a technique called Jostle that constructs solutions iteratively, changing the placement sequence used by the constructive algorithm based on information extracted from the layout generated in the previous step.

A sequence-based neighbourhood of the solution L is a set of solutions N(L) obtained by running the constructive algorithm with similar, but not identical, parameters. We identify the following neighbourhoods:

Sequence swap neighbourhood
This neighbourhood consists of the solutions obtained by swapping two pieces of different types in the sequence S. If L = CA(S, R) and S = {s1, . . . , si, . . . , sj, . . . , sn}, i < j, then

NS(L) = { CA(S′, R) : S′ = {s1, . . . , sj, . . . , si, . . . , sn}, ∀ si, sj ∈ S, si ≠ sj, i < j }        (6.9)

Rule change neighbourhood
The rule change neighbourhood consists of changing the rule of one piece in the packing sequence. More formally, if L = CA(S, R), the neighbourhood is defined by

NR(L) = { CA(S, R′) : R′ = {r1, . . . , ri−1, r′i, ri+1, . . . , rn}, r′i ∈ R \ {ri} }        (6.10)

These two neighbourhoods effectively create different construction heuristics, and this space of solutions can be searched algorithmically. This kind of search is also referred to as hyper-heuristics; see for example Burke et al. (2010c) or Terashima-Marín et al. (2010) for examples in 2D irregular packing. In Section 6.6.1 we describe a procedure to exploit this space of solutions with a simple metaheuristic algorithm.

6.5.3 Layout-based neighbourhoods

A layout-based neighbourhood of a solution L is a set of solutions N(L) that are reachable from L by applying a perturbation or move. This move no longer acts on the sequence or the set of rules, but changes the position of one or more pieces in the packing layout. As opposed to the sequence-based neighbourhoods, the solutions in these neighbourhoods might not always be feasible. We distinguish two types of layout-based neighbourhoods. The first type involves moving a piece to a nearby position; these are called single piece neighbourhoods. The second type involves more dramatic changes, such as swapping the positions of two pieces in the layout; these are called full solution neighbourhoods. The reason for this distinction is that the single piece neighbourhoods are usually quicker to evaluate and provide solutions of similar quality to the current one, since only one piece is moved, whereas the full solution neighbourhoods are necessary to create disruptions in the solution that help to escape local optima.

Axis aligned direction neighbourhood
The axis aligned neighbourhood of a piece p, N^δ_axis(L, p), includes all the solutions of the form L = {l1, l2, . . . , ln} where the reference point of the piece p, lp, is substituted by another point of IFVp,C that is reachable by translating lp by a fixed number of voxels between 0 and δ in an axis aligned direction. More formally,

N^δ_axis(L, p) = { {l1, . . . , lp−1, l′p, lp+1, . . . } : l′p ∈ ( ⋃ d=−δ..δ ⋃ i=1..3 {lp + d ei} ) ∩ IFVp,C }        (6.11)

where ei denotes the vectors of the standard basis in R³. The parameter δ determines the number of voxels the reference point can be displaced. The special case where we consider all the points within the container in any axis aligned direction is denoted by δ = δmax. See Figure 6.3 for an illustration of this neighbourhood.

Figure 6.3: Neighbourhood of axis aligned directions, δ = 1 (left) and δ > 1 (right). The red point is the original reference point lp and the grey points are the possible reference points in the neighbourhood.

Note that the neighbours of a feasible solution might be infeasible with respect to the overlap constraint, but they will not violate the containment constraint, as we impose that pieces can only move within their inner-fit voxel.

Enclosing cube neighbourhood
The enclosing cube neighbourhood contains all the valid points in a cube centred at the reference point and with sides of 2δ + 1 voxels, as shown in Figure 6.4. More formally,

N^δ_cube(L, p) = { {l1, . . . , lp−1, l′p, lp+1, . . . } : l′p ∈ ( ⋃ d∈{−δ,...,δ}³ {lp + d} ) ∩ IFVp,C }        (6.12)

Figure 6.4: Neighbourhood of enclosing cube, δ = 1. The grey points are the reference points in the neighbours, while the original reference point is located in the centre of the cube.

For a large enough δ, this is equivalent to the complete inner-fit voxel of the piece; in this case, the neighbourhood can be seen as an insert move. We denote this situation by δ = δmax.

Piece swap neighbourhood
The swap neighbourhood of a piece is a full solution neighbourhood. It represents all the possible swaps the piece can make with others. Some of the swaps might violate the containment constraint, since the swapped pieces could end up with parts of them outside the container. If this happens, the piece is moved inwards along the axes (one at a time) until the point is in its inner-fit voxel. This position, denoted by l_p^(q) (the resulting position of p after swapping with q), is defined as follows:

l_p^(q) = arg min l′∈IFVp,C ‖lq − l′‖₁        ∀ p, q ∈ I; p ≠ q        (6.13)

Then, the neighbourhood contains all the possible swaps of a piece p with others:

Nswap(L, p) = { {. . . , lp−1, l_p^(q), lp+1, . . . , lq−1, l_q^(p), lq+1, . . . } : q ∈ I, q ≠ p }        (6.14)

6.5.4 Strategic Oscillation

If we use layout-based neighbourhoods in our search algorithms, as we explain in more detail in Section 6.6, intermediate solutions might contain overlap. This effectively makes the problem bi-objective, since we want both to minimise the height and to ensure that there is no overlap. It is clear that solutions containing overlap are of no use, regardless of their height; therefore, our algorithms focus primarily on completely removing the overlap, while the height is changed at certain steps during the course of the algorithm, depending on the solution quality. This is a common approach in algorithms that allow overlap in intermediate solutions, such as Bennell & Dowsland (1999), Umetani et al. (2009), Imamichi et al. (2009) or Egeblad et al. (2009). The technique is called strategic oscillation and appears often in other problems, frequently in conjunction with tabu search. It consists in defining a critical level around which the algorithm is forced to oscillate (Glover & Marti, 2011). We define the critical level naturally as the feasibility of a solution. This enables the algorithm to work with infeasible solutions while the height is optimised. In each iteration, we attempt to remove the overlap and, after that, the height is modified. If the solution still has overlap, the height is increased by one voxel and the algorithm tries to resolve the overlap with more space. If it succeeds, the height is reduced by a certain percentage (10% in our computational experiments) and the algorithm is run again, with the aim of finding non-overlapping positions at a lower height. As a result, the algorithm oscillates between feasible and infeasible solutions. Since the reduction of height leaves some pieces protruding above the top of the container, their positions need to be adjusted. This is done by removing these pieces from the layout and inserting them again, one at a time and in a random order, in the best position within the reduced layout. We find this position by evaluating a certain objective function (discussed in Section 6.5.5) with the piece placed at each of the points of its inner-fit voxel. The processing time of reinserting pieces depends strongly on the resolution of the instance and can be computationally costly. For low-resolution instances the time is reasonable, since the height is not reduced very often within one run of the algorithm and there are fewer points to test. For high-resolution instances, however, we propose to simply insert the protruding pieces at random positions within the new layout. While this is not as beneficial as the previous option, it still disrupts the layout enough to search for new solutions and, we have found, offers a good trade-off between computational effort and quality.

6.5.5 Objective function

While our overall objective is to minimise the height of the final layout, if we search over the layout and use strategic oscillation, intermediate solutions will contain overlap. For this case, we propose an objective function whose main goal is to establish when one solution is better than another in terms of leading to a non-overlapping solution. In the nesting literature, typical objective functions include the penetration depth of a pair of pieces (Bennell & Dowsland, 1999; Umetani et al., 2009; Imamichi et al., 2009) or the exact amount of overlap (area in 2D or volume in 3D) for each pair of pieces (Egeblad et al., 2009), among others. Our objective function is calculated from the overlap volumes of the bounding boxes:

F(I, l1, l2, . . . , ln) = 1 − (1 / (Vmax (n choose 2))) Σ i,j∈I, i<j Vol(B(i, li) ∩ B(j, lj))        (6.15)

where B(i, li) denotes the bounding box of piece i placed at position li.

The constant Vmax = max i∈I Vol(B(i)) is the maximum volume of any bounding box present in the instance and is used to scale the function value between 0 and 1. Dividing the sum by (n choose 2), the number of pairs of pieces in the solution, averages the overlap over all pairs.

This function is very quick to evaluate and provides an intuitive idea of how difficult a pair of overlapping pieces is to separate.
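For concreteness, a small self-contained sketch of (6.15); the Box type and field names are illustrative.

    #include <algorithm>
    #include <vector>

    struct Box { int x, y, z, nx, ny, nz; };   // origin and size in voxels

    // Overlap volume of two axis-aligned boxes (0 if they do not meet).
    double box_overlap(const Box& a, const Box& b) {
        int dx = std::min(a.x + a.nx, b.x + b.nx) - std::max(a.x, b.x);
        int dy = std::min(a.y + a.ny, b.y + b.ny) - std::max(a.y, b.y);
        int dz = std::min(a.z + a.nz, b.z + b.nz) - std::max(a.z, b.z);
        if (dx <= 0 || dy <= 0 || dz <= 0) return 0.0;
        return double(dx) * dy * dz;
    }

    // Objective (6.15): 1 minus the averaged, scaled pairwise overlap.
    double F(const std::vector<Box>& boxes, double v_max) {
        const std::size_t n = boxes.size();
        double total = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = i + 1; j < n; ++j)
                total += box_overlap(boxes[i], boxes[j]);
        const double pairs = 0.5 * n * (n - 1);   // n choose 2
        return 1.0 - total / (v_max * pairs);
    }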

6.6 Search algorithms

Using these building blocks, we propose three different metaheuristic algorithms. The first, an iterated local search, is based on searching over the piece sequence and the placement rules of the constructive algorithm. The remaining two, an iterated tabu search and a variable neighbourhood search, are based on searching over the layout of complete solutions. All three algorithms share the need for an initial solution that is calculated with our constructive algorithm (Section 6.5.1) as L0 = CA(S0,R0), where S0 is the sequence of pieces from the instance ordered by decreasing volume and R0 is the set of rules that assigns ‘left-back’ to all the pieces.

6.6.1 Iterated local search

Iterated local search is a metaheuristic algorithm that applies local search in every iteration to find a local optimum and then moves to a different part of the solution space by performing a shake, before applying local search again (Lourenço et al., 2010). We apply this algorithm to take advantage of the different rules that can be used in our constructive algorithm. For the local search we use the rule change neighbourhood,

Nrc(L), which involves changing one of the two placement rules (left/right or back/front) of a piece and re-packing with the constructive algorithm. Since this change is performed on one piece at a time, the neighbourhood has a reasonable size and can be explored in full. Furthermore, since the pieces are placed in order, only the pieces after the one whose placement changes need to be packed again, making the search more efficient.

After all the neighbours of the current solution have been explored, the one with the best objective (lowest height) is used as the next current solution. If none of the neighbours can improve the best known solution, the local search stops and a shake is performed. The shake consists in changing the order of the pieces and assigning random placement rules to them, in a multi-start fashion. The algorithm steps are detailed in Algorithm 6.

Algorithm 6 Iterated Local Search

 1: Lstart = {l1, l2, . . . }
 2: L* = Llocal = Lstart
 3: fitness* = evaluate_fitness(Lstart)
 4: for i = 1 to MAX_ITERATIONS do
 5:     f = f* = 0
 6:     for L′ ∈ Nrc(Lstart) do                 ▷ Test all neighbouring solutions L′
 7:         f = fitness_change(Lstart, L′)
 8:         if f > f* then
 9:             f* = f
10:             Llocal = L′
11:         end if
12:     end for
13:     if f* > fitness* then
14:         fitness* = f*
15:         L* = Llocal
16:     end if
17:     Lstart = shake(Lstart)
18: end for

While searching over the sequence can quickly produce efficient layouts, the solution space does not contain all possible solutions and some good layouts might be unreachable using these techniques. To overcome this, we propose two other algorithms that work on the layout of full solutions.

6.6.2 Iterated Tabu Search

Tabu search (Glover, 1989) is a widely used metaheuristic that has been successfully applied to various combinatorial optimisation problems, including irregular packing (Blazewicz et al., 1993; Bennell & Dowsland, 1999). The basic idea behind it is to test the full neighbourhood of a solution and move to the best neighbouring solution, whether or not it improves on the current one. Once the move is made, it is added to a tabu list so that it cannot be reversed for a certain number of moves. Using tabu search to search for non-overlapping positions is very appealing, since the voxel representation means that the neighbourhoods of the pieces are finite and can be evaluated in full with low computational effort. Maintaining a tabu list is an effective way to prevent pieces from moving back and forth between the same positions.

The pseudo-code in Algorithm 7 describes the tabu search:

Algorithm 7 Iterated Tabu Search
 1: L* = {l1, l2, . . . }
 2: fitness* = evaluate_fitness(l1, l2, . . . )
 3: T = ∅                                       ▷ Tabu list
 4: for i = 1 to MAX_ITERATIONS do
 5:     Choose randomly a piece p from the overlapping pieces
 6:     Define N(p) as the set of neighbouring points of p
 7:     f = f* = 0
 8:     for l ∈ N(p) such that l ∉ T do         ▷ Test all valid neighbours
 9:         f = fitness_change(fitness, p, l, l1, l2, . . . )
10:         if f > f* then
11:             f* = f
12:             l* = l
13:         end if
14:     end for
15:     Add to the beginning of the tabu list the opposite move of (p, lp, l*)
16:     Remove the last element of the list if it exceeds TABU_SIZE
17:     lp = l*
18:     if f* > fitness* then
19:         fitness* = f*
20:         L* = {l1, l2, . . . }
21:     end if
22:     if fitness* == 1 then                   ▷ Overlap resolved
23:         exit
24:     end if
25: end for

A relevant detail of the implementation is that, after each movement performed by the tabu search, it is not necessary to re-evaluate the objective function from equation (6.15) in full. Instead, it is sufficient to perform an update that evaluates the impact of the change on the solution quality. This test is very quick, since it only involves recalculating the overlap terms for the piece that has moved. The efficiency of this step allows a large number of movements to be tested in reasonable computational time.

Based on its performance in our experiments, we have chosen the axis aligned neighbourhood with δmax, which is quick to evaluate but can still provide long-distance movements, which are necessary to find dense layouts. When a movement is performed, the opposite direction of the movement for that piece is added to the tabu list; for example, if a piece moves to the right, the algorithm will forbid it from moving to the left for a number of iterations. In order to choose the parameters of the algorithm, we performed some informal testing, which yielded good results for a tabu list of length 10. Intuitively, a shorter list would not be enough to prevent pieces from cycling between the same positions, but a longer list could be too restrictive and, particularly when only a few pieces overlap, could result in them being ‘trapped’ against the sides of the container with no options to move. This is the value we use in our computational experiments.

In order to control the height, the Tabu Search runs within the strategic oscillation scheme. This means that it acts on a given solution for a fixed number of movements (or until it removes all overlap) and then performs the oscillation. Again, from our informal testing, 10^5 movements seemed a reasonable compromise between quality and speed, and this is the value used in our computational experiments. Since tabu search is implemented together with the iterative procedure of strategic oscillation, we call the full algorithm Iterated Tabu Search.

6.6.3 Variable Neighbourhood Search

Having identified a series of neighbourhoods, restricting the Iterated Tabu Search procedure to only one of them seems limiting. To overcome this we have implemented a Variable Neighbourhood Search (VNS) algorithm. This metaheuristic, introduced by Mladenović & Hansen (1997), makes use of different neighbourhoods in turn, under the assumption that a local optimum for one neighbourhood might not be a local optimum in another. VNS exhaustively explores a neighbourhood to find the best neighbouring solution and moves to it. In our implementation, we apply a steepest descent rule, so all the neighbours are tested before moving to the best improving solution. If the current solution has no improving neighbour, it is labelled a local optimum for the current neighbourhood. When this happens, the algorithm switches to the next neighbourhood in the list and searches it exhaustively again. If an improvement is found, the incumbent solution moves to it and the search starts again from the first neighbourhood. When a solution is a local optimum for all the neighbourhoods, the algorithm applies a shake (more on this later) and returns to the first neighbourhood in the sequence. After a certain number of shakes (set to 100 in our experiments), strategic oscillation is applied to increase or decrease the maximum allowed height and the process is repeated.

We have implemented VNS with three neighbourhoods that are searched in this order:

• Enclosing cube neighbourhood, N^1_cube

• Axis aligned neighbourhood, N^∞_axis

• ILP model, with the regions R(p) defined as N^δ_cube, where δ = 4 if piece p overlaps with other pieces and δ = 1 otherwise.

The first two neighbourhoods are explored for all the overlapping pieces. We have chosen to use the cube neighbourhood with a small δ first, as it can provide very similar solutions by moving only one piece by one voxel. The axis aligned neighbourhood has the capacity to reach points further away and can often provide more dissimilar solutions, especially at early stages of the algorithm. If these neighbourhoods cannot improve the solution, we run the ILP model (6.3)-(6.8). To limit the computational time spent running the model, in our experiments we restrict its usage to at most once per strategic oscillation and only in layouts with at most one pair of overlapping pieces.

Once we have hit a local optimum for all the neighbourhoods, and before oscillating, we disrupt the solution with the aim of exploring a different part of the solution space. This disruption, also called a shake, is performed in two steps: a shaking of the layout and a piece swap.

To perform the shaking, we move all overlapping pieces randomly within a neighbourhood $N^{\delta}_{cube}$. The parameter $\delta$ of the neighbourhood increases linearly with the number of disruptions performed so far. It varies between a minimum value of $\delta_{min} = 1$, which is the size of the starting neighbourhood, and a maximum value of $\delta_{max} = 3$. The actual parameter is calculated as $\delta = \max\left(\delta_{min}, \lfloor \rho\,(\delta_{max} + 1) \rfloor\right)$, where $\rho$ is the ratio between the shakes performed and the maximum number of shakes allowed for the current height.

The second step of the shake consists of swapping two overlapping pieces. To avoid the costly procedure of testing all possible swaps, we limit the number of tests: we test the swap of each overlapping piece with k other pieces, chosen at random. Swaps involving pieces that do not overlap, or pieces of the same type, are avoided. In total we test between 1 and kn swaps, and perform the one that returns the best objective function value, even if it does not improve the incumbent. From our experiments, we found that k = 3 gave a reasonable trade-off between speed and quality, and this is the value we used in our computational experiments.

These two steps complement each other. The shaking part creates many overlapping positions and enables the search algorithm to explore movements of pieces that might otherwise stay in non-overlapping positions that are not necessarily beneficial for the layout. The swap part produces a larger change in the layout, which would be difficult to obtain by moving pieces one at a time. Note that applying the movement of the pieces first creates many overlaps, so the swap of the pieces has a better opportunity to succeed than if these two steps were applied in the opposite order. During the progress of the VNS we find many local optima that contain overlap for the same height; however, the shake is only applied to the best of those layouts. Since the shake involves swapping two pieces, some of the swaps may not be beneficial in the long term, and always applying shakes to the best current solution is a way of backtracking to a better solution obtained before those swaps were performed.
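As a small sketch of the shake-size schedule, the following reads the formula as a max, so that $\delta$ can actually grow from $\delta_{min}$ towards $\delta_{max}$ as shakes accumulate; the function name is illustrative.

#include <algorithm>
#include <cmath>

// delta grows linearly with the fraction of shakes already performed at
// the current height, between deltaMin = 1 and deltaMax = 3.
int shakeDelta(int shakesDone, int maxShakes) {
    const int deltaMin = 1, deltaMax = 3;
    const double rho = static_cast<double>(shakesDone) / maxShakes;
    return std::max(deltaMin,
                    static_cast<int>(std::floor(rho * (deltaMax + 1))));
}

For example, with 100 shakes allowed per height, shakeDelta returns 1 for the first 50 shakes, 2 for the next 25, and 3 for the remainder.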

The pseudo-code in Algorithm 8 describes the core VNS algorithm, which is then embedded in a strategic oscillation scheme, as described earlier in Section 6.5.4.

Algorithm 8 Variable Neighbourhood Search
 1: L_local = L* = {l1, l2, ...}; fitness* = evaluate_fitness(L*); t = 0
 2: for i = 1 to MAX_ITERATIONS do
 3:     f = f* = evaluate_fitness(L_local)
 4:     while True do
 5:         f_start = f*; L = L_local
 6:         for each piece p in an overlapping position do          ▷ Test first neighbourhood
 7:             for each L′ ∈ N^1_cube(L, p) do
 8:                 f = fitness_change(fitness, L_local, L′)
 9:                 if f > f* then
10:                     f* = f; L_local = L′
11:                 end if
12:             end for
13:         end for
14:         if f* ≤ f_start then                ▷ If no improvement, test the second neighbourhood
15:             for each piece p in an overlapping position do
16:                 for each L′ ∈ N^∞_axis(L, p) do
17:                     f = fitness_change(fitness, L_local, L′)
18:                     if f > f* then
19:                         f* = f; L_local = L′
20:                     end if
21:                 end for
22:             end for
23:         end if
24:         if f* ≤ f_start and less than 2 pieces overlap and t < 1 then          ▷ ILP model
25:             If feasible, set L′ as the solution of the ILP model (6.2)–(6.8)
26:             f = evaluate_fitness(L′); t = 1
27:             if f > f* then
28:                 f* = f; L_local = L′
29:             end if
30:         end if
31:         if f* ≤ f_start then
32:             Break
33:         end if
34:     end while
35:     if f* > fitness* then
36:         fitness* = f*
37:         L* = L_local
38:     end if
39:     L_local = shake(L*)
40: end for

6.7 Computational experiments

We test our algorithms across a range of different instances to compare their performance in different scenarios. We found that in the 3D literature researchers have mainly used simple shapes, such as convex polyhedra (Stoyan et al., 2005; Pankratov et al., 2015), or non-convex shapes defined by only a few vertices (Stoyan et al., 2004; Egeblad et al., 2009; Liu et al., 2015). In our experiments, we solve some of these instances and adapt one popular instance with similar properties from the two-dimensional literature, shapes0 from Oliveira et al. (2000). In order to test our algorithms in a broader scope, we also propose a small set of randomly generated instances – called blobs – and solve the packing of two examples of complex realistic instances. In Table 6.1 we summarise some of the key differences between these instances and, in the following sections, we give a more detailed description of how they are generated and the results we obtain when solving them with our algorithms. Note that we consider the orientation of the pieces fixed in all of them, regardless of the original description of the instance in that aspect.

Table 6.1: Instances solved and their features

Instance       Source                   Convex   Holes   Pieces (types)
Stoyan2        Stoyan et al. (2005)     yes      no      12 (7)
Stoyan3        Stoyan et al. (2005)     yes      no      25 (7)
Mergedi        Egeblad et al. (2009)    no       yes     15–75 (15)
Experiment1    Liu et al. (2015)        no       –       36 (5)
Experimenti    Stoyan et al. (2004)     no       –       20–50 (10)
Blobsi         Generated                no       yes     20 (10)
Shapes 3D      Adapted                  no       no      43 (4)
Engine         Realistic                no       –       97 (56)
Chess          Realistic                no       –       32 (6)

We have implemented all our algorithms in C++ and run them on a single 2.6 GHz core with 4 GB of memory, part of the IRIDIS HPC facility. The ILP model from Section 6.4 has been solved with IBM ILOG CPLEX 12.6.1. All of our experiments have been given one hour of computational time and we report both the best and the average results obtained over 10 runs. Since all the methods tested use no-fit voxels, these have been pre-calculated, and their generation and loading time is not included in the reported time of the algorithms.

6.7.1 Randomly generated instances

To create this set, we extended the shape generation method presented in Robidoux et al. (2011) to three dimensions. We illustrate its steps in Figure 6.5 with an example that generates a 2D shape, since the process is easier to appreciate in two dimensions. The method is as follows:

Step 1 Start with a three-dimensional matrix of zeros that will be the container of the future piece.

Step 2 Randomly set to 1 the value of some elements in the matrix (Figure 6.5 (a)).

Step 3 Connect the points in a random order with discretised lines of a certain thickness. The last point is connected with the first in order to create a closed loop (Figure 6.5 (b)).

Step 4 Apply a Gaussian blur to the matrix. This step changes the values of the elements so that they lie between 0 and 1 (Figure 6.5 (c)).

Step 5 To convert back to binary values, apply a threshold: values below a parameter (around 0.005) are set to zero and values above it are set to one (Figure 6.5 (d)).


Figure 6.5: Example of the steps to generate a ‘blob’ in two dimensions. In (a) 9 points are drawn randomly, in (b) they are connected in a closed loop, in (c) a Gaussian blur is applied (the values between 0 and 1 are represented by the shade of grey) and in (d) values are set to either 0 or 1, depending on a threshold value.

There are a few parameters that can be controlled in this method. The first of them is the size of the matrix where the points are drawn: a larger size means a smoother shape, but one more costly to compute. We also need to decide how many random elements are initially set to 1 and how to select them. More points give more opportunities for the shape to have irregularities, but too many will fill up the space of the shape and it can lose some interesting features, such as holes. For sampling the points, we have chosen a uniform random distribution in a cube, but other distributions are available; Robidoux et al. (2011) suggest using distributions in a disc (uniform or normal) if one wishes to obtain rotation-invariant shapes, but this is not relevant for our purposes.

The way the points are connected can also impact the shape. In our case we use straight lines, but other methods, such as splines, could give smoother results. Another decision is how to add ‘thickness’ to these lines. Looking again at Figure 6.5 (b), we see that the lines in that case are more than one pixel thick. To explain this, let us think of the matrix as a grid of squares (or cubes, in 3D). When connecting two points, we first paint the squares of the grid intersected by the segment that connects the centres of the two points; by painting we mean setting the corresponding matrix elements to 1. To add ‘thickness’, we also paint the squares surrounding this initial line, if they are within a certain distance from it in any axis-aligned direction. This distance is another parameter, which we call the thickness parameter. The result is painting with a kind of cross-hairs, the same concept as depicted in the neighbourhood from Figure 6.3. Finally, both the standard deviation of the Gaussian blur and the threshold also have an impact on the outcome.
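The following is a compact two-dimensional C++ sketch of the whole generator, assembled from the steps above. All names and parameter values are illustrative, and a simple box blur stands in for the Gaussian blur of Step 4.

#include <algorithm>
#include <cstdlib>
#include <random>
#include <utility>
#include <vector>

using Grid = std::vector<std::vector<double>>;

// Paints (x, y) and the cells within distance t along each axis
// (the cross-hair thickening described in the text).
void paintThick(Grid& g, int x, int y, int t) {
    const int n = static_cast<int>(g.size());
    for (int d = -t; d <= t; ++d) {
        if (x + d >= 0 && x + d < n) g[x + d][y] = 1.0;
        if (y + d >= 0 && y + d < n) g[x][y + d] = 1.0;
    }
}

// Rasterises the segment between two cell centres (simple DDA walk).
void drawLine(Grid& g, int x0, int y0, int x1, int y1, int t) {
    const int steps = std::max(std::abs(x1 - x0), std::abs(y1 - y0));
    for (int s = 0; s <= steps; ++s) {
        const int x = x0 + (x1 - x0) * s / std::max(steps, 1);
        const int y = y0 + (y1 - y0) * s / std::max(steps, 1);
        paintThick(g, x, y, t);
    }
}

Grid makeBlob(int size, int nPoints, int thickness, int blurPasses, double thr) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> coord(0, size - 1);
    Grid g(size, std::vector<double>(size, 0.0));

    std::vector<std::pair<int, int>> pts(nPoints);        // Step 2: random points
    for (auto& p : pts) p = {coord(rng), coord(rng)};
    for (int i = 0; i < nPoints; ++i) {                   // Step 3: closed loop
        drawLine(g, pts[i].first, pts[i].second,
                 pts[(i + 1) % nPoints].first, pts[(i + 1) % nPoints].second,
                 thickness);
    }
    for (int pass = 0; pass < blurPasses; ++pass) {       // Step 4: blur (box blur here)
        Grid b = g;
        for (int x = 1; x + 1 < size; ++x)
            for (int y = 1; y + 1 < size; ++y)
                b[x][y] = (g[x][y] + g[x - 1][y] + g[x + 1][y] +
                           g[x][y - 1] + g[x][y + 1]) / 5.0;
        g = std::move(b);
    }
    for (auto& row : g)                                   // Step 5: threshold
        for (auto& v : row) v = (v >= thr) ? 1.0 : 0.0;
    return g;
}

The 3D version is the direct analogue, with a third index in the grid and the cross-hair extended to the z axis.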

Following this technique, we have generated nine instances in total, with three different sizes and three shape styles for each size. We hand-picked the parameters in order to define the three different styles, which we have called ‘round’, where pieces are closer to spheres or ellipsoids; ‘peaked’, where shapes have more pronounced concavities and, more frequently, holes; and ‘neutral’, which is somewhere between the two. In Figure 6.6 we show one piece representing each style. The full description of the parameters used is reported in Table 6.2, where an interval indicates that the parameter has been chosen randomly within that interval.

The matrix size in Table 6.2 is indicative, as applying the Gaussian blur might set some values outside the original cube where the points were sampled to positive values (this happens, for example, if one of the points lies in a corner of the cube). To acknowledge this fact and ensure the shapes remained smooth, we allowed a maximum size of 100 × 100 × 100 for any shape, which seemed sufficient to avoid such cases. In Figure 6.7 we show a comparison of the results achieved by our methods on the instance blobs9.

Based on the size of the largest piece from each set, we chose the size of the container base to be (in voxels) 78 × 78 for the small instances, 84 × 84 for the medium instances and 90 × 90 for the large instances.

Figure 6.6: Three pieces from the medium instances with style ‘round’ (left), ‘neutral’ (centre) and ‘peaked’ (right)

Table 6.2: Parameters used to generate the blobs instances

Name     Size     Style     Matrix size     Points   Thickness   Filter σ   Threshold
blobs1   Small    Round     25 × 25 × 25    [2,20]   3           [3,5]      0.05
blobs2   Small    Neutral   25 × 25 × 25    [2,15]   [1,3]       [1,2]      0.01
blobs3   Small    Peaked    25 × 25 × 25    [3,7]    1           [1,2]      0.005
blobs4   Medium   Round     35 × 35 × 35    [4,30]   [4,5]       [4,5]      0.05
blobs5   Medium   Neutral   35 × 35 × 35    [4,25]   [3,4]       [2,3]      0.01
blobs6   Medium   Peaked    35 × 35 × 35    [3,10]   [1,3]       [1,2]      0.005
blobs7   Large    Round     45 × 45 × 45    [4,40]   [4,5]       [4,5]      0.05
blobs8   Large    Neutral   45 × 45 × 45    [4,35]   [3,4]       [2,3]      0.01
blobs9   Large    Peaked    45 × 45 × 45    [3,14]   [1,3]       [1,2]      0.005

Figure 6.7: Instance blobs9, best utilisations achieved by our algorithms: 27.69% by ILS (left), 29.52% by ITS (centre) and 30.41% by VNS (right)

We performed 10 runs of one hour for each instance and we report the best, worst and average results in Table 6.3. We also report the achieved utilisations and their averages in Table 6.4.

Table 6.3: Comparison of results (final height, in voxels) for the blobs instances

Instance   ILS                    ITS                    VNS
Name       Best   Worst   Avg     Best   Worst   Avg     Best   Worst   Avg
blobs1       82     86    84.0      76     79    78.3      74     77    75.3
blobs2       67     70    68.2      61     62    61.5      58     60    59.4
blobs3       48     50    49.0      44     46    45.4      44     46    44.7
blobs4      210    222   217.6     199    214   207.8     184    192   188.7
blobs5      229    247   237.7     208    220   215.4     201    216   205.2
blobs6       93    100    97.5      88     93    90.1      85     89    87.8
blobs7      345    362   355.2     322    339   331.3     310    342   328.4
blobs8      375    393   384.8     335    355   346.8     339    377   363.8
blobs9      145    156   152.3     136    142   139.0     132    139   135.6

Table 6.4: Comparison of results (Percentage of container utilisation) for the blobs instances

Instance   ILS                       ITS                       VNS
Name       Best    Worst   Avg       Best    Worst   Avg       Best    Worst   Avg
blobs1     45.41   44.32   44.75     48.40   46.56   46.98     49.71   47.77   48.86
blobs2     38.12   36.48   37.45     41.86   41.19   41.53     44.03   42.56   43.00
blobs3     36.36   34.91   35.63     39.67   37.94   38.45     39.67   37.94   39.05
blobs4     46.33   43.83   44.73     48.90   45.47   46.84     52.88   50.68   51.57
blobs5     38.54   35.89   37.31     42.62   40.30   41.17     44.10   41.04   43.22
blobs6     32.45   30.18   30.97     34.30   32.45   33.51     35.51   33.91   34.38
blobs7     38.80   36.98   37.70     41.57   39.49   40.42     43.18   39.14   40.81
blobs8     30.31   28.93   29.55     33.93   32.02   32.79     33.53   30.15   31.30
blobs9     27.69   25.73   26.37     29.52   28.27   28.89     30.41   28.88   29.61
Average    37.11   35.25   36.05     40.09   38.19   38.95     41.45   39.12   40.20

These results show that the two algorithms that search over layouts containing overlap perform better on all the instances. Of the two, VNS shows the better overall performance, as demonstrated by an average utilisation over 1% higher.

6.7.2 Instances from the literature

In order to compare our algorithms with previous work, we collected some instances available in the literature. It is worth noting that there are no publicly available implementations of any of these algorithms. For this reason, we limit our comparison to the instances reported in the original papers, which typically include simpler shapes than our randomly generated set or the realistic instances.

The first instance is a set of convex polyhedra introduced by Stoyan et al. (2005) and later used in Egeblad et al. (2009). It consists of 7 convex shapes of similar volume that are each repeated a number of times. Stoyan et al. (2005) defined three sets, Example1, Example2 and Example3, but we only solve the latter two, as Example1 contains a piece with a dimension coinciding exactly with the width of the container, which becomes infeasible with our voxelisation parameters. In order to obtain more challenging instances, Egeblad et al. (2009) mixed this set with some shapes available in the work of Ikonen et al. (1997), creating the Mergedi instances, which we solve as well. The other two sets we consider are a set of non-convex polyhedra from Stoyan et al. (2004) (Experiment2 to Experiment5), later solved by Liu et al. (2015), and a similar set of polyhedra proposed by Liu et al. (2015), Experiment1. However, in the original work Experiment1 was only solved allowing rotation, and therefore we do not solve exactly the same problem, since our orientations are fixed.

To complete the tests with simpler shapes, we have also adapted a classical instance from the two-dimensional irregular packing literature, the set shapes0 from Oliveira et al. (2000). This set consists of 4 distinct simple shapes (one convex and three non-convex) and has been widely used in research on the strip packing problem. To adapt these figures to three dimensions we maintained the proportions of the original shapes and voxelised them on a grid of one unit. For two of the shapes, we preserved their symmetry along the third axis. For the remaining two, we extruded the shape along the third axis so that they have a depth of 6 voxels. We call our instance Shapes 3D and the final result can be seen in Figure 6.8. The container for this set has a base of 20 × 20 and we maintained the same piece description (43 pieces in total) and fixed the orientation.
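The extrusion used for the last two shapes is straightforward. As a minimal sketch with illustrative types, each z-slice of the 3D piece is simply a copy of the 2D voxel mask:

#include <vector>

using Mask2D = std::vector<std::vector<bool>>;  // 2D voxelised shape
using Voxels3D = std::vector<Mask2D>;           // one 2D mask per z-slice

// Lifts a 2D mask into 3D by repeating it `depth` times along the z axis
// (depth = 6 voxels in our adaptation).
Voxels3D extrude(const Mask2D& shape, int depth) {
    return Voxels3D(depth, shape);
}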

Figure 6.8: Instance Shapes 3D

In Table 6.5 and Table 6.6 we show the final container heights (in the same units as reported in the literature) that we obtain solving these instances with our methods, compared to previous results found in the literature. For our algorithms, we report both the best and the average height found out of 10 runs. In Table 6.7 we show the results we obtain for the newly created instance Shapes 3D, both utilisation and height over 10 runs.

Table 6.5: Comparison of results (final height) for the instances found in St05 (Stoyan et al., 2005) and 3DNEST (Egeblad et al., 2009)

Instance    ILS             ITS             VNS             St05    3DNEST
            Best    Avg     Best    Avg     Best    Avg     Best    Best
Example2    22.51   23.51   21.76   22.65   20.80   21.42   30.92   19.83
Example3    34.88   36.06   31.79   32.85   31.47   32.02   45.86   29.82
Merged1     20.52   21.58   18.47   19.39   18.30   19.07   –       16.99
Merged2     24.96   26.76   23.77   24.32   22.40   22.49   –       21.97
Merged3     29.58   29.58   26.85   28.13   24.45   26.28   –       24.04
Merged4     32.66   35.12   30.61   30.85   28.38   29.15   –       27.80
Merged5     35.74   36.42   32.49   33.77   29.92   31.41   –       30.13

Table 6.6: Comparison of results (final height) for the instances found in St04 (Stoyan et al., 2004) and HAPE3D (Liu et al., 2015)

Instance    ILS             ITS             VNS             St04    HAPE3D
            Best    Avg     Best    Avg     Best    Avg     Best    Best
Exp1        39.20   40.40   38.27   40.12   35.07   35.89   –       31.2*
Exp2        31.27   33.48   28.93   31.29   28.93   29.63   32.00   36.90
Exp3        49.23   50.77   43.87   46.81   40.60   46.25   49.00   55.60
Exp4        62.77   66.17   58.80   61.93   56.00   61.39   63.21   71.80
Exp5        77.23   84.09   75.60   79.66   75.60   84.98   78.70   92.00

* The result of 31.2 for the instance Exp1 is obtained by allowing 8 different rotations, which are not allowed in the other algorithms.

Table 6.7: Results for the instance Shapes 3D

          Utilisation                  Height (voxels)
Method    Best    Average   Std        Best   Average   Std
ILS       40.0%   39.8%     0.3%       56     56.7      0.5
VNS       45.1%   43.9%     0.6%       50     51.4      0.7
ITS       44.3%   42.9%     0.6%       51     52.6      0.7

As with the blobs instances, VNS performs better than the other methods we have proposed in this work. In comparison with the literature, ITS and VNS clearly outperform the results from Stoyan et al. (2005), Stoyan et al. (2004) and Liu et al. (2015), despite the usage of voxels, which implies packing slightly larger pieces. Moreover, the container may also be smaller, as we need to round down the number of voxels used to represent it in order to ensure feasibility. The exception is the 3DNEST algorithm from Egeblad et al. (2009), which achieves higher utilisations on this set of instances, except for Merged5, where the Variable Neighbourhood Search algorithm obtains a slightly higher utilisation. The resulting layout for Merged5 obtained by VNS can be seen in Figure 6.9.

The instance Merged5 contains 75 pieces and is the largest instance solved by Egeblad et al. (2009). In Figure 6.10 we have plotted the utilisations found by 3DNEST and by our algorithms on the Merged instances, sorted by number of pieces.

Figure 6.9: Best layout for the instance Merged5 found by VNS, with a corresponding height of 29.92 in the mesh representation

From the graph we can infer that voxel-based methods seem more consistent as the problem size grows. In the smallest instance the gap is larger, as overestimating the pieces is more likely to be a significant burden for voxel-based methods. Unfortunately, we do not have access to 3DNEST to run it on larger or more complex instances, so the comparison between the algorithms is limited.


Figure 6.10: Comparison of best utilisation found by Egeblad et al. (2009) and VNS for the different Merged instances

6.7.3 Realistic instances

We also solve two instances based on models downloaded from a popular 3D printing online community1. In order to solve a meaningful problem, we chose two designs that have shapes with a certain complexity and that are themselves collections of different parts. We chose a realistic model of an engine2 (see Figure 6.11 for a sample piece, already voxelised) and a chess set.

1 http://www.thingiverse.com/

These models were available in the form of a triangular mesh and, to convert them to voxels, we intersected the parts with three-dimensional grids. To perform this process, we have used Patrick Min's binvox software3; we applied its exact setting to ensure that the voxelised model overestimates the original one, and therefore that our solutions are always feasible.
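For reference, an invocation requesting an exact voxelisation on a 512-cell grid might look like the following; the file name is illustrative and the flags are quoted from our recollection of the tool's interface, so the binvox documentation should be checked before relying on them.

binvox -d 512 -e part.obj

Here -d sets the grid dimension and -e asks for every voxel that intersects the model to be set, which is what guarantees the overestimation property mentioned above.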

Based on the size of the pieces, and since they were available at different scales, we decided to pack the Engine in a container with a base of 500 × 500 voxels, where one voxel represents 6 units, and the Chess set in a container with a base of 50 × 50, where one voxel is mapped to one unit of the original model. The results are shown in Table 6.8 for the Engine instance and in Table 6.9 for the Chess instance. In both tables, the utilisations and heights are in terms of the voxelised instance.

Figure 6.11: Example piece from the Engine instance

The resulting layout for the best run of each method is shown in Figure 6.12 for Engine and in Figure 6.13 for Chess.

Table 6.8: Results for the instance Engine, after 10 runs of 1 hour each

          Utilisation                   Height (voxels)
Method    Best     Average   Std        Best   Average   Std
ILS       30.43%   28.76%    0.8%       486    515.4     14.55
ITS       32.3%    31.7%     0.5%       462    470.4     8.1
VNS       33.2%    31.8%     1.2%       450    469.8     18.3

The results on these instances also suggest that Variable Neighbourhood Search obtains the best results, and that both of the metaheuristics allowing overlap outperform the Iterated Local Search algorithm, which cannot reach the quality obtained by the more sophisticated algorithms.

2 Model designed by Eric Harrell, available at http://www.thingiverse.com/thing:644933
3 http://www.patrickmin.com/binvox/

Figure 6.12: Best layout found for the instance Engine by ILS (left), ITS (centre) and VNS (right)

Table 6.9: Results for the instance Chess, after 10 runs of 1 hour each

          Utilisation                 Height (voxels)
Method    Best    Average   Std       Best   Average   Std
ILS       32.8%   32.8%     0.0%      152    152       0.0
ITS       37.2%   35.9%     0.7%      134    138.9     2.7
VNS       38.0%   37.4%     0.6%      131    133.3     2.2

Figure 6.13: Best layout found for the instance Chess by ILS (left), ITS (centre) and VNS (right)


6.7.4 Discussion

Across the different instance types, we have seen that our algorithms perform in a similar way, with Variable Neighbourhood Search and Iterated Tabu Search consistently ahead of Iterated Local Search. This suggests that searching over overlapping layouts has a benefit over searching over the placement rules of the constructive algorithm. This is not surprising, as the solution space where ILS searches is heavily restricted by the way the constructive algorithm works, and there are many (potentially good) solutions that it simply cannot reach. One advantage of ILS, however, is its ability to produce feasible solutions more quickly, as well as producing very different layouts in each iteration. This might be useful in settings where time is important, or where a range of different objectives needs to be taken into account.

Between VNS and ITS, we find that VNS is slightly better in the majority of the tests. While we have seen in our informal testing that the intensification part works similarly in both algorithms, the diversification in the Variable Neighbourhood Search seems to perform better. This is due to the shake phase, where creating randomised overlap between pieces and allowing piece swaps results in more dramatic changes in the layout. In turn, this leads to a ‘worse’ starting point that provides greater opportunities for the neighbourhood exploration to reach better local optima.

6.8 Conclusion

In this chapter we presented a discretised approach to the three-dimensional irregular open dimension problem. We propose to represent the pieces by voxels and develop the concept of the no-fit voxel, on which we base our algorithms. The use of voxels and the no-fit voxel allows for a quick and robust way of dealing with complex pieces in packing problems.

With these tools at hand, we formulate the problem as an ILP model and propose a constructive algorithm and an Iterated Local Search that improves the quality of the constructive algorithm's solutions. To further improve the packing quality, we propose two metaheuristic techniques that search over the space of solutions containing overlap. We collected a set of instances from the literature and added our own to create a benchmark set of instances with different characteristics on which to test our algorithms.

We show that both metaheuristic techniques are very competitive, even when compared to more complex and accurate geometrical representations, such as the polygonal mesh or phi-functions. The choice of voxels and the no-fit voxel to represent and pack the pieces is backed by our computational experiments, where we show that our results outperform most of the pre-existing literature, with the exception of the 3DNEST algorithm; but as the instance sizes grow larger, VNS is able to find a better layout than

3DNEST as well. Furthermore, we are able to efficiently solve real-life instances containing complex geometries of the kind typically used in the 3D printing industry, where we show that both the Iterated Tabu Search and the Variable Neighbourhood Search can find competitive results, with a significant improvement over our constructive-based Iterated Local Search.

Chapter 7

Conclusions

Throughout this thesis we have tackled cutting and packing problems with diverse applications: in the military, in archaeology and in the 3D printing industry. This further demonstrates that cutting and packing is a vibrant field, with possibly many applications yet to be discovered. Let us briefly review our main contributions.

In Chapters 4 and 5 we modelled two applied problems that were new to the literature, providing successful results in both cases. In Chapter 4 this included carefully identifying problem-specific constraints appearing in the air transport industry, which were introduced into a MILP model and solved both exactly and with a metaheuristic algorithm. In Chapter 5 we investigated a problem arising from archaeology; this is, to our knowledge, the first time this application area has been explored in cutting and packing. In solving this problem we proposed a new algorithm that is capable of handling free rotation of shapes, which is rarely allowed in the literature but still relevant for some industrial processes (for example, in garment manufacturing with homogeneous fabrics). We have also shown that this algorithm is able to solve some instances of the irregular strip packing problem that are known to have perfect utilisation, so we believe there is potential to adapt it to the mainstream nesting problem with free rotations.

In Chapter 6 we studied a three-dimensional irregular packing problem. While this problem is not new to the literature, few publications have dealt with it. In our work, we opted for representing the geometry by voxels, with the intention of tackling realistic problems in reasonable computational times. While this problem is not as directly based on a specific practical application as the previous two, we believe that the main contribution of our research here is identifying the tools to approach three-dimensional packing problems using voxels. We have proposed a tool, the no-fit voxel, that emulates the well-known no-fit polygon. With this construction in hand, we were able to provide an ILP model and set the basis for our metaheuristic algorithms. We investigated a number of neighbourhoods and metaheuristic approaches to the problem, which were tested on a benchmark set of instances and against the pre-existing literature. Our approach was competitive with most of the previous work, with the best known results for some instances. We hope that this set of instances, which will be made available online, will be used as a benchmark for further research, as it contains instances with different characteristics, including some complex and realistic three-dimensional models.

All three problems involved irregular shapes, and we used a different technique to deal with them in each case. In Chapter 4 geometry was not a critical aspect, so we reduced the dimension of the problem and focused on the critical constraints (weight and positioning of the items). In Chapter 5 there was uncertainty about the real shape of the pieces to be packed, so we proposed to relax the overlap constraint in order to account for this. Finally, our aim in Chapter 6 was to deal with complex three-dimensional shapes, and we used a voxel approximation to be able to treat them efficiently.

Before concluding, we would like to point out some avenues of further research for each of our chapters. In Chapter 4 we tackled the problem of packing items into helicopters at a strategic level, in order to evaluate the potential of the fleets. Nevertheless, the work lends itself to considering also the operational level: deciding how to pack the items before performing a particular mission. In our conversations with Dstl, we identified some potential constraints for this situation. While it is unlikely that there would be a lack of space in the aircraft, the distribution of the items within the helicopter is critical to its stability. To model this correctly one would have to consider the three-dimensional shapes of the objects and their position in the cabin. For this situation, there is scope to link this model with the techniques from Chapter 6, modified to make maintaining stability the main objective.

Our work in Chapter 5 can be extended in two ways. The first refers to the application area. We believe that the solutions we have found can provide a starting point for discussions with experts in Aztec archaeology and anthropology, which can lead to refinements of our problem description and, potentially, to new constraints derived from their problem-specific knowledge. As a result, a next iteration of this work could determine more accurately the potential layouts of the terrains described in the codex. The second relates to the capabilities of the algorithm itself. Our handling of rotations was successful in finding high-utilisation layouts in the instances we tested (the ones with full utilisation). For this reason, we believe it is likely that it can be adapted to tackle other irregular two-dimensional packing problems with free rotations, for example the strip packing problem, where the container width is fixed and the length is to be minimised. To further prove this point, we ran our algorithm on a standard instance from the nesting literature, Shapes1. This instance was designed to be solved allowing only rotations of 0 and 180 degrees, and the minimal length known for it is 52, due to Elkeran (2013). Some other publications have solved this instance allowing more than two rotation angles, or even free rotations. With 8 rotation angles Liu & Ye (2011) find 56.7, and when using 16 rotation angles the same algorithm finds only 58.93, a clear indication that the problem does not necessarily become easier when more rotations are allowed. With free rotations, Nielsen (2007) found a length of 54.02.

Our algorithm does not allow for an open dimension, so for our informal testing we tried to pack the pieces from the instance into three fixed lengths: 52, 51 and 50. Our results were encouraging, as we were able to find layouts with a negligible amount of overlap for the lengths of 52 and 51, and with very little overlap for 50 (under 0.06%). We show the resulting layouts of these tests in Figure 7.1.

Figure 7.1: Best layouts found for the Shapes0/Shapes1 instance with free rota- tion. For lengths 52 and 51, the overlap is less than 0.01% of the instance area. For the length 50 the overlap amounts to 0.06% of the instance area.

In light of these preliminary results, we are confident that this work can be adapted to tackle problems with irregular shapes and free rotations efficiently.

Finally, the research in Chapter 6 is introductory work on packing three-dimensional irregular shapes. For this reason, it lends itself to many extensions. We believe that the practical application with the most potential to benefit from this technique is perhaps 3D printing. However, before applying it there are a number of constraints to consider. One of the most important in this regard is interlocking: some relative positions of items can result in a physical arrangement that cannot be separated once printed. Furthermore, it is reasonable to think that practical applications allow some kind of rotation and that rotations would, in fact, alter the quality of the printed objects; a meaningful extension would therefore be to include rotation and account for its effect on quality in the objective function. All of our methods would also benefit from adopting a tree structure to represent the voxels. While this is a computational improvement rather than a theoretical one, it might dramatically increase the applicability of these methods in terms of the number of pieces and the level of detail they can handle in a reasonable time.
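As a minimal sketch of the kind of structure we have in mind, a sparse voxel octree collapses uniform regions into single leaves, so memory and query cost scale with surface detail rather than with the full voxel count; the types and interface below are illustrative only.

#include <array>
#include <memory>

struct OctreeNode {
    bool uniform = true;   // true if the whole cube is entirely solid or empty
    bool solid = false;    // meaningful when `uniform` is true
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // used when !uniform
};

// Looks up a single voxel; `size` is the side length of the cube that
// this node represents (a power of two, at least 2 for non-uniform nodes).
bool voxelAt(const OctreeNode& n, int x, int y, int z, int size) {
    if (n.uniform || size == 1) return n.solid;
    const int h = size / 2;
    const int idx = (x >= h ? 1 : 0) + (y >= h ? 2 : 0) + (z >= h ? 4 : 0);
    const auto& c = n.children[idx];
    if (!c) return false;  // an absent child encodes an empty octant
    return voxelAt(*c, x % h, y % h, z % h, h);
}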

In essence, we feel that over the course of this thesis we have answered as many questions as we have raised; but hopefully these pages will serve as a stepping stone for future researchers willing to take on these challenges. To facilitate this, we aim to make our instances publicly available on the ESICUP website1, as well as to disseminate our research chapters in peer-reviewed journals.

1 https://paginas.fe.up.pt/~esicup/

References

Abeysooriya, Ranga P., Bennell, Julia A., & Martinez-Sykora, Antonio. 2018. Jostle heuristics for the 2D-irregular shapes bin packing problems with free rotation. International Journal of Production Economics, 195, 12–26.

Alvarez-Valdes, R., Martinez, A., & Tamarit, J.M. 2013. A branch & bound algorithm for cutting and packing irregularly shaped pieces. International Journal of Production Economics, 145(2), 463–477.

Annamalai Vasantha, Gokula Vijaykumar, Jagadeesan, Ananda Prasanna, Corney, Jonathan Roy, Lynn, Andrew, & Agrawal, Anupam. 2016. Crowdsourcing solutions to 2D irregular strip packing problems from Internet workers. International Journal of Production Research, 54(14), 4104–4125.

Araya, I., & Riff, M.-C. 2014. A beam search approach to the container loading problem. Computers & Operations Research, 43, 100–107.

Art, RC. 1966. An approach to the two dimensional irregular cutting stock problem. Tech. Rep. 36.Y08, IBM Cambridge Scientific Centre.

Baert, J., Lagae, A., & Dutré, P. 2013. Out-of-core construction of sparse voxel octrees. Pages 27–32 of: Proceedings - High-Performance Graphics 2013, HPG 2013.

Balas, Egon, Glover, Fred, & Zionts, Stanley. 1965. An Additive Algorithm for Solving Linear Programs with Zero-One Variables. Operations Research, 13(4), 517–549.

Baumers, Martin, Tuck, Chris, Wildman, Ricky, Ashcroft, Ian, Rosamond, Emma, & Hague, Richard. 2013. Transparency Built-in: Energy Consumption and Cost Estimation for Additive Manufacturing. Journal of Industrial Ecology, 17(3), 418–431.

Bean, James C. 1994. Genetic Algorithms and Random Keys for Sequencing and Optimization. ORSA Journal on Computing, 6(2), 154–160.

Bekker, Henk, & Roerdink, Jos B. T. M. 2001. An Efficient Algorithm to Calculate the Minkowski Sum of Convex 3D Polyhedra. Springer, Berlin, Heidelberg.

Bellman, Richard. 1957. Dynamic programming. Princeton University Press.

Belov, G., & Scheithauer, G. 2006. A branch-and-cut-and-price algorithm for one-dimensional stock cutting and two-dimensional two-stage cutting. European Journal of Operational Research, 171(1), 85–106.

Bennell, J., Scheithauer, G., Stoyan, Y., & Romanova, T. 2010. Tools of mathematical modeling of arbitrary object packing problems. Annals of Operations Research, 179(1), 343–368.

Bennell, J., Scheithauer, G., Stoyan, Y., Romanova, T., & Pankratov, A. 2015. Optimal clustering of a pair of irregular objects. Journal of Global Optimization, 61(3), 497–524.

Bennell, JA, & Oliveira, JF. 2008. The geometry of nesting problems: A tutorial. European Journal of Operational Research, 184(2), 397–415.

Bennell, Julia A., & Dowsland, Kathryn A. 1999. A tabu thresholding implementation for the irregular stock cutting problem. International Journal of Production Research, 37(18), 4259–4275.

Bennell, Julia A., & Dowsland, Kathryn A. 2001. Hybridising Tabu Search with Optimisation Techniques for Irregular Stock Cutting. Management Science, 47(8), 1160–1172.

Bennell, Julia A., & Oliveira, José F. 2009. A tutorial in irregular shape packing problems. The Journal of the Operational Research Society, 60, S93–S105.

Bennell, Julia A, & Song, Xiang. 2008. A comprehensive and robust procedure for obtaining the nofit polygon using Minkowski sums. Computers & Operations Research, 35(1), 267–281.

Bennell, Julia A., & Song, Xiang. 2010. A beam search implementation for the irregular shape packing problem. Journal of Heuristics, 16(2), 167–188.

Bennell, Julia A., Dowsland, Kathryn A., & Dowsland, William B. 2000. The irregular cutting-stock problem - a new procedure for deriving the no-fit polygon. Computers and Operations Research, 28(3), 271–287.

Bennell, Julia A., Soon Lee, Lai, & Potts, Chris N. 2013. A genetic algorithm for two-dimensional bin packing with due dates. International Journal of Production Economics, 145(2), 547–560.

Birgin, E G, Lobato, R D, & Morabito, R. 2010. An effective recursive partitioning ap- proach for the packing of identical rectangles in a rectangle. Journal of the Operational Research Society, 61(2), 306–320.

Bischoff, Eberhard E., & Marriott, Michael D. 1990. A comparative evaluation of heuristics for container loading. European Journal of Operational Research, 44(2), 267–276.

Blazewicz, J., Hawryluk, P., & Walkowiak, R. 1993. Using a tabu search approach for solving the two-dimensional irregular cutting problem. Annals of Operations Research, 41(4), 313–325.

Bortfeldt, Andreas, & Wäscher, Gerhard. 2013. Constraints in container loading – A state-of-the-art review. European Journal of Operational Research, 229(1), 1–20.

Burke, E. K., Hellier, R. S. R., Kendall, G., & Whitwell, G. 2010a. Irregular Packing Using the Line and Arc No-Fit Polygon. Operations Research, 58(4-part-1), 948–970.

Burke, Edmund, Hellier, Robert, Kendall, Graham, & Whitwell, Glenn. 2006a. A New Bottom-Left-Fill Heuristic Algorithm for the Two-Dimensional Irregular Packing Problem. Operations Research, 54(3), 587–601.

Burke, Edmund K., Hyde, Matthew R., & Kendall, Graham. 2006b. Evolving Bin Packing Heuristics with Genetic Programming. Parallel Problem Solving from Nature - PPSN IX, 4193, 860–869.

Burke, Edmund K., Hyde, Matthew, Kendall, Graham, Ochoa, Gabriela, Özcan, Ender, & Woodward, John R. 2010b. A Classification of Hyper-heuristic Approaches.

Burke, E.K., Hellier, R.S.R., Kendall, G., & Whitwell, G. 2007. Complete and robust no-fit polygon generation for the irregular stock cutting problem. European Journal of Operational Research, 179(1), 27–49.

Burke, E.K., Hyde, M., Kendall, G., & Woodward, J. 2010c. A genetic programming hyper-heuristic approach for evolving 2-D strip packing heuristics. IEEE Transactions on Evolutionary Computation, 14(6), 942–958.

Byholm, Thomas, Toivakka, Martti, & Westerholm. 2009. Effective packing of 3-dimensional voxel-based arbitrarily shaped particles. Powder Technology, 196(2), 139–146.

Cagan, Jonathan, Degentesh, Drew, & Yin, Su. 1998. A simulated annealing-based algorithm using hierarchical models for general three-dimensional component layout. Computer-Aided Design, 30(10), 781–790.

Caprara, Alberto, & Toth, Paolo. 2001. Lower bounds and algorithms for the 2-dimensional vector packing problem. Discrete Applied Mathematics, 111(3), 231–262.

Chazelle, Bernard. 1989. An optimal algorithm for intersecting three-dimensional convex polyhedra. 30th Annual Symposium on Foundations of Computer Science, 21(4), 671–696.

Chazelle, Bernard. 1991. Triangulating a simple polygon in linear time. Discrete & Computational Geometry, 6(3), 485–524.

Cherfi, N., & Hifi, M. 2008. A column generation method for the multiple-choice multi-dimensional knapsack problem. Computational Optimization and Applications, 46(1), 51–73.

Chernov, N., Stoyan, Yu., Romanova, T., & Pankratov, A. 2012. Phi-Functions for 2D Objects Formed by Line Segments and Circular Arcs. Advances in Operations Research, 2012, 1–26.

Coffman, Edward G., Csirik, János, Galambos, Gábor, Martello, Silvano, & Vigo, Daniele. 2013. Bin Packing Approximation Algorithms: Survey and Classification. Pages 455–531 of: Handbook of Combinatorial Optimization. New York, NY: Springer New York.

Coffman, E.G., Garey, M.R., & Johnson, D.S. 1984. Approximation Algorithms for Bin-Packing – An Updated Survey. Pages 49–106 of: Algorithm Design for Computer System Design, vol. 284.

Cook, William. 2010. Fifty-Plus Years of Combinatorial Integer Programming. Pages 387–430 of: 50 Years of Integer Programming 1958-2008. Berlin, Heidelberg: Springer Berlin Heidelberg.

Cowling, Peter, Kendall, Graham, & Soubeiga, Eric. 2001. A hyperheuristic approach to scheduling a sales summit. Practice and Theory of Automated Timetabling III, 176–190.

Crowder, H., & Padberg, M. W. 1980. Solving Large-Scale Symmetric Travelling Salesman Problems to Optimality.

Cuninghame-Green, Ray. 1989. Geometry, shoemaking and the Milk Tray problem. New Scientist, 123(1677), 50–53.

Dahmani, Nadia, Clautiaux, François, Krichen, Saoussen, & Talbi, El-Ghazali. 2013. Iterative approaches for solving a multi-objective 2-dimensional vector packing problem. Computers & Industrial Engineering, 66(1), 158–170.

Dahmani, Nadia, Clautiaux, François, Krichen, Saoussen, & Talbi, El-Ghazali. 2014. Self-adaptive metaheuristics for solving a multi-objective 2-dimensional vector packing problem. Applied Soft Computing, 16, 124–136.

Daniels, Karen, Li, Zhenyu, & Milenkovic, Victor J. 1994. Multiple containment methods. Tech. rept.

Dantzig, G., Fulkerson, R., & Johnson, S. 1954. Solution of a Large-Scale Traveling-Salesman Problem. Journal of the Operations Research Society of America, 2(4), 393–410.

de Carvalho, J.M.V. 1999. Exact solution of bin-packing problems using column generation and branch-and-bound. Annals of Operations Research, 86, 629–659.

de Korte, A.C.J., & Brouwers, H.J.H. 2013. Random packing of digitized particles. Powder Technology, 233, 319–324.

Delorme, Maxence, Iori, Manuel, & Martello, Silvano. 2016. Bin packing and cutting stock problems: Mathematical models and exact algorithms. European Journal of Operational Research, 255(1), 1–20.

Dickinson, John K., & Knopf, George K. 1998. Serial packing of arbitrary 3D objects for optimizing layered manufacturing. Pages 130–138 of: Proc. SPIE, vol. 3522.

Dighe, R, & Jakiela, MJ. 1995. Solving pattern nesting problems with genetic algorithms employing task decomposition and contact detection. Evolutionary Computation.

Dobkin, David P., & Kirkpatrick, David G. 1985. A linear algorithm for determining the separation of convex polyhedra. Journal of Algorithms, 6(3), 381–392.

Dósa, G., & Sgall, J. 2013. First Fit bin packing: A tight analysis. Pages 538–549 of: 30th Symposium on Theoretical Aspects of Computer Science.

Dósa, György. 2007. The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ 11/9 OPT(I) + 6/9. Pages 1–11 of: Combinatorics, Algorithms, Probabilistic and Experimental Methodologies.

Dowsland, K. A., Dowsland, W. B., & Bennell, J. A. 1998. Jostling for position: Local improvement for irregular cutting patterns. Journal of the Operational Research Society, 49(6), 647–658.

Dowsland, Kathryn A. 1987. An exact algorithm for the pallet loading problem. European Journal of Operational Research, 31(1), 78–84.

Dowsland, Kathryn A. 1993. Some experiments with simulated annealing techniques for packing problems. European Journal of Operational Research, 68(3), 389–399.

Dowsland, Kathryn A., Vaid, Subodh, & Dowsland, William B. 2002. An algorithm for polygon placement using a bottom-left strategy. European Journal of Operational Research, 141(2), 371–381.

Dumitrescu, Irina, & Stützle, Thomas. 2003. Combinations of local search and exact algorithms. Applications of Evolutionary Computation, 2611, 211–223.

Eastman, W L. 1958. Linear Programming with Pattern Constraints. Ph.D. thesis.

Edelkamp, Stefan, & Wichern, Paul. 2015. Packing Irregular-Shaped Objects for 3D Printing. In: Hölldobler, Steffen, Krötzsch, Markus, Peñaloza, Rafael, & Rudolph, Sebastian (eds), KI 2015: Advances in Artificial Intelligence: 38th Annual German Conference on AI. Lecture Notes in Computer Science, vol. 9324. Cham: Springer International Publishing.

Egeblad, Jens, & Pisinger, David. 2009. Heuristic approaches for the two- and three-dimensional knapsack packing problem. Computers & Operations Research, 36(4), 1026–1049.

Egeblad, Jens, Nielsen, Benny K., & Odgaard, Allan. 2007. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3), 1249–1266.

Egeblad, Jens, Nielsen, Benny K., & Brazil, Marcus. 2009. Translational packing of arbitrary polytopes. Computational Geometry, 42(4), 269–288.

Egeblad, Jens, Garavelli, Claudio, Lisi, Stefano, & Pisinger, David. 2010. Heuristics for container loading of furniture. European Journal of Operational Research, 200(3), 881–892.

Eilon, Samuel, & Christofides, Nicos. 1971. The Loading Problem. Management Science, 17(5), 259–268.

Elkeran, Ahmed. 2013. A new approach for sheet nesting problem using guided cuckoo search and pairwise clustering. European Journal of Operational Research, 231(3), 757–769.

Erdős, P., & Graham, R. L. 1975. On packing squares with equal squares. Journal of Combinatorial Theory, Series A, 19(1), 119–123.

Fekete, Sándor P., & Schepers, Jörg. 2000a. On more-dimensional packing I: Modeling.

Fekete, Sándor P., & Schepers, Jörg. 2000b. On more-dimensional packing II: Bounds.

Fekete, Sándor P., & Schepers, Jörg. 2000c. On more-dimensional packing III: Exact Algorithms.

Fischetti, Matteo, & Luzzi, Ivan. 2009. Mixed-integer programming models for nesting problems. Journal of Heuristics, 15(3), 201–226.

Fleszar, Krzysztof, & Hindi, Khalil S. 2002. New heuristics for one-dimensional bin-packing. Computers & Operations Research, 29(7), 821–839.

Fogel, Efi, & Halperin, Dan. 2007. Exact and efficient construction of Minkowski sums of convex polyhedra with applications. Computer-Aided Design, 39(11), 929–940.

Fowler, Robert J., Paterson, Michael S., & Tanimoto, Steven L. 1981. Optimal packing and covering in the plane are NP-complete. Information Processing Letters, 12(3), 133–137.

Garey, M.R., & Johnson, D.S. 1979. Computers and intractability: a guide to the theory of NP-completeness. San Francisco, CA: Freeman.

Gavranović, Haris, & Buljubašić, Mirsad. 2014. An efficient local search with noising strategy for Google Machine Reassignment problem. Annals of Operations Research, 242(1), 19–31.

Gavranović, Haris, Buljubašić, Mirsad, & Demirović, Emir. 2012. Variable Neighborhood Search for Google Machine Reassignment problem. Electronic Notes in Discrete Mathematics, 39, 209–216.

Gehring, H., & Bortfeldt, A. 1997. A Genetic Algorithm for Solving the Container Loading Problem. International Transactions in Operational Research, 4(5-6), 401–418.

George, J.A., & Robinson, D.F. 1980. A heuristic for packing boxes into a container. Computers & Operations Research, 7(3), 147–156.

Ghosh, Pijush K. 1991. An algebra of polygons through the notion of negative shapes. CVGIP: Image Understanding, 54(1), 119–144.

Gilmore, P. C., & Gomory, R. E. 1961. A Linear Programming Approach to the Cutting-Stock Problem. Operations Research, 9(6), 849–859.

Gilmore, P. C., & Gomory, R. E. 1965. Multistage Cutting Stock Problems of Two and More Dimensions. Operations Research, 13(1), 94–120.

Gilmore, PC, & Gomory, RE. 1963. A linear programming approach to the cutting stock problem-Part II. Operations research, 11(6), 863–888.

Glover, F. 1989. Tabu Search - Part I. ORSA Journal on Computing.

Glover, F., & Marti, R. 2011. Tabu search. Chap. Tabu Search of: Pardalos, Panos M., Du, Ding-Zhu, & Graham, Ronald (eds), Handbook of Combinatorial Optimization.

Gomes, A. Miguel, & Oliveira, José F. 2006. Solving Irregular Strip Packing problems by hybridising simulated annealing and linear programming. European Journal of Operational Research, 171(3), 811–829.

Gomes, A. Miguel, & Oliveira, José F. 2002. A 2-exchange heuristic for nesting problems. European Journal of Operational Research, 141(2), 359–370.

Gomory, Ralph E. 1958. Outline of an algorithm for integer solution to linear programs. Bulletin of the American Mathematical Society, 64(5), 275–278.

Gonçalves, José Fernando, & Resende, Mauricio G. C. 2011. Biased random-key genetic algorithms for combinatorial optimization. Journal of Heuristics, 17(5), 487–525.

Gonçalves, José Fernando, & Resende, Mauricio G.C. 2013. A biased random key genetic algorithm for 2D and 3D bin packing problems. International Journal of Production Economics, 145(2), 500–510.

Hachenberger, Peter. 2009. Exact Minkowski Sums of Polyhedra and Exact and Efficient Decomposition of Polyhedra into Convex Pieces. Algorithmica, 55(2), 329–345.

Hadjiconstantinou, Eleni, & Christofides, Nicos. 1995. An exact algorithm for general, orthogonal, two-dimensional knapsack problems. European Journal of Operational Research, 83(1), 39–56.

Harvey, H. R., & Williams, B. J. 1980. Aztec Arithmetic: Positional Notation and Area Calculation. Science, 210(4469), 499–505.

Harvey, William D, & Ginsberg, Matthew L. 1995. Limited Discrepancy Search. Pages 607–613 of: 14th International Joint Conference on Artificial Intelligence, IJCAI95, vol. 1.

Hertel, Stefan, Mäntylä, Martti, Mehlhorn, Kurt, & Nievergelt, Jürg. 1984. Space sweep solves intersection of convex polyhedra. Acta Informatica, 21(5), 501–519.

Hifi, Mhand, & M'Hallah, Rym. 2009. A Literature Review on Circle and Sphere Packing Problems: Models and Methodologies. Advances in Operations Research, 2009, 1–22.

Hifi, Mhand, Kacem, Imed, Nègre, Stéphane, & Wu, Lei. 2010. A linear programming approach for the three-dimensional bin-packing problem. Electronic Notes in Discrete Mathematics, 36(C), 993–1000.

Hoffmann, Christoph M. 1989. The Problems of Accuracy and Robustness in Geometric Computation. Computer, 22(3), 31–39.

Hoffmann, Rodolfo, Riff, Maria Cristina, Montero, Elizabeth, & Rojas, Nicolas. 2015. Google challenge: A hyperheuristic for the Machine Reassignment Problem. Pages 846–853 of: 2015 IEEE Congress on Evolutionary Computation, CEC 2015 - Proceedings.

Ikonen, I., Biles, W.E., Kumar, A., Wissel, J.C., & Ragade, R.K. 1997. A Genetic Algorithm for Packing Three-Dimensional Non-Convex Objects Having Cavities and Holes. Proceedings of the 7th International Conference on Genetic Algorithms, East Lansing, Michigan, Morgan Kaufmann Publishers, 591–598.

Imahori, Shinji, & Yagiura, Mutsunori. 2010. The best-fit heuristic for the rectangular strip packing problem: An efficient implementation and the worst-case approximation ratio. Computers and Operations Research, 37(2), 325–333.

Imamichi, Takashi, Yagiura, Mutsunori, & Nagamochi, Hiroshi. 2009. An iterated local search algorithm based on nonlinear programming for the irregular strip packing problem. Discrete Optimization, 6(4), 345–361.

Jaśkowski, W., Szubert, M., & Gawron, P. 2015. A hybrid MIP-based large neighborhood search heuristic for solving the machine reassignment problem. Annals of Operations Research, 242(1), 33–62.

Jia, X., & Williams, R.A. 2001. A packing algorithm for particles of arbitrary shapes. Powder Technology, 120(3), 175–186.

Jiménez, P., Thomas, F., & Torras, C. 2001. 3D collision detection: a survey. Computers & Graphics, 25(2), 269–285.

Johnson, D. S., Demers, A., Ullman, J. D., Garey, M. R., & Graham, R. L. 1974. Worst-Case Performance Bounds for Simple One-Dimensional Packing Algorithms. SIAM Journal on Computing, 3(4), 299–325.

Jones, Donald R. 2013. A fully general, exact algorithm for nesting irregular shapes. Journal of Global Optimization, 59(2-3), 367–404.

Jorge y Jorge, María del Carmen, Williams, Barbara J., Garza-Hume, C. E., & Olvera, Arturo. 2011. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara. Proceedings of the National Academy of Sciences of the United States of America, 108(37), 15053–15057.

Jourdan, L., Basseur, M., & Talbi, E. G. 2009. Hybridizing exact methods and metaheuristics: A taxonomy. European Journal of Operational Research, 199(3), 620–629.

Kantorovich, L. V. 1960. Mathematical Methods of Organizing and Planning Production. Management Science, 6(4), 366–422.

Kellerer, Hans, & Kotov, Vladimir. 2003. An approximation algorithm with absolute worst-case performance ratio 2 for two-dimensional vector packing. Operations Research Letters, 31(1), 35–41.

Khachiyan, L. G. 1979. A Polynomial Algorithm in Linear Programming. Soviet Mathematics Doklady, 20, 191–194.

Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. 1983. Optimization by Simulated Annealing. Science, 220(4598), 671–680.

Konak, Abdullah, Coit, David W., & Smith, Alice E. 2006. Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering & System Safety, 91(9), 992–1007.

Konopasek, M. 1981. Mathematical treatments of some apparel marking and cutting problems. Tech. rept. 26.

Korf, RE. 2003. An improved algorithm for optimal bin packing. Pages 1252–1258 of: International Joint Conferences on Artificial Intelligence Organization.

Kovalenko, A. A., Romanova, T. E., & Stetsyuk, P. I. 2015. Balance Layout Problem for 3D-Objects: Mathematical Model and Solution Methods. Cybernetics and Systems Analysis, 51(4), 556–565.

Koza, John R. 1994. Genetic programming as a means for programming computers by natural selection. Statistics and Computing, 4(2), 87–112.

Kui Liu, Yong, Qiang Wang, Xiao, Zhe Bao, Shu, Gomboši, Matej, & Žalik, Borut. 2007. An algorithm for polygon clipping, and for determining polygon intersections and unions. Computers and Geosciences, 33(5), 589–598.

Lai, K.K., & Chan, Jimmy W.M. 1997. Developing a simulated annealing algorithm for the cutting stock problem. Computers & Industrial Engineering, 32(1), 115–127.

Leao, Aline A.S., Toledo, Franklina M.B., Oliveira, José Fernando, & Carravilla, Maria Antónia. 2016. A semi-continuous MIP model for the irregular strip packing problem. International Journal of Production Research, 54(3), 712–721.

Leinberger, W., Karypis, G., & Kumar, V. 1999. Multi-capacity bin packing algorithms with applications to job scheduling under multiple constraints. Proceedings of the 1999 International Conference on Parallel Processing, 404–412.

Lemus, Eduardo, Bribiesca, Ernesto, & Garduño, Edgar. 2015. Surface trees – Representation of boundary surfaces using a tree descriptor. Journal of Visual Communication and Image Representation, 31, 101–111.

Leung, Stephen C.H., Lin, Yangbin, & Zhang, Defu. 2012. Extended local search algorithm based on nonlinear programming for two-dimensional irregular strip packing problem. Computers & Operations Research, 39(3), 678–686.

Li, Zhenyu, & Milenkovic, Victor. 1995. Compaction and separation algorithms for non-convex polygons and their applications. European Journal of Operational Research, 84(3), 539–561.

Liao, Xiaoping, Ma, Junyan, Ou, Chengyi, Long, Fengying, & Liu, Xiangsha. 2016. Visual nesting system for irregular cutting-stock problem based on rubber band packing algorithm. Advances in Mechanical Engineering, 8(6), 168781401665208.

Little, John D. C., Murty, Katta G., Sweeney, Dura W., & Karel, Caroline. 1963. An Algorithm for the Traveling Salesman Problem. Operations Research, 11(6), 972–989.

Liu, D.S., Tan, K.C., Huang, S.Y., Goh, C.K., & Ho, W.K. 2008. On solving multiobjective bin packing problems using evolutionary particle swarm optimization. European Journal of Operational Research, 190(2), 357–382.

Liu, Jiamin, Yue, Yong, Dong, Zongran, Maple, Carsten, & Keech, Malcolm. 2011. A novel hybrid tabu search approach to container loading. Computers & Operations Research, 38(4), 797–807.

Liu, X., & Ye, J. 2011. Heuristic algorithm based on the principle of minimum total potential energy (HAPE): a new algorithm for nesting problems. Journal of Zhejiang University-SCIENCE A.

Liu, Xiao, Liu, Jia-Min, Cao, An-Xi, & Yao, Zhuang-Le. 2015. HAPE3D—a new constructive algorithm for the 3D irregular packing problem. Frontiers of Information Technology & Electronic Engineering, 16(5), 380–390.

Lodi, Andrea, Martello, Silvano, & Monaci, Michele. 2002. Two-dimensional packing problems: A survey. European Journal of Operational Research, 141(2), 241–252.

López-Camacho, Eunice, Ochoa, Gabriela, Terashima-Marín, Hugo, & Burke, Edmund K. 2013. An effective heuristic for the two-dimensional irregular bin packing problem. Annals of Operations Research, 206(1), 241–264.

Lourenço, Helena R., Martin, Olivier C., & Stützle, Thomas. 2010. Iterated Local Search: Framework and Applications. Springer, Boston, MA.

Mahadevan, A. 1984. Optimization in computer-aided pattern packing (marking, envelopes). Ph.D. thesis, North Carolina State University.

Markowitz, Harry, & Manne, Alan. 1957. On the Solution of Discrete Programming Problems. Econometrica, 25(1), 84–110.

Martello, S., & Toth, P. 1990a. Knapsack problems: algorithms and computer implementations. New York, NY, USA: John Wiley & Sons, Inc.

Martello, Silvano, & Toth, Paolo. 1990b. Lower bounds and reduction procedures for the bin packing problem. Discrete Applied Mathematics, 28(1), 59–70.

Martello, Silvano, Pisinger, David, & Toth, Paolo. 2000. New trends in exact algorithms for the 0-1 knapsack problem. European Journal of Operational Research, 123(2), 325–332.

Martinez-Sykora, A., Alvarez-Valdes, R., Bennell, J., Ruiz, R., & Tamarit, J.M. 2016. Matheuristics for the Irregular Bin Packing Problem with free rotations. European Journal of Operational Research.

Martinez-Sykora, A., Alvarez-Valdes, R., Bennell, J. A., Ruiz, R., & Tamarit, J. M. 2017. Matheuristics for the irregular bin packing problem with free rotations. European Journal of Operational Research, 258(2), 440–455.

Martinez-Sykora, Antonio. 2013. Nesting Problems: Exact and Heuristic Algorithms. Ph.D. thesis, University of Valencia.

Martinez-Sykora, Antonio, Alvarez-Valdes, Ramon, Bennell, Julia, & Tamarit, Jose Manuel. 2015. Constructive procedures to solve 2-dimensional bin packing problems with irregular pieces and guillotine cuts. Omega, 52, 15–32.

Mehlhorn, Kurt, & Simon, Klaus. 1985. Intersecting two polyhedra one of which is convex. Pages 534–542 of: Fundamentals of Computation Theory. Berlin/Heidelberg: Springer-Verlag.

Milenkovic, Victor, Daniels, Karen, & Li, Zhenyu. 1992. Placement and Compaction of Nonconvex Polygons for Clothing Manufacture. Pages 1–8 of: Fourth Canadian Conference on Computational Geometry.

Mladenović, N., & Hansen, P. 1997. Variable neighborhood search. Computers & Operations Research, 24(11), 1097–1100.

Mostofa Akbar, Md, Sohel Rahman, M., Kaykobad, M., Manning, E.G., & Shoja, G.C. 2006. Solving the Multidimensional Multiple-choice Knapsack Problem by constructing convex hulls. Computers & Operations Research, 33(5), 1259–1273.

Moura, Ana, & Oliveira, José Fernando. 2003. A GRASP Approach to the Container Loading Problem. Building, 1–13.

Nielsen, B. K. 2007. Nesting problems and Steiner tree problems. Ph.D. thesis, University of Copenhagen.

Oliveira, José F., Gomes, A. Miguel, & Ferreira, J. Soeiro. 2000. TOPOS – A new constructive algorithm for nesting problems. OR Spektrum, 22(2), 263–284.

Padberg, Manfred. 2000. Packing small boxes into a big box. Mathematical Methods of Operations Research (ZOR), 52(1), 1–21.

Pankratov, A. V., Romanova, T. E., & Chugay, A. M. 2015. Optimal packing of convex polytopes using quasi-phi-functions. Journal of Mechanical Engineering, 18(2), 55–64.

Parreño, F., Alvarez-Valdes, R., Tamarit, J. M., & Oliveira, J. F. 2008. A Maximal-Space Algorithm for the Container Loading Problem. INFORMS Journal on Computing, 20(3), 412–422.

Patt-Shamir, Boaz, & Rawitz, Dror. 2012. Vector bin packing with multiple-choice. Discrete Applied Mathematics, 160(10-11), 1591–1600.

Pisinger, David. 2005. Where are the hard knapsack problems? Computers & Operations Research, 32(9), 2271–2284.

Preparata, Franco P., & Shamos, Michael Ian. 1985. Computational Geometry: An Introduction. New York: Springer-Verlag.

Ramesh Babu, A., & Ramesh Babu, N. 2001. A generic approach for nesting of 2-D parts in 2-D sheets using genetic and heuristic algorithms. Computer-Aided Design, 33(12), 879–891.

Rietz, Jürgen, Scheithauer, Guntram, & Terno, Johannes. 2002. Families of non-IRUP instances of the one-dimensional cutting stock problem. Discrete Applied Mathematics, 121(1), 229–245.

Robidoux, Nicolas, Stelldinger, Peer, & Cupitt, John. 2011. Simple random generation of smooth connected irregular shapes for cognitive studies. Pages 17–24 of: Proceedings of The Fourth International C* Conference on Computer Science and Software Engineering - C3S2E ’11. New York, New York, USA: ACM Press.

Rocha, Pedro, Rodrigues, Rui, Gomes, A. Miguel, Toledo, Franklina M B, & Andretta, Marina. 2014. Circle covering representation for nesting problems with continuous rotations. Pages 5235–5240 of: IFAC Proceedings Volumes (IFAC-PapersOnline), vol. 19. Elsevier.

Romanova, T., Bennell, J., Stoyan, Y., & Pankratov, A. 2018. Packing of concave polyhedra with continuous rotations using nonlinear optimisation. European Journal of Operational Research, 268(1), 37–53.

Sánchez-Cruz, Hermilo, López-Valdez, Hiram H., & Cuevas, Francisco J. 2014. A new relative chain code in 3D. Pattern Recognition, 47(2), 769–788.

Sbihi, Abdelkader. 2006. A best first search exact algorithm for the Multiple-choice Multidimensional Knapsack Problem. Journal of Combinatorial Optimization, 13(4), 337–351.

Scheithauer, G., Stoyan, Yu. G., & Romanova, T. Ye. 2005. Mathematical Modeling of Interactions of Primary Geometric 3D Objects. Cybernetics and Systems Analysis, 41(3), 332–342.

Scheithauer, Guntram, & Sommerweiß, Uta. 1998. 4-Block heuristic for the rectangle packing problem. European Journal of Operational Research, 108(3), 509–526.

Schreiber, E. L., & Korf, R. E. 2013. Improved Bin Completion for Optimal Bin Packing and Number Partitioning. In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013).

Schwarz, Michael, & Seidel, Hans-Peter. 2010. Fast parallel surface and solid voxelization on GPUs. ACM Transactions on Graphics, 29(6).

Segenreich, Solly Andy, & Faria Braga, Leda Maria P. 1986. Optimal nesting of general plane figures: A Monte Carlo heuristical approach. Computers and Graphics, 10(3), 229–237.

Shachnai, Hadas, & Tamir, Tami. 2012. Approximation schemes for generalized two-dimensional vector packing with application to data placement. Journal of Discrete Algorithms, 10, 35–48.

Silva, Elsa, Oliveira, José F., & Wäscher, Gerhard. 2016. The pallet loading problem: a review of solution methods and computational experiments. International Transactions in Operational Research, 23(1-2), 147–172.

Sörensen, Kenneth. 2015. Metaheuristics-the metaphor exposed. International Transactions in Operational Research, 22(1), 3–18.

Sörensen, Kenneth, & Glover, Fred W. 2013. Metaheuristics. Pages 960–970 of: Encyclopedia of Operations Research and Management Science. Boston, MA: Springer US.

Spieksma, Frits C.R. 1994. A branch-and-bound algorithm for the two-dimensional vector packing problem. Computers & Operations Research, 21(1), 19–25.

Stoyan, Y., Terno, J., Scheithauer, G., Gil, N., & Romanova, T. 2001. Phi-functions for primary 2D-objects. Studia Informatica Universalis, 2(1), 1–32.

Stoyan, Y., Pankratov, A., & Romanova, T. 2016a. Quasi-phi-functions and optimal packing of ellipses. Journal of Global Optimization, 65(2), 283–307.

Stoyan, Y. G., Gil, N.I., Pankratov, A., & Scheithauer, G. 2004. Packing non-convex polytopes into a parallelepiped. Tech. rept.

Stoyan, Y. G., Gil, N. I., Scheithauer, G., Pankratov, A., & Magdalina, I. 2005. Packing of convex polytopes into a parallelepiped. Optimization: A Journal of Mathematical Programming and Operations Research, 54(2), 215–235.

Stoyan, Y. G., & Chugay, A. M. 2012. Mathematical modeling of the interaction of non-oriented convex polytopes. Cybernetics and Systems Analysis.

Stoyan, Y. G., & Chugay, A. M. 2014. Packing Different Cuboids with Rotations and Spheres into a Cuboid. Advances in Decision Sciences, 1–27.

Stoyan, Yu. G., & Yaskov, G. N. 1983. Mathematical methods for geometric design. Pages 67–86 of: Proceedings of International Conference PROLAMAT 82.

Stoyan, Yu G., Novozhilova, M. V., & Kartashov, A. V. 1996. Mathematical model and method of searching for a local extremum for the non-convex oriented polygons allocation problem. European Journal of Operational Research, 92(1), 193–210.

Stoyan, Yuriy, Pankratov, Alexander, & Romanova, Tatiana. 2016c. Cutting and packing problems for irregular objects with continuous rotations: mathematical modelling and non-linear optimization. Journal of the Operational Research Society, 67(5), 786–800.

Talbi, El-Ghazali. 2009. Metaheuristics: from design to implementation. Vol. 74. John Wiley & Sons.

Teng, Hong-fei, Sun, Shou-lin, Liu, De-quan, & Li, Yan-zhao. 2001. Layout optimization for the objects located within a rotating vessel — a three-dimensional packing problem with behavioral constraints. Computers & Operations Research, 28(6), 521–535.

Terashima-Marín, H., Ross, P., Farías-Zárate, C. J., López-Camacho, E., & Valenzuela-Rendón, M. 2010. Generalized hyper-heuristics for solving 2D Regular and Irregular Packing Problems. Annals of Operations Research, 179(1), 369–392.

Thomas, F., & Torras, C. 1994. Interference detection between non-convex polyhedra revisited with a practical aim. Pages 587–594 of: Proceedings of the 1994 IEEE International Conference on Robotics and Automation. IEEE Comput. Soc. Press.

Toledo, Franklina M.B., Carravilla, Maria Antónia, Ribeiro, Cristina, Oliveira, José F., & Gomes, A. Miguel. 2013. The Dotted-Board Model: A new MIP model for nesting irregular shapes. International Journal of Production Economics, 145(2), 478–487.

Umetani, Shunji, Yagiura, Mutsunori, Imahori, Shinji, Imamichi, Takashi, Nonobe, Koji, & Ibaraki, Toshihide. 2009. Solving the irregular strip packing problem via guided local search for overlap minimization. International Transactions in Operational Research, 16(6), 661–683.

Vanderbeck, F. 1999. Computational study of a column generation algorithm for bin packing and cutting stock problems. Mathematical Programming, 86(3), 565–594.

Wang, Yingcong, Xiao, Renbin, & Wang, Huimin. 2017. A flexible labour division approach to the polygon packing problem based on space allocation. International Journal of Production Research, 55(11), 3025–3045.

Wäscher, Gerhard, Haußner, Heike, & Schumann, Holger. 2007. An improved typology of cutting and packing problems. European Journal of Operational Research, 183(3), 1109–1130.

Wauters, T., Uyttersprot, S., & Esprit, E. JNFP: a robust and open-source Java based no-fit polygon generator library.

Whitley, Darrell. 1994. A genetic algorithm tutorial. Statistics and Computing, 4(2), 65–85.

Williams, Barbara, & Hicks, Frederic. 2011. El Códice Vergara: edición facsimilar con comentario: pintura indígena de casas, campos y organización social de Tepetlaoztoc a mediados del siglo XVI. Universidad Nacional Autónoma de México; Apoyo al Desarrollo de Archivos y Bibliotecas de México, A.C.

Williams, Barbara J., & Harvey, H. R. 1988. Content, Provenience, and Significance of the Codex Vergara and the Códice de Santa María Asunción. American Antiquity, 53(2), 337–351.

Williams, Barbara J., & Jorge y Jorge, María del Carmen. 2008. Aztec arithmetic revisited: land-area algorithms and Acolhua congruence arithmetic. Science, 320(5872), 72–77.

Zhao, Xiaozhou, Bennell, Julia A., Bektaş, Tolga, & Dowsland, Kath. 2016. A comparative review of 3D container loading algorithms. International Transactions in Operational Research, 23(1-2), 287–320.

Zhou, Aimin, Qu, Bo-Yang, Li, Hui, Zhao, Shi-Zheng, Suganthan, Ponnuthurai Nagaratnam, & Zhang, Qingfu. 2011. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation, 1(1), 32–49.

Zhu, Wenbin, Huang, Weili, & Lim, Andrew. 2012. A prototype column generation strategy for the multiple container loading problem. European Journal of Operational Research, 223(1), 27–39.