Optimizing Constraint Solving Via Dynamic Programming
Shu Lin1∗, Na Meng2 and Wenxin Li1
1Department of Computer Science and Technology, Peking University, Beijing, China
2Department of Computer Science, Virginia Tech, Blacksburg, VA, USA
[email protected], [email protected], [email protected]
∗Contact Author

Abstract

Constraint optimization problems (COP) on finite domains are typically solved via search. Many problems (e.g., 0-1 knapsack) involve redundant search, making a general constraint solver revisit the same subproblems again and again. Existing approaches use caching, symmetry breaking, subproblem dominance, or search with decomposition to prune the search space of constraint problems. In this paper we present a different approach, DP Solver, which uses dynamic programming (DP) to efficiently solve certain types of constraint optimization problems (COPs). Given a COP modeled with MiniZinc, DP Solver first analyzes the model to decide whether the problem is efficiently solvable with DP. If so, DP Solver refactors the constraints and objective functions to model the problem as a DP problem. Finally, DP Solver feeds the refactored model to Gecode, a widely used constraint solver, to obtain the optimal solution. Our evaluation shows that DP Solver significantly improves the performance of constraint solving.

1 Introduction

When solving a constraint optimization problem (COP), a general solver searches for an assignment of variables that (1) satisfies the constraints on those variables and (2) optimizes an objective function. Such search may require repetitive computation when different branches of the search tree lead to the same subproblem. The unnecessary redundant work can drive the worst-case complexity of search to O(M^N), where N is the number of variables to assign and M is the number of branches at each node. Existing methods avoid or reduce redundant search via caching [Smith, 2005], symmetry breaking [Gent et al., 2006], subproblem dominance [Chu et al., 2012], problem decomposition [Kitching and Bacchus, 2007], branch-and-bound pruning [Marinescu and Dechter, 2005], lazy clause generation [Ohrimenko et al., 2009], or auto-tabling [Dekker et al., 2017; Zhou et al., 2015].

Dynamic programming (DP) is a classical method for solving complex problems. Given a problem, DP decomposes it into simpler subproblems, solves each subproblem once, stores the solutions in a table, and performs a table lookup when a subproblem reoccurs [Bertsekas, 2000]. Although DP seems a natural search strategy for solving COPs, we have not seen it used to solve general constraint models. Therefore, this paper explores how well DP helps optimize constraint solving. There are three major research challenges:

• Given a COP, how can we automatically decide whether the problem is efficiently solvable with DP?
• If a problem can be solved with DP, how can we implement the DP search?
• When there are multiple ways to conduct DP search for a problem, how can we automatically choose the one with the best search performance?

To address these challenges, we designed and implemented a novel approach, DP Solver, which opportunistically accelerates constraint solving in a non-intrusive way. Specifically, given a COP described in MiniZinc, a widely used constraint modeling language [Nethercote et al., 2007], DP Solver first determines whether the problem has (1) optimal substructures and (2) overlapping subproblems; if so, the problem is efficiently solvable with DP. Next, for each solvable problem, DP Solver converts the original model to a DP-oriented model, such that a general constraint solver (i.e., Gecode [Schulte et al., 2006]) essentially conducts DP search when processing the new model. Third, if multiple DP-oriented models are feasible, DP Solver estimates the computation complexity of each model to choose the fastest one.

We applied DP Solver to nine optimization problems, including seven DP problems and two non-DP ones. DP Solver significantly sped up the constraint solving process for all problems. We also applied two state-of-the-art optimized constraint solvers, CHUFFEDC (caching) and CHUFFEDL (LCG), to the same dataset for comparison, and observed that DP Solver significantly outperformed both techniques. We open-sourced DP Solver and our evaluation data set at https://github.com/fzlinshu/DPSolver.

2 A Motivating Example

To facilitate discussion, we introduce the 0-1 knapsack problem as an exemplar problem that is efficiently solvable with DP.

Problem Description: There are N items and a knapsack of capacity C. The i-th item (i \in [1, N]) has value V_i and weight W_i. Put a set of items S \subseteq {1, ..., N} in the knapsack, such that the sum of the weights is at most C while the sum of the values is maximal.

The 0-1 knapsack problem is a typical COP and can be modeled with MiniZinc in the following way (see Figure 1):

    % Input arguments
    int: N;
    int: C;
    array[1..N] of int: V;
    array[1..N] of int: W;

    % Variables
    var set of 1..N: knapsack;

    % Constraints
    constraint sum (i in knapsack)(W[i]) <= C;

    % Objective function
    solve maximize sum (i in knapsack)(V[i]);

    Figure 1: Model of the 0-1 knapsack problem

Given the above model description, a general constraint solver (e.g., Gecode) typically enumerates all possible subsets of the N items to find the maximum value sum. Such naive search has O(2^N) time complexity. Our research intends to significantly reduce this complexity.
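As a point of reference (not part of the paper's toolchain), the classic DP formulation of 0-1 knapsack can itself be written as a MiniZinc model by tabulating best[i, c], the maximum value achievable with only the first i items and capacity c. The sketch below is our own illustration; the identifier best and the exact encoding are assumptions, and the DP-oriented model that DP Solver generates (Section 3) need not look like this.

    % Hand-written DP-style reformulation of Figure 1, for illustration only;
    % the model DP Solver actually generates may differ.
    % best[i, c] = maximum total value achievable using only the first i items
    % with capacity c (classic 0-1 knapsack recurrence), assuming
    % non-negative weights and values.
    int: N;
    int: C;
    array[1..N] of int: V;
    array[1..N] of int: W;

    array[0..N, 0..C] of var 0..sum(V): best;

    % Base case: no items, no value.
    constraint forall(c in 0..C)(best[0, c] = 0);

    % Recurrence: either skip item i or, if it fits, take it.
    constraint forall(i in 1..N, c in 0..C)(
      best[i, c] = if c >= W[i]
                   then max(best[i-1, c], best[i-1, c - W[i]] + V[i])
                   else best[i-1, c]
                   endif
    );

    solve maximize best[N, C];

Because this model introduces one table entry per (i, c) pair, the distinct work grows with N * C rather than with the 2^N subsets enumerated by the naive search, which is the kind of reduction DP Solver aims to obtain automatically.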
3 Approach

DP Solver consists of three phases: DP problem recognition (Section 3.1), DP-oriented model description generation (Section 3.2), and description selection (Section 3.3). Phase II runs only when Phase I identifies one or more solvable problems in a given MiniZinc model, and Phase III runs only when Phase II generates multiple alternative models.

3.1 Phase I: DP Problem Recognition

If a problem can be efficiently solved with dynamic programming, it must have two properties [Cormen et al., 2009]:

P1. Optimal Substructures. An optimal solution can be constructed from optimal solutions of its subproblems. This property ensures the usability of DP, because DP saves and reuses only the optimal solutions to subproblems, instead of all solutions, when computing optima.

P2. Overlapping Subproblems. When a problem is decomposed into subproblems, some subproblems are solved repeatedly. This property ensures the usefulness of DP, because by memoizing solutions to subproblems, DP can eliminate repetitive computation.

Essentially, DP is applicable to a COP when optimal solutions to subproblems can be repeatedly reused for optima calculation. If a COP has both properties, we name it a DP problem.

Given a COP modeled in MiniZinc, DP Solver recognizes a DP problem by taking three steps: 1) identifying any array variable and any accumulative function applied to its elements, 2) converting constraints and objective functions to recursive functions of array elements (i.e., subproblems), and 3) checking the recursive functions for the two properties.

Step 1: Candidate Identification

DP Solver checks for any declared variable of the array data type, because DP is usually applied to arrays. Additionally, if a variable is a set, as shown by the knapsack variable in Figure 1, DP Solver converts it to a boolean array b such that b[i] indicates whether the i-th item of the set is chosen. By doing so, DP Solver can also handle problems with set variables. We name the identified array or set variables candidate variables.

Next, DP Solver checks whether any candidate variable is used in at least one constraint and one objective function; if so, DP Solver may find optimization opportunities when enumerating all value assignments to that variable. In Figure 1, both the constraint and the objective can be treated as functions of the newly created array b, as below:

    \sum_{i=1}^{N} b[i] \cdot W[i] \le C, where b[i] \in \{0, 1\}    (3.1)

    maximize \sum_{i=1}^{N} b[i] \cdot V[i], where b[i] \in \{0, 1\}    (3.2)

When these functions involve an accumulative operator (e.g., sum), it is feasible to further break the problem down into subproblems. Thus, DP Solver treats the variable b, together with the related functions, as a candidate for DP application.

Step 2: Function Conversion

DP Solver tentatively converts the relevant constraint and objective functions into step-wise recursive functions in order to identify subproblems and prepare for the subsequent property check. Specifically, DP Solver unfolds all accumulative functions recursively. For instance, the constraint formula (3.1) can be converted to

    f_0(W, b) = 0
    f_1(W, b) = f_0(W, b) + b[1] \cdot W[1]
    c_1(W, b) = C - f_1(W, b)
    c_1(W, b) \ge 0
    ...
    f_N(W, b) = f_{N-1}(W, b) + b[N] \cdot W[N]
    c_N(W, b) = C - f_N(W, b)
    c_N(W, b) \ge 0    (3.3)

Here, f_i(W, b) (i \in [1, N]) computes the weight sum of the first i items, given (1) the item weight array W and (2) the boolean array b. The function c_i(W, b) subtracts f_i(W, b) from the capacity C to define the constraint for each subproblem; c_i means "the remaining capacity limit for the last (N - i) items". The value of c_i helps decide whether an item should be added to the knapsack: namely, if the weight of the (i+1)-th item is greater than the value of c_i, the item should not be added.
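To make Steps 1 and 2 concrete, the fragment below writes the boolean-array encoding and the unfolded functions of (3.3) as an explicit MiniZinc model. It is only our illustration: the identifiers b, f, and rem are hypothetical names we chose, and the intermediate representation DP Solver actually builds may differ.

    % Illustrative sketch only: the Figure 1 model rewritten over a 0/1 array b
    % (Step 1), with the step-wise functions of (3.3) made explicit (Step 2).
    int: N;
    int: C;
    array[1..N] of int: V;
    array[1..N] of int: W;

    % Step 1: the set variable of Figure 1 replaced by a 0/1 array b,
    % matching (3.1) and (3.2); b[i] = 1 iff item i is put in the knapsack.
    array[1..N] of var 0..1: b;

    % f[i] mirrors f_i(W, b) in (3.3): the weight sum of the first i items
    % (assuming non-negative weights).
    array[0..N] of var 0..sum(W): f;
    constraint f[0] = 0;
    constraint forall(i in 1..N)(f[i] = f[i-1] + b[i] * W[i]);

    % rem[i] mirrors c_i(W, b) in (3.3): the capacity left after the first i items.
    array[1..N] of var int: rem;
    constraint forall(i in 1..N)(rem[i] = C - f[i] /\ rem[i] >= 0);

    % Objective, as in (3.2).
    solve maximize sum(i in 1..N)(b[i] * V[i]);

Solving this sketch directly is no faster than Figure 1; the value of the unfolding is that each index i, together with its remaining capacity rem[i], now delimits an explicit subproblem, which prepares the ground for checking the two DP properties described above.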
Similarly, we defined and proved a related theorem when the max function used in Theorem 3.1 is replaced with min. Furthermore, there are problems whose opt_i(·) functions are