PONTIFICIA UNIVERSIDAD CATÓLICA DE VALPARAÍSO
FACULTAD DE INGENIERÍA
ESCUELA DE INGENIERÍA INFORMÁTICA

“SOLVING COMBINATORIAL PROBLEM USING ADAPTIVE PROCESS OF ENUMERATION STRATEGIES”

MARY CLAUDIA ARANDA CABEZAS

TESIS DE GRADO MAGÍSTER EN INGENIERÍA INFORMÁTICA

Julio 2008

Pontificia Universidad Católica de Valparaíso
Facultad de Ingeniería
Escuela de Ingeniería Informática

“SOLVING COMBINATORIAL PROBLEM USING ADAPTIVE PROCESS OF ENUMERATION STRATEGIES”

MARY CLAUDIA ARANDA CABEZAS

Profesor Guía: Broderick Crawford Labrín

Programa: Magíster en Ingeniería Informática

Julio 2008

Resumen

En la actualidad, la comunidad de Programación con Restricciones utiliza una aproximación completa de resolución que alterna fases de propagación de restricciones y enumeración. En este contexto se han realizado numerosos estudios tendientes a mostrar la efectividad de las estrategias de enumeración y el efecto que tienen sobre el proceso de resolución las heurísticas de selección de variable y valor que las constituyen. Sin embargo, es imposible prever los efectos reales de estas estrategias. Por otra parte, si bien estudios previos han demostrado que diferentes estrategias tienen rendimientos significativamente diferentes, por lo cual es crucial seleccionar una buena estrategia de enumeración, no es posible determinar una estrategia de enumeración que sea la mejor para un conjunto amplio de problemas. De esta manera, el propósito de este proyecto es evaluar un proceso de resolución adaptativo que permita encontrar soluciones a diferentes problemas, donde las posibilidades de adaptación tienen relación con cambiar la estrategia de enumeración utilizada al momento de detectar su mal rendimiento durante el proceso. La detección del mal rendimiento de la estrategia de enumeración se realiza mediante la observación continua del proceso de resolución, desde donde se obtiene información relevante del estado de la resolución.

Abstract

At present, the Constraint Programming community uses a complete approach that alternates phases of constraint propagation and enumeration. In this context, numerous studies have been conducted to show the effectiveness of enumeration strategies and the effect that the variable and value selection heuristics composing them have on the resolution process. However, the effects and efficiencies of these strategies are generally unpredictable. Moreover, although previous studies have shown that different strategies perform significantly differently, so that selecting a good enumeration strategy is crucial, it is not possible to determine an enumeration strategy that is best for a broad set of problems. Thus, the purpose of this project is to evaluate an adaptive resolution process able to find solutions to a broad spectrum of problems, where the ability to adapt consists in changing the strategy in use as soon as its poor performance during the process is detected. The poor performance of an enumeration strategy is detected through continuous observation of the resolution process, from which relevant information about the state of the resolution is obtained.

Contents

List of Figures

List of Tables

1 Project Description
1.1 Introduction
1.2 Goals
1.2.1 Hypothesis
1.2.2 General Goal
1.2.3 Specific Goals
1.3 Methodology
1.4 Problem Definition and Motivation
1.5 Outline of this Thesis

2 Constraint Programming
2.1 Definition
2.2 Combinatorial Problems
2.2.1 Constraint Satisfaction Problems
2.2.2 Constraint Satisfaction Optimisation Problems
2.3 Modelling
2.4 CSP Solving
2.4.1 Basic Search Strategies for Solving CSPs
2.4.1.1 General Search Strategies
2.4.1.2 Hybrid Techniques
2.4.2 Consistency Techniques
2.4.2.1 Node Consistency
2.4.2.2 Arc Consistency
2.4.2.3 Path Consistency

2.5 Constraint Propagation and Enumeration
2.5.1 Variable Selection Heuristics
2.5.1.1 Static Selection Heuristics
2.5.1.2 Dynamic Selection Heuristics
2.5.2 Value Selection Heuristics

3 Adaptive
3.1 Elemental Definitions
3.2 Adaptive Model
3.2.1 Adaptive Constraint Satisfaction
3.2.2 Adaptive Enumeration Strategies
3.2.3 Adaptive Constraint Engine

4 Adaptive Approach Based on Enumeration Strategies
4.1 Scheme Proposed
4.1.1 Solver
4.1.2 Library Strategies
4.1.3 Observation
4.1.4 Analysis
4.2 Enumeration Strategies
4.3 Indicators
4.4 Problems
4.4.1 N-Queens
4.4.1.1 Model
4.4.1.2 Script
4.4.2 Square
4.4.2.1 Model
4.4.2.2 Script
4.4.3
4.4.3.1 Model
4.4.3.2 Script
4.4.4
4.4.4.1 Model
4.4.4.2 Script

5 Experimental Results
5.1 Initial Considerations
5.2 Enumeration Strategies Alone vs Adaptive Process
5.3 Tuning of the Backtracking

6 Conclusion and Future Work

Bibliography

List of Figures

1.1 Constraint Programming Structure
1.2 10-Queens solving with 3 Different Strategies

2.1 Example: No Node Consistency
2.2 Example: Node Consistency
2.3 Example: No Arc Consistent CSP

2.4 Directional Arc Consistency under the order x2 → x1
2.5 Arc Consistency CSP
2.6 No Path Consistency CSP
2.7 Path Consistency CSP

3.1 Taxonomy of parameter control [1]
3.2 Adaptive Constraint Satisfaction [13]
3.3 The Dynamic Strategy Framework [34]
3.4 Organization and Management Advisers to make a decision [17]

4.1 Scheme of Adaptive Solving Process

List of Tables

1.1 Enumerations (E), Backtracks (B) in the resolution of 10-Queens and Magic Squares (N=9)
1.2 Enumerations (E), Backtracks (B) in the resolution of 20-Queens and Magic Squares (N=16)

2.1 Justification for Variable Selection Heuristics
2.2 Justification for Value Selection Heuristics

5.1 N-Queens: Enumeration Strategies Alone vs Adaptive Process measuring Backtracks (B)
5.2 Latin Square: Enumeration Strategies Alone vs Adaptive Process measuring Backtracks (B)
5.3 Tuning of the backtracking to adaptive process in N-Queens problems

Chapter 1

Project Description

1.1 Introduction

Constraint Programming (CP) has been defined as a software technology used to describe and effectively solve large and complex problems, particularly combinatorial ones [5, 7]. These problems can be modelled as Constraint Satisfaction Problems (CSPs) [2, 48]. A CSP is defined by a set of variables, each one with an associated domain, and a set of constraints. The domain corresponds to the set of possible values that can be assigned to a variable. Each constraint is defined on a group of variables, restricting the combinations of values that can be assigned to these variables. Solving a CSP consists in finding a value for each variable in such a way that all the constraints are satisfied [54].

For the resolution of many of these combinatorial problems, the Constraint Programming community uses a complete approach that alternates phases of enumeration and constraint propagation [2, 15, 54, 34]; the process involves systematically exploring all of the possible assignments of values to variables, forming a search tree. When an enumeration phase is executed, basically two questions must be answered: which variable to choose to enumerate, and which value to assign to the selected variable. The first question is answered by the variable selection heuristic, in charge of determining the order in which the variables are enumerated, and the second by the value selection heuristic. Together, both heuristics constitute what is known as an Enumeration Strategy [51, 2, 10, 20, 47], which guides the enumeration phase. In the CP environment, enumeration strategies are crucial to the performance of the resolution process [3, 20, 21, 25] and may substantially decrease the cost of searching for a solution.

In general, there are many papers about enumeration strategies. Some of them focus their efforts on establishing static enumeration strategies, that is, the enumeration strategy is determined only once, before the resolution process starts, and remains unchanged during the whole process. However, establishing a good enumeration strategy before initiating the search is a very expensive job and the results are usually unpredictable. On the other hand, the fact that the enumeration strategy remains constant during the whole process of solving a CSP represents an important limitation

if the goal is to resolve different kinds of problems in an efficient way, because a given strategy works efficiently only for a fairly limited set of problems, in the worst case for a unique problem, where there usually exists a unique problem-solver combination that solves the problem efficiently.

In order to deal with the inconvenience described in the previous paragraph, this work presents the possibility of having a resolution process that can adapt itself: one capable of determining, based on information generated during the process itself, whether the enumeration strategy currently in use is performing well. When it is not, the process must be capable of detecting this situation and replacing the current strategy with another available one that might improve the performance of the resolution process. To detect the efficiency of the strategy it is necessary to perform continuous observations of the resolution process, which is the source of relevant information about the status of the resolution (level of improvement). This type of information has received different names, such as metrics [17], monitors [13], and snapshots [34].

1.2 Goals

1.2.1 Hypothesis

The dynamic adjustment of the solving process (Enumeration Strategy), based on the information obtained during its execution, will make it possible to improve the computational behaviour and to find solutions to different problems, regardless of the kind of problem to solve.

1.2.2 General Goal

To design and evaluate an adaptive mechanism of Constraint Satisfaction guided by information obtained during the solving process.

1.2.3 Specific Goals

1. Understanding the theoretical basis of the adaptive approaches that exist in the literature.

2. Understanding the theoretical context of detecting information generated during the execution of the solving process, and defining and using such information as indicators of its state.

3. Definition of representative and relevant indicators of the solving process.

4. Modelling of benchmark combinatorial problems with Constraint Programming.

5. Design of enumeration strategies to use for the solving process.

6. Evaluation of an adaptive mechanism and analysis of the results.

1.3 Methodology

For the development of the project and the preparation of the various reports associated with it, a theoretical study will be carried out in the first step, which includes the use of books and documents of an academic nature related to the subject of the project. The academic documents include papers presented at conferences related to the topic, documents published by national and international universities, and material given in courses at such universities.

The theoretical study seeks to obtain information relevant to the issue and to build a solid basis for the creation of bibliographical references.

In relation to the practice or implementation, the adaptive proposal raised in this work is expected to be developed using the software system ECLiPSe. At this stage it is important to consider the particular characteristics of this development, which differ from traditional development, and to employ an agile development approach in the most appropriate way possible.

Then, the work methodology can be divided into the following four topics:

1. Analysis of all the theoretical aspects involved.

2. Design of adaptive process based on enumeration strategies, including the design of: the enumeration strategies to implement, problems to solve, measuring indicators and the integration of each of these elements.

3. Implementation of the process designed in the previous phase.

4. Performing tests and analysing the results.

1.4 Problem Definition and Motivation

In the structure underlying CP shown in Figure 1.1, modelling is the phase that represents a Constraint Satisfaction Problem, with a set of variables, a domain of values for each variable, and a set of constraints that delimit the set of values that the variables can take simultaneously. Then, in the search phase, the Constraint Programming community currently uses a full approach that alternates phases of enumeration and constraint propagation [2, 15, 54, 34], where constraint propagation prunes the search tree, erasing values that cannot be part of the solution, and enumeration splits the original CSP into 2 smaller CSPs until it reaches a failed or solved CSP; for this it creates a branch by labelling a variable x of the problem with a value v of its domain (x = v), and another one (x ≠ v) when the first branch cannot be satisfied [45, 15]. This way, when an enumeration phase is executed, 2 situations must be resolved: which variable to select to enumerate, and which value to select to assign to that variable. The first question is answered using the variable selection heuristic, which determines the order in which the variables are enumerated, and the second using the value selection heuristic. Both heuristics as a whole constitute what is called an Enumeration Strategy [51, 2, 10, 20, 47], which guides the enumeration phase; in the Constraint Programming environment they are crucial to the performance of the resolution process [3, 20, 21, 25] and might substantially decrease the cost of the search for a solution.

CP
  Model
    Variables
    Domains
    Constraints
  Search
    Propagation
    Enumeration
      Variable Selection Heuristics
      Value Selection Heuristics

Figure 1.1: Constraint Programming Structure
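As an illustration of the two questions an enumeration phase must answer, the following Python sketch pairs a minimum-domain variable-selection heuristic with a smallest-value value-selection heuristic. The helper names and the toy domains are hypothetical; the thesis itself implements its strategies in ECLiPSe.

```python
# Illustrative sketch: an enumeration strategy as a pair of heuristics.

def min_domain_variable(domains, assigned):
    """Variable selection heuristic: unassigned variable with the smallest domain."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

def smallest_value(domain):
    """Value selection heuristic: smallest value of the chosen variable's domain."""
    return min(domain)

def enumeration_step(domains, assigned):
    """One enumeration step answers both questions: which variable, which value."""
    var = min_domain_variable(domains, assigned)
    val = smallest_value(domains[var])
    return var, val

domains = {"x1": {1, 2, 3}, "x2": {2, 3}, "x3": {1, 2, 3, 4}}
var, val = enumeration_step(domains, assigned={})
# x2 has the smallest domain, and its smallest value is 2
```

Swapping either helper (for example, choosing the maximum-domain variable or a middle value) yields a different enumeration strategy, which is exactly the degree of freedom the strategies S1–S4 below exploit.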

The importance of enumeration strategies and their effect on the resolution process can be noticed in Figure 1.2, which graphs the results obtained in the resolution of the 10-Queens problem [2, 48] using 3 different enumeration strategies.

Figure 1.2 clearly reflects the fact that different strategies have significantly different performances, and because of that an adequate choice of enumeration strategy is crucial to the resolution process [9]; however, the best strategy cannot be predicted. Nor is it possible to establish, in a general way, a good strategy for a broad set of problems, which is reflected in the results obtained by resolving several problems with an identical series of strategies.

Figure 1.2: 10-Queens solving with 3 Different Strategies

Tables 1.1 and 1.2 show the results obtained in the resolution of instances of N-Queens and Magic Square employing the same series of strategies. Considering the measured numbers of backtracks and enumerations, it is possible to determine here that an enumeration strategy capable of solving both problems efficiently does not exist; rather, one that performs well on one problem probably does not in the resolution of the other problem considered. The strategies considered in the example are structured in the following way:

• S1 = Minimum Domain Size + Smaller Value of the Domain

• S2 = Minimum Domain Size + Average Value of the Domain

• S3 = Maximum Domain Size + Smaller Value of the Domain

• S4 = Maximum Domain Size + Average Value of the Domain

Table 1.1: Enumerations (E), Backtracks (B) in the resolution of 10-Queens and Magic Squares (N=9)

Strategies   N-Queens (N=10)       Magic Squares (N=9)
             (E)       (B)         (E)       (B)
S1           17        24          1         4
S2           0         6           5         8
S3           807       820         7         16
S4           1         6           10        22

Table 1.2: Enumerations (E), Backtracks (B) in the resolution of 20-Queens and Magic Squares (N=16)

Strategies   N-Queens (N=20)       Magic Squares (N=16)
             (E)       (B)         (E)       (B)
S1           60        76          717       741
S2           27        41          2449      2465
S3           650717    650752      45043     45097
S4           252952    252979      847055    847094

In view of what has been shown previously, this project plans to implement a resolution process that can adapt itself and that allows solutions to different problems to be found efficiently. The possibilities of adaptation are related to changing the enumeration strategy in use according to its performance in the resolution of the problem, where performance is measured based on the information generated during the resolution process. In this way it is expected to be able to solve different problems without being conditioned to a single problem-solver combination.

1.5 Outline of this Thesis

The thesis begins with a detailed presentation of constraint programming (Chapter 2). It then details the topic of adaptation (Chapter 3); this includes a brief analysis of the need for adaptation in CSP solving and the presentation of some adaptive approaches. In the next chapter (Chapter 4) we present an adaptive scheme to assess the resolution process on various problems. Chapter 5 describes some experimental results. The last chapter (Chapter 6) concludes the thesis and describes future work.

Chapter 2

Constraint Programming

2.1 Definition

Constraint Programming is a programming paradigm where relationships between variables may be specified in the form of constraints. Unlike other paradigms, it does not indicate the sequence of steps needed to achieve a solution, but specifies the properties that a solution must have. To solve a problem by Constraint Programming, it is first formulated as a CSP; this part of the method is called modelling the problem, after which the problem is solved by some particular method [2]. Several Constraint Programming resolution systems can be found, specialized for different kinds of problems, for example the Simplex Method to solve linear problems [16] or local search to solve the boolean satisfiability problem (SAT) [23]. This work will focus on Constraint Programming systems based on finite domains, that is, systems that solve problems using constraint propagation, involving integer variables with finite domains [42]. These systems operate on the basis of two components: constraint propagation and enumeration.

2.2 Combinatorial Problems

Combinatorial problems differ from other types of problems, roughly, by the fact that at least one of the problem's variables is restricted to a finite set of discrete values. We find in the literature two different types of combinatorial problems: constraint satisfaction problems, whose objective is simply to find a solution satisfying all constraints, and constraint satisfaction optimisation problems, which have to find the best solution, according to some criterion, among all feasible solutions.

7 2.2.1 Constraint Satisfaction Problems

A constraint satisfaction problem (CSP) is a problem with a finite set of variables X = {x1, x2, ..., xn}, each one associated with a finite domain Dx1, ..., Dxn, and a finite set C of constraints that restricts the values the variables can take simultaneously [48]. The task is to assign a value to each variable such that all the constraints are satisfied. The notation used for a CSP is the following [54]:

⟨C ; x1 ∈ Dx1, ..., xn ∈ Dxn⟩

In this way, the task in a CSP is to assign a value to each variable such that all the constraints are satisfied simultaneously. Depending on the requirements of an application, one may want to find one or more solutions, or to obtain an optimal solution on the basis of a previously defined objective function; in this particular case the subject is a CSOP.

We now define formally some basic concepts from constraint satisfaction problems that are necessary for the understanding of this thesis.

Definition 2.1. A label is a variable-value pair that represents the assignment of the value to the variable [48]. We use (xi, a) to denote the label assigning the value a to the variable xi. A tuple or compound label is the assignment of values to a set of variables; we use ((x1, a1), (x2, a2), ..., (xn, an)) to denote the assignment of a1, a2, ..., an to x1, x2, ..., xn respectively.

Definition 2.2. A solution is an assignment of values to all the variables such that all constraints are satisfied, this is, a solution is a consistent tuple containing all variables in the CSP. On the other hand, a partial solution is a consistent tuple containing some of the variables [48].
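Definitions 2.1 and 2.2 can be made concrete with a short Python sketch; the three variables and the two constraints below are hypothetical examples, not taken from the thesis.

```python
# A compound label over all variables is a solution iff it is consistent:
# every value belongs to its variable's domain and every constraint holds.

variables = ["x1", "x2", "x3"]
domains = {"x1": {1, 2, 3}, "x2": {1, 2, 3}, "x3": {1, 2, 3}}
constraints = [
    lambda t: t["x1"] != t["x2"],   # binary constraint: x1 != x2
    lambda t: t["x2"] < t["x3"],    # binary constraint: x2 < x3
]

def is_solution(compound_label):
    """True when the compound label assigns an in-domain value to every
    variable and satisfies all the constraints."""
    return (all(compound_label[v] in domains[v] for v in variables)
            and all(c(compound_label) for c in constraints))

print(is_solution({"x1": 1, "x2": 2, "x3": 3}))  # True
print(is_solution({"x1": 2, "x2": 2, "x3": 3}))  # False: violates x1 != x2
```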

2.2.2 Constraint Satisfaction Optimisation Problems

A constraint satisfaction optimisation problem (CSOP) is defined as a CSP together with an objective function f which maps every solution tuple to a numerical value [48, 54]. The objective function value is often represented by a variable z, together with the constraint maximize z or minimize z for a maximization or a minimization problem, respectively. A CSOP can be solved as a sequence of CSPs using a process such as the one described below [50]:

Assume that we want to minimize f. The objective function value is represented by a variable z; then, when a solution to the CSP is found, the corresponding value of z, say z = β, serves as an upper bound for the optimal value of f. We then add the constraint z < β to all CSPs in the search tree and continue.
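The bound-tightening scheme above can be sketched in Python as follows. `solve_csp` is a deliberately naive exhaustive solver over small explicit domains, used only to illustrate the z < β mechanism; the CSOP at the bottom is a hypothetical example.

```python
# Sketch: solve a minimisation CSOP as a sequence of CSPs, adding z < beta
# after each solution found, until the bounded CSP becomes infeasible.
from itertools import product

def solve_csp(domains, constraints):
    """Return the first full assignment satisfying all constraints, or None."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        t = dict(zip(names, values))
        if all(c(t) for c in constraints):
            return t
    return None

def minimise(domains, constraints, f):
    best = None
    constraints = list(constraints)
    while True:
        sol = solve_csp(domains, constraints)
        if sol is None:                   # no solution under the current bound:
            return best                   # the best found so far is optimal
        beta = f(sol)                     # new upper bound z = beta
        best = sol
        constraints.append(lambda t, b=beta: f(t) < b)  # add z < beta

# hypothetical CSOP: minimise x1 + x2 subject to x1 != x2
domains = {"x1": [1, 2, 3], "x2": [1, 2, 3]}
opt = minimise(domains, [lambda t: t["x1"] != t["x2"]],
               lambda t: t["x1"] + t["x2"])
```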

Definition 2.3. A solution α is preferred with respect to another solution α′ if the value of the objective function f under α is better than the value under α′. Basic tasks in CSOPs are:

• Finding out if a solution exists

• Finding one solution

• Enumerating all solutions

• Finding the best solution with respect to some criterion

2.3 Modelling

The model given to a problem represents an abstraction of reality, and the level of detail of this abstraction can determine the feasibility of the given model and the difficulty of solving it. This abstraction is given by the following:

Definition 2.4 (Definition of Variable). A decision variable is a pair ⟨x, D⟩, where x is a variable (symbol) and D is its domain, defined to be its range of possible values. There are many types of decision variables, distinguished by the type of their domain. When the domain contains numbers only, the variables are called numerical variables; when the domain contains boolean values only, the variables are called boolean variables; and when the domain contains an enumerated type of objects, the variables are called symbolic variables [48]. For example:

• Boolean: It can be either true or false.

• Discrete: It can take on integer values, e.g. 1 to 5.

• Continuous: It can take on real values in one of the intervals: (−∞, b], (−∞, +∞), [a, +∞) or [a, b]

• Symbolic: It can take on defined values, e.g. if x represents a day of the week, it can take on the values Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday.

Definition 2.5 (Definition of Constraint). A constraint is a relation between a set of variables, which restricts the values that variables can take simultaneously. A constraint can be composed of one or more variables, where the number of variables involved is called arity.

• Unary constraint, consists of a single variable

Example: x1 < 7

• Binary constraint, consists of two variables

Example: x3 + x4 = 3

• Non-binary or n-ary constraint, involves an arbitrary number of 3 or more variables

Example: x1 + 3x2 − x3 + x4 ≤ 10

A CSP with unary and binary constraints only is called a binary CSP, or binary constraint problem [48, 32]. In general, most researchers working with Constraint Satisfaction Problems focus on binary CSPs because of the simplicity associated with them, compared with non-binary CSPs, and also because every non-binary CSP can be transformed into an equivalent binary CSP [4, 39]. There are mainly two techniques to transform non-binary constraints into binary ones: dual encoding and hidden variable encoding [41].

On the other hand, the constraints are also distinguished by the structure of their relations. For example some constraint types are:

• Logical: x = True AND y = False

• Arithmetic: x ∗ y = 300

• Cardinality: Set S only has three elements

• Disjunctive: x = 2 OR x = 0

2.4 CSP Solving

2.4.1 Basic Search Strategies for Solving CSPs

2.4.1.1 General Search Strategies

These techniques are focused on exploring the search space to solve the problem. Such techniques may be complete, exploring the whole space in search of a solution, or incomplete, exploring only one part of the search space. The techniques that explore the entire space guarantee a solution, if one exists, or show that the problem has no solution. These strategies were developed for general applications and do not make use of the constraints to improve their efficiency.

Generate and Test (GT) This algorithm generates, in a systematic manner, all possible complete labellings [31, 6]. As each complete labelling is generated, it checks whether it is a solution, in other words, whether the labelling satisfies all the constraints; the first labelling that satisfies all the constraints is the solution to the problem.
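Generate and Test can be sketched in a few lines of Python; the small two-variable CSP used to exercise it is hypothetical.

```python
# Generate and Test: enumerate every complete labelling and test it
# against all the constraints; no constraint information guides generation.
from itertools import product

def generate_and_test(domains, constraints):
    """Return the first complete labelling satisfying all constraints,
    or None if the CSP has no solution."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        labelling = dict(zip(names, values))
        if all(c(labelling) for c in constraints):
            return labelling
    return None

domains = {"x1": [1, 2, 3], "x2": [1, 2, 3]}
constraints = [lambda t: t["x1"] > t["x2"]]
print(generate_and_test(domains, constraints))  # {'x1': 2, 'x2': 1}
```

Note how the labellings (1, 1), (1, 2) and (1, 3) are generated and rejected before the first consistent one is found; this blindness is exactly what the hybrid techniques of Section 2.4.1.2 avoid.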

Chronological Backtracking (BT) This algorithm traverses the tree using depth-first search, and at each new labelling checks whether the partial labelling is locally consistent. If so, it continues with the labelling of a new variable. On the contrary, if it detects an inconsistency, it tries to assign a new value to the last labelled variable, if possible, and otherwise it rolls back to the immediately preceding labelled variable [31, 6].

PROCEDURE Backtracking (k, V[n])
BEGIN
    V[k] = Selection(dk);
    if Check(k, V[n]) then
        if k = n then
            Return V[n];
        else
            Backtracking(k+1, V[n]);
        end
    else
        if values(dk) then
            Backtracking(k, V[n]);
        else
            if k = 1 then
                Return 0;
            else
                Backtracking(k-1, V[n]);
            end
        end
    end
END
Algorithm 1: Chronological Backtracking
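Algorithm 1 can be transcribed into runnable Python as the following recursive sketch; it folds the retry-with-the-next-value branch into a loop over the domain, and uses 4-Queens (queen of row k placed in column `assignment[k]`) as a hypothetical test problem.

```python
# Chronological Backtracking: label variables in a fixed order, checking the
# partial labelling after each assignment and rolling back on inconsistency.

def backtracking(k, assignment, domains, check):
    """Label variables k, k+1, ...; return a full assignment or None."""
    if k == len(domains):
        return dict(assignment)            # all variables labelled: solution
    for value in domains[k]:               # try each value of d_k in turn
        assignment[k] = value
        if check(k, assignment):           # partial labelling consistent?
            result = backtracking(k + 1, assignment, domains, check)
            if result is not None:
                return result
    assignment.pop(k, None)                # no value works: roll back to k-1
    return None

def no_attack(k, a):
    """4-Queens consistency check for the newly labelled row k."""
    return all(a[j] != a[k] and abs(a[j] - a[k]) != k - j for j in range(k))

solution = backtracking(0, {}, [range(4)] * 4, no_attack)
# first solution found in lexicographic order: {0: 1, 1: 3, 2: 0, 3: 2}
```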

2.4.1.2 Hybrid Techniques

The search techniques and some consistency techniques can be used independently to completely solve a constraint satisfaction problem, but this rarely happens, so a combination of both approaches is the most common way to solve a CSP. By including consistency techniques in the search process, hybrid resolution techniques are obtained, where the consistency checks narrow the search space, thus reducing the cost of the process.

Look-Backward Algorithms These algorithms try to exploit information about the problem in order to behave better in dead-end situations. Look-Backward algorithms do consistency checking backwards, that is, between the variable currently being labelled and the variables labelled in the past. Here are some of the variants of Look-Backward algorithms [5]:

• Backjumping (BJ): rather than going back to the previously labelled variable, as Chronological Backtracking does, BJ jumps to the variable xj that is closest to the current variable xi, where j < i and xj is in conflict with xi [37, 5].

• Conflict-directed Backjumping (CBJ): in this algorithm, each variable xi has a conflict set formed by the past variables that are in conflict with xi. In this way, when the consistency check between the current variable xi and a past variable xj fails, xj is added to the conflict set. In dead-end situations, CBJ jumps to the deepest variable in its conflict set, for example xk with k < i, and the conflict set of xi is added to that of xk in order to avoid losing information [37, 5].

Look-Ahead Algorithms These algorithms perform consistency checking forward at each instantiation, integrating an inference process into the search process itself. This is called propagation, which allows: (i) narrowing the constraints and domains of the future variables to be labelled, limiting the search space, and (ii) finding inconsistencies before they appear. In short, they try to discover whether the current partial assignment can be extended to a global solution, otherwise producing a backtracking point [48].

• Forward Checking (FC): at each step of the search, it tests forward the labelling of the current variable against all values of the future variables that are constrained with the current variable. Those values inconsistent with the current label are removed from the domains; if after this elimination the domain of a future variable is empty, the labelling of the current variable is undone and a new value is tested. If no value is consistent, Chronological Backtracking is performed [37, 5, 2].

• Minimal Forward Checking (MFC): instead of checking the current labelling forward against all values of the future variables, MFC only checks against the values of the future variables until it finds one that is consistent. Thus, if the algorithm goes back, it only carries out checks with the remaining values not yet checked [5].

1. Select xi;
2. Labelling xi ← ai : ai ∈ Dxi
3. Forward checking: remove from the domains of the variables not yet labelled (xi+1, ..., xn) those values inconsistent with the label (xi, ai), according to the set of constraints.
4. If possible values remain in the domains of all the variables still to be instantiated, then:
   - if i < n, increment i and go to step 1;
   - if i = n, stop with the solution.
5. If there is a variable still to be instantiated with no possible values left in its domain, then undo the effects generated by the label xi ← ai and:
   - if there are still values to try in Dxi, go to step 2;
   - if there are no values left:
     - if i > 1, decrement i and go to step 2;
     - if i = 1, stop unresolved.

Algorithm 2: Forward Checking Pseudocode
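The steps of Algorithm 2 can be sketched in Python as follows. This is an illustrative binary-constraint version: domains are copied at each level so that undoing a labelling (step 5) is trivial, and 4-Queens is again used as a hypothetical test problem.

```python
# Forward Checking: after each labelling, prune inconsistent values from the
# domains of the future variables; an empty future domain triggers backtracking.
import copy

def forward_checking(k, assignment, domains, conflicts):
    """conflicts(i, a, j, b) is True when x_i = a and x_j = b are incompatible."""
    if k == len(domains):
        return dict(assignment)
    for value in domains[k]:
        assignment[k] = value
        pruned = copy.deepcopy(domains)      # local copy: undoing is trivial
        pruned[k] = [value]
        for j in range(k + 1, len(domains)): # look ahead on future variables
            pruned[j] = [b for b in pruned[j] if not conflicts(k, value, j, b)]
        if all(pruned[j] for j in range(k + 1, len(domains))):  # no wipe-out
            result = forward_checking(k + 1, assignment, pruned, conflicts)
            if result is not None:
                return result
    assignment.pop(k, None)
    return None

def attacks(i, a, j, b):
    """4-Queens: queens in rows i and j attack on a column or a diagonal."""
    return a == b or abs(a - b) == abs(i - j)

sol = forward_checking(0, {}, [list(range(4)) for _ in range(4)], attacks)
```

Because every value left in `domains[k]` has already survived the pruning done by the ancestors, no backward consistency check against past variables is needed, which is the defining trait of look-ahead algorithms.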

2.4.2 Consistency Techniques

The main difficulty often found in search algorithms is the emergence of local inconsistencies. Local inconsistencies are individual values (or combinations of values) of variables that cannot participate in the solution because they fail to meet some consistency property, which means they do not satisfy some constraint. The algorithms that remove inconsistent values from the domains of variables, used in the constraint propagation phase, are called consistency techniques. In the literature there are different levels of local consistency [5, 7, 2, 6, 28].

2.4.2.1 Node Consistency

Forcing this level of consistency ensures that all values in the domain of a variable satisfy all unary constraints on that variable [31, 5, 6].

∀xi ∈ X, ∀a ∈ Dxi : a satisfies Cxi

Example: Consider the CSP comprising the variables X = {x1, x2}, their domains Dx1 = {1, 2, 3, 4, 5} and Dx2 = {1, 2, 3, 4, 5} respectively, and the constraints C = {x1 ≤ 3, x2 ≥ 1, x1 = x2}.

Figure 2.1: Example: No Node Consistency (x1 ∈ {1, 2, 3, 4, 5}, x2 ∈ {1, 2, 3, 4, 5}, unary constraints x1 ≤ 3 and x2 ≥ 1, binary constraint x1 = x2)

It is possible to see in Figure 2.1 that the CSP posed is not node consistent, because the node x1 does not satisfy node consistency, that is, x1 has values in its domain that do not meet the unary constraint x1 ≤ 3. Thus, for the previous CSP to be node consistent, it is enough to eliminate from the domain of variable x1 the values that do not meet the unary constraint (4 and 5), leaving the graph of the previous figure as follows:

Figure 2.2: Example: Node Consistency (x1 ∈ {1, 2, 3}, x2 ∈ {1, 2, 3, 4, 5})
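Enforcing node consistency amounts to filtering each domain through the unary constraints on its variable. The short sketch below (with a hypothetical constraint representation as predicates) reproduces the reduction from Figure 2.1 to Figure 2.2.

```python
# Node consistency: remove from each variable's domain every value that
# violates a unary constraint on that variable.

def node_consistency(domains, unary):
    """unary maps a variable name to a predicate over single values;
    variables without a unary constraint keep their full domain."""
    return {v: {a for a in dom if unary.get(v, lambda _: True)(a)}
            for v, dom in domains.items()}

domains = {"x1": {1, 2, 3, 4, 5}, "x2": {1, 2, 3, 4, 5}}
unary = {"x1": lambda a: a <= 3, "x2": lambda a: a >= 1}
nc = node_consistency(domains, unary)
# nc["x1"] == {1, 2, 3}: the values 4 and 5 violate x1 <= 3 and are removed,
# while nc["x2"] is unchanged, as in Figure 2.2
```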

2.4.2.2 Arc Consistency

A CSP is said to be arc consistent if every binary constraint is arc consistent [2]. A binary constraint C on variables x1 and x2, whose domains are Dx1 and Dx2 respectively, is arc consistent if:

∀a ∈ Dx1 ∃b ∈ Dx2 : (a, b) satisfies C
∀b ∈ Dx2 ∃a ∈ Dx1 : (a, b) satisfies C

Thus, a binary constraint is arc consistent if each value in each domain has a support in the other domain, where b is called a support for a if the pair (a, b) satisfies the constraint [2].

One particular case of arc consistency is directional arc consistency [2, 5], where, given a linear order → on the variables considered, the existence of a support is required in one direction only, namely:

Given the conditions:

∀a ∈ Dx1, ∃b ∈ Dx2 : (a, b) satisfies C, given the order x1 → x2
∀b ∈ Dx2, ∃a ∈ Dx1 : (a, b) satisfies C, given the order x2 → x1

Only one of them needs to be checked.

Example: The CSP shown in Figure 2.3 is composed of X = {x1, x2}, Dx1 = {5, 6, 7, 8, 9, 10}, Dx2 = {3, 4, 5, 6, 7, 8} and C = {x1 < x2}.

[Figure: constraint graph with x1 ∈ {5, 6, 7, 8, 9, 10} and x2 ∈ {3, 4, 5, 6, 7, 8} linked by the constraint x1 < x2]

Figure 2.3: Example: No Arc Consistent CSP

For the CSP in Figure 2.3 to be directed arc consistent under the order x2 → x1, the domains must be adjusted so that each element in the domain of x2 has a support in Dx1. This adjustment is shown in Figure 2.4, where it is possible to see that the CSP is not yet arc consistent. To change the latter and achieve arc consistency, the domains of x1 and x2 should be reduced as shown in Figure 2.5.

[Figure: x1 ∈ {5, 6, 7, 8, 9, 10}, x2 ∈ {6, 7, 8} linked by the constraint x1 < x2]

Figure 2.4: Directional Arc Consistency under the order x2 → x1

[Figure: x1 ∈ {5, 6, 7}, x2 ∈ {6, 7, 8} linked by the constraint x1 < x2]

Figure 2.5: Arc Consistency CSP
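The bidirectional revision can be sketched as follows. This is an illustrative Python sketch (not the thesis implementation), assuming the constraint of the example is x1 < x2, which is consistent with the domain reductions shown in Figures 2.4 and 2.5:

```python
def revise(dom_x, dom_y, rel):
    """Remove from dom_x every value without a support in dom_y."""
    return {a for a in dom_x if any(rel(a, b) for b in dom_y)}

def arc_consistent(dom_x, dom_y, rel):
    """Enforce arc consistency on one binary constraint by revising
    in both directions until a fixed point is reached."""
    changed = True
    while changed:
        nx = revise(dom_x, dom_y, rel)
        ny = revise(dom_y, dom_x, lambda b, a: rel(a, b))
        changed = (nx != dom_x) or (ny != dom_y)
        dom_x, dom_y = nx, ny
    return dom_x, dom_y

# CSP of Figure 2.3: Dx1 = {5..10}, Dx2 = {3..8}, constraint x1 < x2.
d1, d2 = arc_consistent(set(range(5, 11)), set(range(3, 9)),
                        lambda a, b: a < b)
# d1 == {5, 6, 7} and d2 == {6, 7, 8}, the domains of Figure 2.5
```

Revising only the second direction would instead yield the directed arc consistent domains of Figure 2.4.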

2.4.2.3 Path Consistency

It requires that for each pair of values a and b of two variables xi and xj respectively, the labels (xi, a) and (xj, b) satisfy the constraint directly between xi and xj, and that there is a value for each variable along the path between them such that all constraints along the path are met [31, 5]. If every path of length 2 in a constraint graph is path consistent, then the constraint graph is globally path consistent [35].

[Figure: constraint graph with x1 ∈ {4, 5}, x2 ∈ {3, 4}, x3 ∈ {2, 3}; constraints x1 > x2, x2 > x3, x1 > x3]

Figure 2.6: No Path Consistency CSP

Example: The CSP shown in Figure 2.6 is composed of X = {x1, x2, x3}, Dx1 = {4, 5}, Dx2 = {3, 4}, Dx3 = {2, 3} and C = {x1 > x2, x2 > x3, x1 > x3}. It is not path consistent because if x1 takes the value 4 and x3 takes the value 3, there is no value in the domain of x2 that satisfies both x1 > x2 and x2 > x3.

[Figure: the same constraint graph with x1 ∈ {4, 5}, x2 ∈ {3}, x3 ∈ {2}]

Figure 2.7: Path Consistency CSP

2.5 Constraint Propagation and Enumeration

The most usual method for the resolution of a CSP is to use hybrid techniques consisting of alternating phases of constraint propagation and enumeration. Propagation prunes the search tree by eliminating values that cannot participate in a solution. Enumeration consists of dividing the original CSP into two smaller CSPs, creating one branch by instantiating a variable (x = v) and another branch (x ≠ v) for backtracking when the first branch does not contain any solution. In the enumeration phase, the order in which the variables will be considered and the order in which the values of their domains will be instantiated must be established [5]. To establish such orders, variable selection heuristics and value selection heuristics are used, which together constitute the so-called enumeration strategies. The literature has established that selecting a correct order of the variables and values can significantly improve the efficiency of resolution [5, 9, 45, 3], which is why there are various efforts to define this type of heuristics, some of which are described in this section.
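The alternation of propagation and enumeration can be sketched as follows. This is a minimal, illustrative Python sketch (naive propagation, hypothetical names; not the solver used in this thesis), where the loop over the remaining values of the selected variable plays the role of the x ≠ v branch:

```python
def propagate(domains, constraints):
    """Naive propagation: repeatedly delete values without support.
    Constraints are triples (x, y, rel)."""
    domains = {x: set(d) for x, d in domains.items()}
    changed = True
    while changed:
        changed = False
        for x, y, rel in constraints:
            for a in set(domains[x]):
                if not any(rel(a, b) for b in domains[y]):
                    domains[x].discard(a); changed = True
            for b in set(domains[y]):
                if not any(rel(a, b) for a in domains[x]):
                    domains[y].discard(b); changed = True
    return domains

def solve(domains, constraints, select_var, select_val):
    """Alternate propagation and enumeration: branch x = v and, if
    that branch fails, continue on the branch x != v."""
    domains = propagate(domains, constraints)
    if any(not d for d in domains.values()):
        return None                                   # dead end: backtrack
    if all(len(d) == 1 for d in domains.values()):
        return {x: next(iter(d)) for x, d in domains.items()}
    x = select_var(domains)                           # variable selection heuristic
    for v in select_val(domains[x]):                  # value selection heuristic
        result = solve({**domains, x: {v}},           # branch x = v
                       constraints, select_var, select_val)
        if result is not None:
            return result
    return None

# Hypothetical example: x < y with both domains {1, 2, 3}.
solution = solve({"x": {1, 2, 3}, "y": {1, 2, 3}},
                 [("x", "y", lambda a, b: a < b)],
                 select_var=lambda ds: min((x for x in ds if len(ds[x]) > 1),
                                           key=lambda x: len(ds[x])),
                 select_val=sorted)
# solution == {"x": 1, "y": 2}
```

The two heuristics are passed as parameters, which is exactly the point exploited later in this work: the enumeration strategy can be swapped without touching the solving skeleton.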

2.5.1 Variable Selection Heuristics

The main idea is to choose a variable so as to minimize the size of the search tree (the search space explored) and ensure that branches without a solution are pruned as soon as possible. Haralick and Elliott [25] call this the fail-first principle, described as: to succeed, try first where you are most likely to fail [44, 2].

The selection of variables can be static or dynamic, where the terms static and dynamic refer to the moment at which the order in which the variables are considered is fixed. This coincides with the definition used in [5] and [44], and differs from the concept used in [15] and [34], where the idea of dynamism is based on adaptive constraint satisfaction [13].

2.5.1.1 Static Selection Heuristics

A static heuristic generates a fixed order of the variables before initiating the search; the variables are always selected for instantiation in this predefined order. Such heuristics exploit only information from the initial state of the search.

• Minimum Width (MW) [5]: The variables are ordered so that the width of the linearized constraint graph is minimized.

• Maximum Degree (MD) [17, 5]: Known in [9] as Max-Static-Degree, this heuristic orders the variables in decreasing order of their degree in the original constraint graph, where the degree of a variable is the number of variables with which it is connected.

• Minimum Domain Variable (MDV): This heuristic selects variables in increasing order of the cardinality of their domains.

• Min Domain\Degree [17]: Chooses the variable that minimizes the ratio between its domain size and its degree.
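The last static heuristic can be sketched in a few lines. The following is an illustrative Python sketch (hypothetical data, not the thesis implementation) that produces the static ordering used by Min Domain\Degree:

```python
def static_order_min_domain_over_degree(domains, neighbours):
    """Order variables by the ratio |domain| / degree, smallest
    first (the Min Domain\\Degree static heuristic)."""
    return sorted(domains,
                  key=lambda x: len(domains[x]) / max(len(neighbours[x]), 1))

# Hypothetical constraint graph: x1 connected to x2 and x3.
domains = {"x1": {1, 2, 3, 4}, "x2": {1, 2}, "x3": {1, 2, 3}}
neighbours = {"x1": {"x2", "x3"}, "x2": {"x1"}, "x3": {"x1"}}
order = static_order_min_domain_over_degree(domains, neighbours)
# ratios: x1 -> 4/2 = 2.0, x2 -> 2/1 = 2.0, x3 -> 3/1 = 3.0
```

Since the ordering is computed once, before search, it cannot react to domain reductions; that is precisely what the dynamic heuristics of the next section address.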

2.5.1.2 Dynamic Selection Heuristics

A dynamic heuristic can change the instantiation order of the variables dynamically as the search tree is explored. It is based on information generated during the search.

Golomb and Baumert [22] were the first to propose a dynamic ordering heuristic based on choosing the variable with the smallest number of values remaining in its domain.

• Minimum Remaining Values (MRV) [5]: At each step, it selects the uninstantiated variable with the smallest remaining domain.

• Maximum Cardinality (MC) [5]: This heuristic, known as Max Backward Degree in [17], selects the first variable arbitrarily and then selects the variable that is related to the largest number of instantiated variables.

• Maximum Forward Degree [17, 9]: The selected variable is the one that maximizes the number of uninstantiated adjacent variables.

• Domdeg [11, 9]: Equivalent to the heuristic Min Domain\Forward Degree; it selects the variable that minimizes the ratio between the domain size and the forward degree, the latter corresponding to the number of uninstantiated adjacent variables.

• Weighted Degree Heuristics [53, 14]: The weighted degree heuristic is designed to enhance variable selection by incorporating knowledge gained during search, in particular knowledge derived from failure. In this procedure, a constraint's weight is incremented during arc consistency propagation whenever this causes a domain wipe-out. This information is used during variable selection by calculating the sum of the weights of the constraints associated with a variable and choosing the variable with the largest sum. This constraint-weight sum is referred to as a weighted degree, and the heuristic for selecting a variable can therefore be called the weighted degree heuristic. In practice, only constraints associated with uninstantiated, or future, variables are used to calculate the constraint-weight sum.

In general, this heuristic selects the variable with the greatest weighted degree. The procedure to calculate the weighted degree of a variable xi and implement the heuristic is as follows:

1. Each constraint associated with a variable is assigned a counter, called its weight.

2. The weight counter is incremented each time a propagation phase over the constraint produces an empty domain (see Algorithm 3).

3. The weighted degree of the variable xi corresponds to the sum of the weights of the constraints in which the variable is involved.

4. The variables are ordered in descending order of their weighted degree.

PROCEDURE revise(C : Constraint, X : Variable) : boolean
BEGIN
1  For each a ∈ dom(X)
2    if seekSupport(C, X, a) = false then
3      remove a from dom(X)
4    end
5  End For
6  if dom(X) = Ø then
7    weight[C]++
8  end
9  Return dom(X) ≠ Ø
END

Algorithm 3: Algorithm to calculate the weight associated to a constraint
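The weight update of Algorithm 3, and the weighted degree built on top of it, can be sketched as follows. This is an illustrative Python sketch (the seekSupport test is passed in as a hypothetical callback; this is not the thesis code):

```python
def revise_with_weight(constraint, x, domains, weights, seek_support):
    """Revise dom(X) against a constraint and, on a domain wipe-out,
    increment the constraint's weight (sketch of Algorithm 3)."""
    domains[x] = {a for a in domains[x]
                  if seek_support(constraint, x, a, domains)}
    if not domains[x]:
        weights[constraint] = weights.get(constraint, 0) + 1
    return bool(domains[x])

def weighted_degree(var, constraints_of, weights):
    """Weighted degree of a variable: sum of the weights of the
    constraints in which it is involved."""
    return sum(weights.get(c, 0) for c in constraints_of[var])

# Hypothetical wipe-out: constraint "c1" requires x < 2, but dom(x) = {5, 6}.
domains = {"x": {5, 6}}
weights = {}
ok = revise_with_weight("c1", "x", domains, weights,
                        lambda c, x, a, d: a < 2)
# ok is False (wipe-out) and weights["c1"] == 1
```

The heuristic then prefers, at each enumeration step, the future variable whose `weighted_degree` is largest.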

2.5.2 Value Selection Heuristics

When choosing a value, one can choose, if possible, the value that is most likely to lead to a solution, and thus reduce the risk of having to backtrack and test an alternative value. In practice, this means choosing the value that is least likely to lead to failure. This principle, called succeed-first, does not have a widely applicable value selection heuristic comparable to the MRV heuristic for fail-first, but it can yield good heuristics for individual problems or types of problems [44]. In short, this principle states that a value with a high number of supports is preferable [46].

• Min-Conflicts [40, 48]: This heuristic orders values according to the conflicts in which they are involved with the future variables. The process consists of associating with each value a of the current variable the total number of values in the domains of future adjacent variables that are incompatible with a; the value associated with the lowest count is selected.

• Survival [27]: A variation of the Min-Conflicts heuristic: the number of incompatible values in each future domain is divided by the size of that domain, giving the percentage of useful values lost from the domain. These percentages are summed over all future variables related to the current variable, and the value with the smallest sum is chosen.

• Max Domain Size [18]: This heuristic selects the value of the current variable that leaves the maximum domain sizes in the future variables.

• Weighted Max Domain Size [18]: This heuristic breaks ties in Max Domain Size by considering how many domains reach each size. For example, if a value ai leaves 3 variables with domain size 4, but a value aj leaves 4 variables with domain size 4, then aj is selected.

• Point Domain Size [18, 44]: This heuristic assigns a weight to each value of the current variable depending on the number of future variables left with certain domain sizes. This weight is known as points of value. For example, a value is taken, and for each resulting domain of size 1, 8 points of value are assigned; for each domain of size 2, 4 points; for each domain of size 3, 2 points; and for each domain of size 4, 1 point. This is tested with all the values of the current variable, and finally the value with the fewest points of value is chosen.

• Promise [19]: For each value ai of the current variable xi, the number of values compatible with ai in each future adjacent variable is counted, and the product of the counted quantities is taken. This product is known as the promise of the value. The heuristic selects the value with the highest promise.
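Two of the heuristics above, Min-Conflicts and Promise, can be sketched side by side. The following is an illustrative Python sketch (the compatibility test and the data are hypothetical; this is not the thesis implementation):

```python
from math import prod

def min_conflicts_value(var, domains, future_adjacent, compatible):
    """Min-Conflicts: pick the value of the current variable with the
    fewest incompatible values in the future adjacent domains."""
    def conflicts(a):
        return sum(1 for y in future_adjacent
                   for b in domains[y] if not compatible(a, b))
    return min(sorted(domains[var]), key=conflicts)

def promise(a, future_adjacent_domains, compatible):
    """Promise: product over future adjacent variables of the number
    of values compatible with a; the highest promise is preferred."""
    return prod(sum(1 for b in dom if compatible(a, b))
                for dom in future_adjacent_domains)

# Hypothetical example with the binary constraint "different values":
domains = {"x": {1, 2}, "y": {1, 3, 4}}
best = min_conflicts_value("x", domains, ["y"], lambda a, b: a != b)
# x = 1 conflicts with y = 1; x = 2 conflicts with nothing, so best == 2
p1 = promise(1, [domains["y"]], lambda a, b: a != b)   # 2 compatible values
p2 = promise(2, [domains["y"]], lambda a, b: a != b)   # 3 compatible values
```

Both sketches embody succeed-first: they rank values by how many options they leave open in the future variables.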

Tables 2.1 and 2.2 present the rationale behind each of the variable and value selection heuristics described. Such justification is based on the theoretical principle that supports each one: fail-first for variable selection, succeed-first for value selection. For the particular case of the Weighted Degree heuristic, the Contention Principle [53] has also been considered, since this heuristic combines that principle with the fail-first principle. The Contention Principle states that the variables directly related to a failure (empty domains) have a greater probability of causing a failure if they are selected before others.

Table 2.1: Justification of Variable Selection Heuristics

  Minimum Width            : Reduce the number of backtracks (Fail-First)
  Maximum Degree           : Reduce the number of backtracks (Fail-First)
  Minimum Domain Variable  : Find inconsistencies as soon as possible (Fail-First)
  Min Domain\Degree        : Find inconsistencies as soon as possible (Fail-First)
  Minimum Remaining Values : Find inconsistencies as soon as possible (Fail-First)
  Maximum Cardinality      : Reduce the number of backtracks (Fail-First)
  Maximum Forward Degree   : Find inconsistencies with future variables as soon as possible (Fail-First)
  Domdeg                   : Find inconsistencies as soon as possible (Fail-First)
  Weighted Degree          : Select first the variables directly related to a failure (Contention)

Table 2.2: Justification of Value Selection Heuristics

  Min-Conflicts            : Select the less constrained value (Succeed-First)
  Max Domain Size          : Preserve maximum domain sizes (Succeed-First)
  Weighted Max Domain Size : Preserve maximum domain sizes (Succeed-First)
  Point Domain Size        : Preserve maximum domain sizes (Succeed-First)
  Promise                  : Select the less constrained value (Succeed-First)
  Survival                 : Select the less constrained value (Succeed-First)

Chapter 3

Adaptive

3.1 Elemental Definitions

Constraint Programming is a successful technique for CSP solving, but it requires skill in modelling problems and knowledge about how the algorithms interact with the model. CSP solving has at its disposal many heuristics that can improve the efficiency of the search; however, the efficiency of each heuristic varies with the problem type to be solved. A good algorithm for one problem type can be very poor for another, and even within the same problem type the performance may vary greatly from one instance to another.

In this context, efforts have focused on designing robust algorithms that work well for a wide range of problems, models and instances. This gives rise to algorithms able to use the results of their search experience to modify their behaviour, known as Adaptive Search [36]. Cases of the latter include:

1. The Training Approach consists in training a system for the resolution of a particular family of problems. In this form, the system achieves what a practitioner would do manually, that is, configure a solver for a particular context [26].

2. The Adaptive Approach concerns the development of systems that dynamically react and adjust during the resolution of a particular instance of a problem.

• Learning from Failure: when a constraint is violated during the descent of the tree, the conditions of that failure are analysed with a view to making the most of this knowledge throughout the remainder of the search. For example, the techniques of nogood recording and clause learning seek to avoid redoing combinations of variable/value assignments that are mutually inconsistent.

• Reactive Systems are those that maintain an ongoing interaction with their environment at a speed dictated by the latter [43]. Associated with this, in the context of combinatorial optimization, Reactive Search advocates the integration of sub-symbolic machine learning techniques into local search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters [8].

• Autonomous Search (AS) Systems [24] have the ability to modify their internal components when exposed to changing external forces and opportunities. Internal components are the various algorithms involved in the search process, while external forces are the information collected during the search process.

Taking Evolutionary Algorithms as an example, setting the values of various parameters within adaptive search processes is a crucial task to obtain good performance from the solving process. In the literature there are two main forms of setting parameter values: parameter tuning and parameter control (see Figure 3.1) [1]. Parameter tuning is the commonly practised approach of finding good values for the parameters before the run of the algorithm and then running the algorithm with these values, which remain fixed during the run. Parameter control, on the other hand, starts a run with initial parameter values that are changed during the run.

[Figure: Parameter Setting divides into Parameter Tuning (before the run) and Parameter Control (during the run); Parameter Control divides into Deterministic, Adaptive and Self-Adaptive]

Figure 3.1: Taxonomy of parameter control [1]

Within the parameter control techniques of an evolutionary algorithm, many aspects can be taken into account, of which the most relevant for an adaptive search are the following:

1. What is changed? It is necessary to identify all the components or parameters to be changed, building a list of those that are most important and have the greatest effect on the solving process, which is a difficult task in itself.

2. How is the change made? According to the work done on Evolutionary Algorithms, there are three ways to make a change:

• Deterministic, this takes place when the value of a strategy parameter is altered by some deterministic rule. This rule modifies the strategy parameter in a fixed, predetermined (i.e., user-specified) way without using any feedback from the search.

• Adaptive, this takes place when there is some form of feedback from the search that serves as inputs to a mechanism used to determine the direction or magnitude of the change.

• Self-adaptive, here the parameters to be adapted are encoded into the chromosomes and undergo mutation and recombination. The better values of these encoded parameters lead to better individuals, which in turn are more likely to survive and produce offspring and hence propagate these better parameter values.

3.2 Adaptive Model

As already mentioned, the efforts of this work are geared towards finding solutions quickly to different types of problems and overcoming one of the limitations of enumeration strategies: for a given problem there is usually a particular strategy that works very well but is of limited use in the efficient resolution of other problems, which in general leaves a single problem-strategy combination that solves the problem efficiently.

What has been described previously was inspired by a series of adaptive approaches in the literature, which, however, have been designed with a different orientation; some of them are described in the following sections.

3.2.1 Adaptive Constraint Satisfaction

This approach [13] implies that, given a sequence of algorithms to use, bad algorithms are detected and dynamically replaced by the next candidate. Note that in this case the algorithm is changed completely, unlike the proposal made here, where, for the same resolution algorithm, we intend to change the strategies that guide it.

Adaptive Constraint Satisfaction [13] emerges as an alternative to the limitation of having to choose and fix, at the beginning of the solving process, one algorithm to resolve the CSP. Selecting an algorithm for a specific problem is itself a problem in the domain of constraint satisfaction, given that the choice of algorithms is quite extensive and all of them are quite useful. The emerging question is how to choose the most appropriate algorithm for solving these problems [13, 49]. To answer this question, Adaptive Constraint Satisfaction proposes a technique for determining the behaviour of a particular algorithm based on information generated during the resolution of the problem, and for using this information to take corrective action in the event that an initially incorrect choice (decision) has been executed.

The approach proposed by Adaptive Constraint Satisfaction has the structure shown in Figure 3.2, with some key elements: algorithms, monitors and strategy. The first element corresponds to a list of available algorithms for solving a CSP. The second element, called monitors, corresponds to the source of information obtained from the resolution process, which reflects the progress of the search. The third and final element corresponds to the strategy, which is responsible for saying when and how the current algorithm should be changed.

[Figure: a CSP connected to a list of algorithms (A, B, C, ...), a set of monitors (P, Q, R, S) and the strategies component]

Figure 3.2: Adaptive Constraint Satisfaction [13]

3.2.2 Adaptive Enumeration Strategies

In [34] the adaptation consists in using information about the resolution process. During the search, information about the state of progress is collected; if there is no advancement, adjustments must be made by changing the enumeration strategy.

Information about the state of progress is captured through snapshots and indicators. Snapshots are observations of the current search tree, and indicators are the evidence of the resolution. Examples of snapshots are the maximum depth reached in the search tree, the depth of the current node and the size of the current search space. An example of an indicator is the variation of the maximum depth.

The framework is shown in Figure 3.3, where the SOLVE component runs a generic CSP solving algorithm performing a depth-first search by alternating constraint propagation and enumeration phases. The OBSERVATION component observes and records information about the current search tree, taking snapshots. The ANALYSIS component analyses the snapshots to evaluate the different strategies and provide indicators. The UPDATE component makes decisions using the indicators: it interprets them, then updates the enumeration strategy priorities and requests metabacktracks from the SOLVE component.

[Figure: SOLVE (enumeration strategies with priorities, metabacktrack) feeds OBSERVATION (snapshots) and a database of snapshots; ANALYSIS produces evaluations and a database of indicators; UPDATE makes decisions back into SOLVE]

Figure 3.3: The Dynamic Strategy Framework [34]

3.2.3 Adaptive Constraint Engine

In [17] ACE (Adaptive Constraint Engine) is described, where adaptation is achieved by combining learning with heuristics, through a learning architecture based on the use of multiple heuristics. All of this is supported by a series of procedures called Advisors, where each represents a general principle that supports expert behaviour.

ACE is equipped with variable selection heuristics such as maximum domain size, minimum domain size, maximum degree and minimum degree. These heuristics are involved in the Advisors that collaborate with the search; for example, one Advisor might recommend choosing the variable with maximum domain size while another recommends choosing the variable with minimum domain size. The heart of ACE is FORR (For the Right Reasons), which is a problem-solving and learning architecture for the development of expertise from multiple heuristics. ACE also learns, each time it solves a problem, better ways to confront it.

FORR is equipped with a variety of weight-learning algorithms and permits the user to partition each task into stages, so that a weight-learning algorithm can learn weights for each stage in the solution process.

ACE works with Advisors organized in a hierarchy of levels (see Figure 3.4). At level 1, one of the Advisors recommends an action. If no action is identified, control passes to level 2, which recommends a plan containing several actions. If no decision is identified, control passes to level 3, where in FORR all Advisors are heuristic and are consulted in parallel. A decision is reached by combining their comments in a process called voting; if no decision is reached, one is selected at random.

Figure 3.4: Organization and management of Advisors to make a decision [17]

Chapter 4

Adaptive Approach Based on Enumeration Strategies

4.1 Scheme Proposed

As mentioned throughout this work, this development raises the possibility of having an adaptive solving process based on enumeration strategies. This means that the process must be able to determine, from information generated during the resolution of a CSP, whether the current enumeration strategy is performing well; if it is not, the process must be able to detect that situation and change the strategy to any other available one that implies an improvement in the performance of the solving process. To detect the performance of the strategy, observations are made continuously on the solving process, from which relevant information on the state of the resolution (degree of progress) is obtained, allowing more informed decisions on whether or not to change the enumeration strategy.

The adaptation process intended to be implemented in this work sits behind the general framework of the resolution technique to be used, which basically consists of alternating phases of constraint propagation and enumeration. Figure 4.1 provides a schematic view of the proposal, where one can observe the main components that allow working with an adaptive solving process based on enumeration strategies.

[Figure: the Solver selects strategies from the Library Strategies; Observation measures values from the Solver; Analysis takes the measured values, produces indicators and updates the priorities of the strategies]

Figure 4.1: Scheme of Adaptive Solving Process

In the scheme, the components Solver, Observation and Analysis are processes whose performance in one way or another affects the resolution process. The component called Library Strategies contains all the enumeration strategies feasible to use in the solving process.

Continuous arrows interconnecting the components of the scheme represent information flow; discontinuous arrows have been placed explicitly to reflect that, after a specific action, a new information flow is generated, only for the appropriate cases.

4.1.1 Solver

The main task of this component is to run a CSP resolution algorithm, which basically performs a depth-first search by alternating constraint propagation and enumeration phases. Specifically, the algorithm used corresponds to the so-called Forward Checking. The choice was made mainly taking into account that this algorithm meets the basic feature of the overarching resolution framework, namely alternating phases of constraint propagation and enumeration. Moreover, it has been taken into account that this algorithm performs a forward consistency check, finding inconsistencies (dead ends) before they appear and pruning the search space, which is quite favourable for reducing the cost of the search.
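The forward check performed after each assignment can be sketched as follows. This is an illustrative Python sketch (hypothetical names and data; not the solver implemented in this thesis, which is written in a constraint programming system):

```python
def forward_check(domains, assigned_var, value, constraints):
    """After assigning assigned_var = value, filter the domains of
    the future variables connected to it. Constraints are triples
    (x, y, rel)."""
    pruned = {v: set(d) for v, d in domains.items()}
    pruned[assigned_var] = {value}
    for (x, y, rel) in constraints:
        if x == assigned_var:
            pruned[y] = {b for b in pruned[y] if rel(value, b)}
        elif y == assigned_var:
            pruned[x] = {a for a in pruned[x] if rel(a, value)}
    return pruned  # an empty domain here signals an inconsistency early

# Hypothetical example: constraint x < y, assign x = 3.
d = forward_check({"x": {1, 2, 3}, "y": {1, 2, 3, 4}}, "x", 3,
                  [("x", "y", lambda a, b: a < b)])
# d["y"] == {4}: values incompatible with x = 3 are pruned ahead of time
```

If some future domain becomes empty, the current branch can be abandoned immediately, without waiting for that variable to be reached by the enumeration.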

For the effective use of the selected algorithm, the Solver component must select the enumeration strategies to use from the Library Strategies, taking into account the priorities associated with each of them. By default, the Solver is also responsible for modelling the problems to be resolved before initiating their resolution. Regarding the latter, the problems to resolve are initially limited to CSPs, leaving aside optimization problems (CSOP); the aim is to extend the proposal later, after the necessary adjustments to the adaptive process, since the literature establishes the feasibility of extending any CSP to a CSOP using a simple process [48, 50, 54].

4.1.2 Library Strategies

This library stores all the enumeration strategies feasible to be used by the selected resolution algorithm, which may be constituted by the variable and value selection heuristics described in previous sections. At this point it is necessary to clarify that, while it is not appropriate to dismiss some of the strategies arbitrarily, their use or elimination will be evaluated according to defined criteria.

4.1.3 Observation

This component observes the solving process and measures information about the current status of the search. The information obtained from these observations, usually called indicators, is used to decide whether or not to change the strategy and which strategy to change to. In the literature there are various indicators, some of which are detailed below; from these, those that seem most appropriate are selected according to the feasibility of implementation in the tool used, the utility associated with the strategies implemented in the Library Strategies, etc.

4.1.4 Analysis

This component takes the information from the Observation component and, based on its analysis, updates the priorities associated with each of the enumeration strategies.

4.2 Enumeration Strategies

Previous chapters have explained in detail the CSP solving process using Constraint Propagation and Enumeration, where the enumeration phase is guided by enumeration strategies, which are comprised of variable selection heuristics and value selection heuristics. The present section provides a description of the heuristics used in the implementation of this project.

The labeling corresponds to the form used to conduct the enumeration. Here, the variable selection heuristic receives the set of variables and delivers the selected variable, and the value selection heuristic receives the variable with its respective domain and returns a value to assign. Below is the general code of the labeling:

labeling(criterion, AllVars, BT) :-
    ( fromto(AllVars, Vars, Rest, []) do
        variable_selection_heuristic(Vars, Var, Rest),
        value_selection_heuristic(Var, Val),
        Var = Val
    ).

In the code above, the strategy to use is given by the criterion argument. The labeling allows exchanging heuristics as needed: it is enough to replace the desired heuristic, which allows generating several enumeration strategies. For the development of this project, several enumeration strategies have been identified and implemented, which are listed below:

1. Strategy 1: the variable with the smallest domain size is selected. If several variables have the same domain size, the variable with the largest number of associated constraints is selected. For values, the heuristic selects the smallest value of the domain.

choose_var_MC(Vars, Var, Rest) choose_val(Var, Val)

2. Strategy 2: the variable with the largest domain size is selected, together with the smallest value of the domain.

choose_var_AMRV(Vars, Var, Rest) choose_val(Var, Val)

3. Strategy 3: select the variable with the smallest domain size and start by selecting the smallest value.

choose_var_MRV(Vars, Var, Rest) choose_val(Var, Val)

4. Strategy 4: select the first variable of the list Vars and start by selecting the smallest value of the domain.

choose_var(Vars, Var, Rest) choose_val(Var, Val)
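The variable selection heuristics behind Strategies 2, 3 and 4 can be sketched in Python as follows (illustrative analogues of the predicates above, with hypothetical names; the tie-breaking by number of constraints used in Strategy 1 is omitted for brevity):

```python
def mrv(domains):
    """Strategy 3: smallest remaining domain (cf. choose_var_MRV)."""
    return min((x for x in domains if len(domains[x]) > 1),
               key=lambda x: len(domains[x]))

def anti_mrv(domains):
    """Strategy 2: largest remaining domain (cf. choose_var_AMRV)."""
    return max((x for x in domains if len(domains[x]) > 1),
               key=lambda x: len(domains[x]))

def first_unbound(domains):
    """Strategy 4: first variable of the list (cf. choose_var)."""
    return next(x for x in domains if len(domains[x]) > 1)

def smallest_value(domain):
    """Value selection shared by all four strategies (cf. choose_val)."""
    return min(domain)

# Hypothetical state: x3 is already instantiated (singleton domain).
domains = {"x1": {1, 2, 3}, "x2": {4, 5}, "x3": {6}}
# mrv -> "x2"; anti_mrv -> "x1"; first_unbound -> "x1"
```

Since every strategy is a pair (variable heuristic, value heuristic), swapping one heuristic yields a different enumeration strategy, which is what the adaptive process exploits.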

4.3 Indicators

The proposed adaptive model must possess the ability to change from one strategy to another according to the effect these strategies have on the resolution process, i.e., to change the strategy when it is performing badly and it is assumed or believed that another might work better.

Measuring the effect of a strategy on the resolution process, and making the decision to change or not, must be supported by the observation of information generated during the resolution process.

The information obtained during this process, or combinations thereof, leads to different indicators, many of which are widely used in the literature; some of them are presented below:

1. Indicators for Resolution Process Cost

Indicators used to compare the relative performance of different algorithms or resolution techniques. While the comparison with other techniques may be done once the resolution of the problem has ended, some of these indicators can be used in intermediate stages of the process to assess the performance of the resolution process at a specific time; an example of this is the number of backtracks.

• Runtime/CPU time [34, 13]: Measures the time required by the solution process of a problem.

• Number of nodes [12]: Counts the number of nodes visited.

• Number of backtracks [12, 13, 34, 47, 45]: Counts the number of times the resolution process goes back from a variable xik to its predecessor xik−1 after having proved that none of the extensions of I(xik) can be extended to a solution. In terms of the search tree, it counts the number of times the search goes up in the tree from a node u to its predecessor after having exhausted the last child of u. The condition for a node u to be counted as a backtrack node is that some of its children have been visited and none of them were successful.

2. Indicators of Resolution Progress

These indicators provide information on the progress of the resolution of a problem at some point in the process. This is done through the analysis of certain characteristics of the search space, the size of the domains and the relationship of these with other variables within the model.

• Reduction of the search space [34]: Corresponds to the difference between the size of the search space in a previous state and in the current state, that is, Sp − Sc, where Sc is the size of the current search space and Sp is the size of the search space in some previous state. If this difference is positive, the current search space is the smaller one.

• Impact of an assignment [38]: Measures the importance of an assignment in the reduction of the search space. Considering that the number of all possible combinations of values for the variables is an estimation of the search space size (P = |D_x1| × ... × |D_xn|), then if we compute this product before (P_before) and after (P_after) an assignment x_i = a, we have an estimation of the importance of this assignment for reducing the search space. This reduction is called the impact of the assignment and is calculated as follows: I(x_i = a) = 1 − P_after / P_before. The higher the impact, the greater the search space reduction. From this definition, an assignment that fails has an impact of 1.
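The impact formula can be illustrated with a short computation (a hypothetical Python sketch; the domains and the propagation effect are invented for the example):

```python
from functools import reduce

def search_space_size(domains):
    """P = |D_x1| * ... * |D_xn|: product of the current domain sizes."""
    return reduce(lambda acc, dom: acc * len(dom), domains.values(), 1)

def impact(domains_before, domains_after):
    """I(x_i = a) = 1 - P_after / P_before."""
    return 1 - search_space_size(domains_after) / search_space_size(domains_before)

# Before the assignment x = 1: three variables with three values each, P = 27.
before = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
# After assigning x = 1 and propagating (e.g. an alldifferent): P = 1 * 2 * 2 = 4.
after = {"x": [1], "y": [2, 3], "z": [2, 3]}
reduction = impact(before, after)  # 1 - 4/27, about 0.85
```

A failed assignment empties some domain, so P_after = 0 and the impact is exactly 1, as stated above.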

• Percentage of instantiated variables [34]: Shows the percentage of instantiated variables over the total number of variables. As a formula, this metric is expressed as follows: (N°VarInst / N°TotalVar) ∗ 100.

3. Other Indicators

• Degree [17]: Number of neighbours in the constraint graph (static).

• Domain Size [34]: Initial size of the domains of variables.

• Remaining Values: Size of the domains of the variables at each step of the solving process.

• Backward Degree [17]: Number of valued neighbours.

• Forward Degree [17]: Number of unvalued neighbours.

• Domain/Degree [17]: Ratio of domain size to degree.

• Domain/Forward Degree: Ratio of domain size to number of unvalued neighbours.

• Common Value [17]: Number of variables that have already been assigned this value.

• Options Value [17]: Number of constraints on the selected variable that include this value.

• Product Domain Value [17]: Product of the domain sizes of the neighbours.

• Conflicts Value [18]: Resulting domain size of neighbours.

• Weighted Domain Size Value [18]: Domain size of neighbours, breaking ties with frequency.

• Point Domain Size Value [18]: Weighted function of the domain size of the neighbors.

• Weighted Degree [53]: Corresponds to the sum of the weights of all the constraints in which the variable is involved, where the weight of a constraint increases by 1 every time a propagation phase run over it yields an empty domain.
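As an illustration of the Weighted Degree indicator, the following sketch (hypothetical Python; the constraint identifiers c1, c2 and the wipeout hook are invented for the example) maintains the per-constraint weights and computes wdeg for a variable:

```python
from collections import defaultdict

# Every constraint starts with weight 1; each time a propagation phase over a
# constraint empties some domain (a "wipeout"), that constraint's weight grows by 1.
weights = defaultdict(lambda: 1)

def on_wipeout(constraint_id):
    weights[constraint_id] += 1

def weighted_degree(var, constraints_of):
    """wdeg(var): sum of the weights of the constraints involving var."""
    return sum(weights[c] for c in constraints_of[var])

# Hypothetical constraint graph: x occurs in c1 and c2, y only in c2.
constraints_of = {"x": ["c1", "c2"], "y": ["c2"]}
on_wipeout("c2")  # c2 caused a domain wipeout during propagation
# Now wdeg(x) = 1 + 2 = 3 and wdeg(y) = 2
```

Variables involved in frequently failing constraints thus accumulate a higher weighted degree, which is what makes the indicator useful for variable selection.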

4.4 Problems

This section presents a description of the problems to solve and their implementation in the ECLiPSe development system.

4.4.1 N-Queens

4.4.1.1 Model

The problem basically consists in locating N queens on a chessboard of dimensions N×N, with N > 3, so that no two queens attack each other.

In the model, the queens are numbered from 1 to N, such that the k-th queen is always located in the k-th column. For each queen i there is a variable xi that indicates the row in which the queen is located. The model described so far guarantees that two queens will never be located in the same column.

To ensure that two queens will never be located in the same row, one must impose the constraint that the variables x1 ... xN are all different:

x_i ≠ x_j   ∀ i, j given that 1 ≤ i < j ≤ N   (4.1)

Moreover, it must ensure that two queens will never be located in the same diagonal, which should impose the following constraints:

x_i − x_j ≠ i − j   ∀ i, j given that 1 ≤ i < j ≤ N   (4.2)

x_i − x_j ≠ j − i   ∀ i, j given that 1 ≤ i < j ≤ N   (4.3)

4.4.1.2 Script

Below is the script associated with the N-Queens problem, where Board :: 1..N defines the size of the board and noattack(Q1, Q2, Dist) is the ECLiPSe predicate that imposes the constraints of the model. For its part, labeling(d, Board, B) runs the enumeration phase, which applies the respective strategy (see Algorithm 4).

queens(N) :-
    length(Board, N),
    Board :: 1..N,
    ( fromto(Board, [Q1|Cols], Cols, []) do
        ( foreach(Q2, Cols), param(Q1), count(Dist,1,_) do
            noattack(Q1, Q2, Dist)
        )
    ),
    labeling(d, Board, B),
    print_squares(Board).

noattack(Q1, Q2, Dist) :-
    Q2 #\= Q1,
    Q2 - Q1 #\= Dist,
    Q1 - Q2 #\= Dist.

Algorithm 4: N-Queens Script

4.4.2 Magic Square

4.4.2.1 Model

This puzzle consists in finding, for a given N, an N×N matrix such that every cell of the matrix is a number between 1 and N², all the cells of the matrix are different, and the sums of the rows, the columns, and the two diagonals are all equal.

The mathematical model used in its representation defines a variable xij that represents the value that each cell of the matrix can take, and a variable S for the sum of each row, column, and diagonal. The CP model then establishes the following constraints:

∑_{j=1..N} x_ij = S   ∀ i ∈ {1, ..., N}   (4.4)

∑_{i=1..N} x_ij = S   ∀ j ∈ {1, ..., N}   (4.5)

∑_{i=1..N} x_ii = S   (4.6)

∑_{i=1..N} x_i(N+1−i) = S   (4.7)

The constraints (4.4) and (4.5) ensure that the sum of each row and each column is equal to S, and the constraints (4.6) and (4.7) ensure that the sum of each diagonal is equal to S.
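The four constraints can be checked directly on a candidate matrix; the following verifier (an illustrative Python sketch, separate from the ECLiPSe model) tests exactly (4.4)–(4.7) plus the all-different requirement:

```python
def is_magic(square):
    """Checks constraints (4.4)-(4.7) on an N x N candidate: every row,
    column and both diagonals sum to the magic constant S, and the cells
    are a permutation of 1..N^2."""
    n = len(square)
    s = n * (n * n + 1) // 2  # the magic constant S (Sum in the ECLiPSe script)
    cells = [v for row in square for v in row]
    if sorted(cells) != list(range(1, n * n + 1)):
        return False  # alldifferent over 1..N^2 violated
    if any(sum(row) != s for row in square):
        return False  # constraint (4.4): row sums
    if any(sum(square[i][j] for i in range(n)) != s for j in range(n)):
        return False  # constraint (4.5): column sums
    if sum(square[i][i] for i in range(n)) != s:
        return False  # constraint (4.6): main diagonal
    return sum(square[i][n - 1 - i] for i in range(n)) == s  # constraint (4.7)
```

For N = 3 the magic constant is 3·(9+1)/2 = 15, so [[2,7,6],[9,5,1],[4,3,8]] satisfies all four constraints.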

4.4.2.2 Script

Below is the script associated with Magic Square, where Square[1..N,1..N] :: 1..NN defines the size of the Square matrix, and alldifferent(Vars) ensures that all cells in the matrix are distinct from one another. For its part, labeling(d, Vars, B) runs the enumeration phase, which applies the respective strategy.

magic(N) :-
    NN is N*N,
    Sum is N*(NN+1)//2,
    dim(Square, [N,N]),
    Square[1..N,1..N] :: 1..NN,
    Rows is Square[1..N,1..N],
    flatten(Rows, Vars),
    alldifferent(Vars),
    (
        for(I,1,N),
        foreach(U,UpDiag),
        foreach(D,DownDiag),
        param(N,Square,Sum)
    do
        Sum #= sum(Square[I,1..N]),
        Sum #= sum(Square[1..N,I]),
        U is Square[I,I],
        D is Square[I,N+1-I]
    ),
    Sum #= sum(UpDiag),
    Sum #= sum(DownDiag),
    Square[1,1] #

Algorithm 5: Magic Square Script

4.4.3 Latin Square

4.4.3.1 Model

A Latin Square puzzle of order N is defined as an N×N matrix whose elements are all numbers between 1 and N, with the property that each one of the N numbers appears exactly once in each row and exactly once in each column of the matrix.

The mathematical representation used to model the problem has a variable xij that represents the value of the cell (i, j) of the matrix. The CP model consists of the following constraints:

AllDifferent {x_i1, x_i2, x_i3, ..., x_iN}   ∀ i ∈ {1, ..., N}   (4.8)

AllDifferent {x_1j, x_2j, x_3j, ..., x_Nj}   ∀ j ∈ {1, ..., N}   (4.9)

4.4.3.2 Script

Below is the script associated with the Latin Square problem, where Square[1..N,1..N] :: 1..N defines the size of the Square matrix; R is Square[I,1..N] and alldifferent(R) ensure that all elements within each row are distinct from one another; and L is Square[1..N,I] and alldifferent(L) ensure that all elements within each column are distinct from one another. For its part, labeling(d, Vars, B) runs the enumeration phase, which applies the respective strategy.

latin(N) :-
    dim(Square, [N,N]),
    Square[1..N,1..N] :: 1..N,
    Rows is Square[1..N,1..N],
    flatten(Rows, Vars),
    (
        for(I,1,N),
        param(N,Square)
    do
        R is Square[I,1..N],
        alldifferent(R),
        L is Square[1..N,I],
        alldifferent(L)
    ),
    labeling(d, Vars, B),
    print_square(Square).

Algorithm 6: Latin Square Script

4.4.4 Sudoku

4.4.4.1 Model

Sudoku is a puzzle played on a 9×9 matrix (standard Sudoku) which, at the beginning, is partially filled. This matrix is composed of 3×3 submatrices called "regions". The task is to complete the empty cells so that each column, row, and region contains the numbers from 1 to 9 exactly once [29, 30].

The model used for the representation can be seen as a composition of the models used in the above puzzles; variable xij represents the value that each cell (i, j) can take (in this case, from 1 to 9). To ensure that each row and each column have the values from 1 to 9 exactly once, the following constraints must be imposed:

AllDifferent {xi1,xi2, ..., xi9} (4.10)

AllDifferent {x1j,x2j, ..., x9j} (4.11)

On the other hand, all the cells of each region Skl, with 0 ≤ k, l ≤ 2, must be different, which forces the model to include the following constraint:

AllDifferent {x_ij, x_i(j+1), x_i(j+2), x_(i+1)j, x_(i+1)(j+1), x_(i+1)(j+2), x_(i+2)j, x_(i+2)(j+1), x_(i+2)(j+2)}   (4.12)

with i = k∗3 + 1 and j = l∗3 + 1
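The index arithmetic of constraint (4.12) can be made explicit with a small sketch (illustrative Python; the helper name region_cells is invented) that enumerates the cells of region S_kl and confirms that the nine regions partition the board:

```python
def region_cells(k, l, n=3):
    """Cells of region S_kl in constraint (4.12): the top-left corner is
    (i, j) with i = k*n + 1 and j = l*n + 1 (1-based), spanning n x n cells."""
    i, j = k * n + 1, l * n + 1
    return [(i + di, j + dj) for di in range(n) for dj in range(n)]

# The nine regions of the standard board partition its 81 cells.
all_cells = [cell for k in range(3) for l in range(3)
             for cell in region_cells(k, l)]
```

Since every cell of the 9×9 board belongs to exactly one region, imposing AllDifferent over each region together with (4.10) and (4.11) yields the usual Sudoku rules.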

4.4.4.2 Script

Below is the script associated with Sudoku. The Board[1..NN,1..NN] defines the size of the board, where NN is the number of squares per row and per column (NN = N², with N the dimension of each region). To ensure that all elements in the rows are distinct from one another, it uses the following code:

( for(I,1,NN), param(NN,Board)
do
    alldifferent(Board[I,1..NN])
)

To ensure that all elements in the columns are distinct one from another it uses the following code:

( for(I,1,NN), param(NN,Board)    % Constrain Columns
do
    alldifferent(Board[1..NN,I])
)

For its part, labeling(Answer) runs the enumeration phase, which applies the respective strategy. Finally, the Sudoku script is as shown below:

sudoku(N, Board) :-
    NN is N*N,
    dim(Board, [NN,NN]),
    Board[1..NN,1..NN] :: 1..NN,
    ( for(I,1,NN), param(NN,Board)
    do
        alldifferent(Board[I,1..NN])
    ),
    ( for(I,1,NN), param(NN,Board)
    do
        alldifferent(Board[1..NN,I])
    ),
    ( for(I,1,NN,N), param(N,NN,Board)
    do
        ( for(J,1,NN,N), param(I,N,Board)
        do
            Subgrid is Board[I..I+N-1,J..J+N-1],
            flatten(Subgrid, Varlist),
            alldifferent(Varlist)
        )
    ),
    Squares is Board[1..NN,1..NN],
    flatten(Squares, Answer),
    labeling(Answer),
    print_square(Board).

Algorithm 7: Sudoku Script

Chapter 5

Experimental Results

5.1 Initial Considerations

To implement the scripts presented in previous sections we used ECLiPSe version 5.10, and the tests were run on an Intel(R) Core Duo at 1.86 GHz with 1 GB of RAM. Each of the problems mentioned in Chapter 4 is solved using the strategies described in Section 4.2 of the same chapter. These strategies have also been used to generate the adaptive process proposed in this paper.

5.2 Enumeration Strategies Alone vs Adaptive Process

Table 5.1 illustrates the resolution of some N-Queens problem instances obtained with the four strategies described in Chapter 4 and our adaptive process AP5. That is, the adaptive process uses backtracks as its indicator, and the number five is the backtrack threshold that tells us whether or not to change the strategy.
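The control loop behind AP5 can be sketched as follows (an illustrative Python outline; the step hook standing in for the actual ECLiPSe solver is hypothetical). The process runs the current strategy, watches the backtrack indicator, and rotates to the next strategy whenever the backtracks accumulated under the current one exceed the threshold:

```python
def adaptive_solve(strategies, step, threshold=5):
    """Adaptive process sketch: keep enumerating with the current strategy,
    and rotate to the next one whenever the backtracks accumulated under it
    exceed `threshold` (AP5 corresponds to threshold = 5)."""
    current = 0
    backtracks_at_switch = 0
    while True:
        # `step` performs one enumeration step with the given strategy and
        # reports ("solved" | "running") plus the total backtracks so far.
        status, backtracks = step(strategies[current])
        if status == "solved":
            return strategies[current]
        if backtracks - backtracks_at_switch > threshold:
            current = (current + 1) % len(strategies)  # change strategy
            backtracks_at_switch = backtracks

# A fake solver hook for illustration: S1 and S2 keep producing backtracks,
# while S3 solves the instance immediately once it becomes the active strategy.
state = {"bt": 0}
def fake_step(strategy):
    if strategy == "S3":
        return ("solved", state["bt"])
    state["bt"] += 3
    return ("running", state["bt"])

chosen = adaptive_solve(["S1", "S2", "S3"], fake_step)
```

With the fake hook above, S1 and S2 each accumulate more than five backtracks and are abandoned in turn, so the process ends under S3.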

Initial results show that the adaptive process does not always have the best performance, but it can improve the resolution by reducing the number of backtracks. In general, it can be seen that small instances do not trigger many exchanges of enumeration strategies.

Table 5.2 illustrates the resolution of some Latin Square problem instances with the four enumeration strategies; in this case, the results are similar to those for N-Queens. We can therefore say that the adaptive process (AP5) is not equivalent to the best strategy, but AP5 has a good performance.

Table 5.1: N-Queens: Enumeration Strategies Alone vs Adaptive Process measuring Backtracks (B)

       4-Queens  10-Queens  20-Queens  25-Queens  50-Queens  100-Queens
         (B)        (B)        (B)        (B)        (B)        (B)
S1        1          4          11         21        177          8
S2        1         12        2539          -          -          -
S3        1         14         149        416          -        118
S4        1          6       10026       2014          -          -
AP5       1         16          14         51       1233         33

Table 5.2: Latin Square: Enumeration Strategies Alone vs Adaptive Process measuring Backtracks (B)

         3       4       5      10      15
        (B)     (B)     (B)     (B)     (B)
S1       0       0       0       0       4
S2       0       0       9       -       -
S3       0       0       0       0     200
S4       0       0       0      70       -
AP5      0       0       7       0      43

5.3 Tuning of the Backtracking Threshold

Setting the backtrack threshold to a fixed, unalterable number is not a good alternative in adaptive processes, since we cannot establish a priori the best value for this threshold. In fact, in Table 5.3 it is possible to appreciate that different thresholds yield varying performances (measured by the number of backtracks). The b and w in the adaptive process names in Table 5.3 represent the best run and the worst run, respectively.

Table 5.3: Tuning of the backtracking threshold for the adaptive process in N-Queens problems

        4-Queens  10-Queens  20-Queens  25-Queens  50-Queens  100-Queens
          (B)        (B)        (B)        (B)        (B)        (B)
AP3b       1          9          17         65        319         14
AP3w       1         23        4335        134       1161         22
AP5b       1         16          14         51       1233         33
AP5w       1         21          22        152       1658         43
AP9b       1          4          31         43        359         31
AP9w       1         29          48         72       1097          -

In carrying out these tests we have also seen that the threshold is related to the size of the instance to solve; hence it is necessary to use mechanisms for adjusting the threshold according to the characteristics of the problem, and thus avoid setting its value arbitrarily. An example of this is parameter tuning, where we use the word tuning for an adjustment of the different components of the algorithm before trying to solve an instance [33, 1].

Chapter 6

Conclusion and Future Work

The present study has clearly described the motivation of the research and has offered a theoretical description of the adaptive approaches that support it, which are directly related to the proposal made here.

With the work done, it has been possible to establish that resolution processes can be improved using an adaptive approach based on exchanging enumeration strategies. While the results do not place this alternative as the best of all existing possibilities, the approach achieves an intermediate level of performance while extending the possibility of solving a wide spectrum of problems efficiently.

A key factor in achieving the adjustment referred to is the ability to measure information during the resolution process in order to identify its status. While the measured information, called an indicator in this paper, is varied and can reflect different characteristics, such as process cost and propagation effectiveness, among others, there remains the difficulty of establishing the point at which to change strategy.

Setting the moment at which to change the strategy is related to fixing the threshold of such indicators. For this, it is relevant to establish mechanisms for automatic parameter tuning, leaving open the possibility of implementing new assessments and new research associated with this topic.

Bibliography

[1] A. Eiben, Z. Michalewicz, M. Schoenauer, and J. Smith. Parameter Control in Evolutionary Algorithms. In Fernando G. Lobo, Cláudio F. Lima, and Zbigniew Michalewicz, editors, Parameter Setting in Evolutionary Algorithms, volume 54 of Studies in Computational Intelligence. Springer Verlag.

[2] K. Apt. Principles of Constraint Programming. Cambridge University Press, 2003.

[3] F. Bacchus and P. van Run. Dynamic variable ordering in CSPs. In Proceedings of the First International Conference on Principles and Practice of Constraint Programming (CP-95), pages 258–275, London, UK, 1995. Springer-Verlag.

[4] F. Bacchus and P. van Run. On the conversion between non-binary and binary constraint satisfaction problems. In Proceedings of the 15th National Conference on Artificial Intelligence (AAAI-98) and of the 10th Conference on Innovative Applications of Artificial Intelligence (IAAI-98), pages 311–318, Menlo Park, 1998. AAAI Press.

[5] F. Barber and M. Salido. Introducción a la programación de restricciones. Inteligencia Artificial, Revista Iberoamericana de Inteligencia Artificial, 20:13–30, 2003.

[6] R. Bartak. Constraint programming: In pursuit of the holy grail. In Proceedings of the Week of Doctoral Students (WDS), pages 555–564, 1999.

[7] R. Barták. On-line guide to constraint programming. 1998. http://kti.mff.cuni.cz/~bartak/constraints/.

[8] R. Battiti, M. Brunato, and F. Mascia. Reactive Search and Intelligent Optimization. Operations Research/Computer Science Interfaces. Springer Verlag, 2008. In press.

[9] C. Beck, P. Prosser, and R. Wallace. Toward understanding variable ordering heuristics for constraint satisfaction problems. In Fourteenth Irish Artificial Intelligence and Cognitive Science Conference - AICS 2003, pages 11–16, 2003.

[10] C. Beck, P. Prosser, and R. Wallace. Variable ordering heuristics show promise. In Wallace [52], pages 711–715.

[11] C. Bessière and J.-C. Régin. MAC and combined heuristics: Two reasons to forsake FC (and CBJ?) on hard problems. In Proceedings of the Second International Conference on Principles and Practice of Constraint Programming, volume 1118 of Lecture Notes in Computer Science, pages 61–75. Springer, 1996.

[12] C. Bessiere, B. Zanuttini, and C. Fernandez. Measuring search trees. In Proceedings ECAI-04 Workshop on Modelling and Solving Problems with Constraints, pages 31–40. IOS Press, 2004.

[13] J. Borrett, E. Tsang, and N. Walsh. Adaptive constraint satisfaction: The quickest first principle. In W. Wahlster, editor, ECAI, pages 160–164. John Wiley and Sons, Chichester, 1996.

[14] F. Boussemart, F. Hemery, C. Lecoutre, and L. Sais. Boosting systematic search by weighting constraints. In R. L. de Mántaras and L. Saitta, editors, ECAI, pages 146–150. IOS Press, 2004.

[15] C. Castro, E. Monfroy, C. Figueroa, and R. Meneses. An approach for dynamic split strategies in constraint solving. In A. F. Gelbukh, A. de Albornoz, and H. Terashima-Marín, editors, MICAI, volume 3789 of Lecture Notes in Computer Science, pages 162–174. Springer, 2005.

[16] G. Dantzig. Linear Programming And Extensions. Princeton University Press, 1963.

[17] S. L. Epstein, E. C. Freuder, R. J. Wallace, A. Morozov, and B. Samuels. The adaptive constraint engine. In P. V. Hentenryck, editor, CP, volume 2470 of Lecture Notes in Computer Science, pages 525–542. Springer, 2002.

[18] D. Frost and R. Dechter. Look-ahead value ordering for constraint satisfaction problems. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pages 572–578, Montreal, Canada, 1995.

[19] P. Geelen. Dual viewpoint heuristics for binary constraint satisfaction problems. In ECAI92, pages 31–35, 1992.

[20] I. P. Gent, E. MacIntyre, P. Prosser, B. M. Smith, and T. Walsh. An empirical study of dynamic variable ordering heuristics for the constraint satisfaction problem. In Proceedings of the Second International Conference on Principles and Practice of Constraint Programming, volume 1118 of Lecture Notes in Computer Science, pages 179–193. Springer, 1996.

[21] M. L. Ginsberg, M. Frank, M. P. Halpin, and M. C. Torrance. Search lessons learned from crossword puzzles. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 210–215, Boston, MA, 1990.

[22] S. Golomb and L. Baumert. Backtrack programming. J. ACM, 12(4):516–524, 1965.

[23] J. Gu. Efficient local search for very large-scale satisfiability problems. SIGART Bull., 3(1):8–12, 1992.

[24] Y. Hamadi, E. Monfroy, and F. Saubion. What is autonomous search? First Workshop on Autonomous Search, in conjunction with CP 2007, September 2007.

[25] R. M. Haralick and G. L. Elliott. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14:263–313, 1980.

[26] F. Hutter, D. Babic, H. H. Hoos, and A. J. Hu. Boosting verification by automatic tuning of decision procedures. In FMCAD ’07: Proceedings of the Formal Methods in Computer Aided Design, pages 27–34, Washington, DC, USA, 2007. IEEE Computer Society.

[27] N. Keng and D. Yun. A planning scheduling methodology for the constrained resources problem. In IJCAI-89, pages 999–1003, 1989.

[28] V. Kumar. Algorithms for constraints satisfaction problems: A survey. The AI Magazine, by the AAAI, 13(1):32– 44, 1992.

[29] T. Lambert, E. Monfroy, and F. Saubion. Solving Sudoku with local search: A generic framework. In Proceedings of The International Conference on Computational Science (ICCS 2006), volume 3991 of Lecture Notes in Computer Science, pages 641–648, Reading, UK, May 28-31 2006. Springer Verlag. To appear.

[30] R. Lewis. Metaheuristics can solve sudoku puzzles. In Press: Journal of heuristics, 13, 2007.

[31] F. Manyà and C. Gomes. Técnicas de resolución de problemas de satisfacción de restricciones. Inteligencia Artificial, Revista Iberoamericana de IA, 7(19):169–180, 2003.

[32] F. Manyà and C. P. Gomes. Técnicas de resolución de problemas de satisfacción de restricciones. Inteligencia Artificial, Revista Iberoamericana de Inteligencia Artificial, 19:169–180, 2003.

[33] J. Maturana and F. Saubion. Automated parameter control for evolutionary algorithms. First Workshop on Autonomous Search, in conjunction with CP 2007. http://research.microsoft.com/constraint-reasoning/Workshops/Autonomous-CP07/Papers/3.pdf, September 2007.

[34] E. Monfroy, C. Castro, and B. Crawford. Adaptive enumeration strategies and metabacktracks for constraint solving. In T. M. Yakhno and E. J. Neuhold, editors, ADVIS, volume 4243 of Lecture Notes in Computer Science, pages 354–363. Springer, 2006.

[35] U. Montanari. Networks of constraints: Fundamental properties and applications to picture processing. Inf. Sci., 7:95–132, 1974.

[36] S. Petrovic, S. L. Epstein, and R. J. Wallace. Learning a mixture of search heuristics, 2007.

[37] P. Prosser. Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence, 9:268–299, 1993.

[38] P. Refalo. Impact-based search strategies for constraint programming. In Wallace [52], pages 557–571.

[39] F. Rossi, C. Petrie, and V. Dhar. On the equivalence of constraint satisfaction problems. In L. C. Aiello, editor, ECAI'90: Proceedings of the 9th European Conference on Artificial Intelligence, pages 550–556, Stockholm, 1990. Pitman.

[40] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Pearson Education, 2003.

[41] M. A. Salido. Técnicas para el manejo de CSPs no binarios. Inteligencia Artificial, Revista Iberoamericana de Inteligencia Artificial, 20:95–110, 2003.

[42] C. Schulte. Programming Constraint Services, volume 2302 of Lecture Notes in Artificial Intelligence. Springer- Verlag, Berlin, Germany, 2002.

[43] S. A. Seshia. Autonomic reactive systems via online learning. In ICAC ’07: Proceedings of the Fourth Interna- tional Conference on Autonomic Computing, page 30, Washington, DC, USA, 2007. IEEE Computer Society.

[44] B. Smith. Succeed-first or Fail-first: A Case Study in Variable and Value Ordering. Technical Report 96.26, 1996.

[45] B. M. Smith and P. Sturdy. Value ordering for finding all solutions. In L. P. Kaelbling and A. Saffiotti, editors, IJCAI, pages 311–316. Professional Book Center, 2005.

[46] K. Stergiou. Representation and Reasoning With Non-Binary Constraints. PhD thesis, University of Strathclyde, January 2001.

[47] P. Sturdy. Learning good variable orderings. In F. Rossi, editor, CP, volume 2833 of Lecture Notes in Computer Science, page 997. Springer, 2003.

[48] E. Tsang. Foundations of Constraint Satisfaction. Academic Press, London, 1993.

[49] E. Tsang and A. Kwan. Mapping constraint satisfaction problems to algorithms and heuristics. Technical Report CSM-198, University of Essex, 1994.

[50] W. J. van Hoeve. Operations research techniques in constraint programming.

[51] P. Van Roy and S. Haridi. Concepts, Techniques, and Models of Computer Programming. MIT Press, Mar. 2004.

[52] M. Wallace, editor. Principles and Practice of Constraint Programming - CP 2004, 10th International Confer- ence, CP 2004, Toronto, Canada, September 27 - October 1, 2004, Proceedings, volume 3258 of Lecture Notes in Computer Science. Springer, 2004.

[53] R. Wallace and D. Grimes. Experimental studies of variable selection strategies based on constraint weights. In 14th RCRA Workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion, 2007.

[54] P. Zoeteweij. Composing Constraint Solvers. Printed and bound by PrintPartners Ipskamp, Enschede, 2005.
