Local Search and Restart Strategies for Satisfiability Solving in Fuzzy Logics

Tim Brys∗, Madalina M. Drugan∗, Peter A.N. Bosman†, Martine De Cock§ and Ann Nowé∗
∗Artificial Intelligence Lab, VUB, Pleinlaan 2, 1050 Brussels, Belgium, {timbrys, mdrugan, anowe}@vub.ac.be
†Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam, The Netherlands, [email protected]
§Dept. of Applied Math. and Comp. Sc., UGent, Krijgslaan 281 (S9), 9000 Gent, Belgium, [email protected]

Abstract—Satisfiability solving in fuzzy logics is a subject that has not been researched much, certainly compared to satisfiability in propositional logics. Yet, fuzzy logics are a powerful tool for modelling complex problems. Recently, we proposed an optimization approach to solving satisfiability in fuzzy logics and compared the standard Covariance Matrix Adaptation Evolution Strategy algorithm (CMA-ES) with an analytical solver on a set of benchmark problems. Especially on more fine-grained problems did CMA-ES compare favourably to the analytical approach. In this paper, we evaluate two types of hillclimber in addition to CMA-ES, as well as restart strategies for these algorithms. Our results show that a population-based hillclimber outperforms CMA-ES on the harder problem class.

I. INTRODUCTION

The problem of SAT in propositional logic [1] is well known and is of interest to researchers from various domains [2], [3], as many problems can be reformulated as a SAT problem and subsequently solved by a state-of-the-art SAT solver.

In fuzzy logics – logics with an infinite number of truth degrees – the same principle of satisfiability exists, SAT∞, and it is, like its classical counterpart, useful for solving a variety of problems. Indeed, many fuzzy reasoning tasks can be reduced to SAT∞, including reasoning about vague concepts in the context of the semantic web [4], fuzzy spatial reasoning [5] and fuzzy answer set programming [6], which in itself is an important framework for non-monotonic reasoning over continuous domains (see e.g. [7], [8], [9]).

In contrast to SAT, SAT∞ has received much less attention from the various research communities; there are only two types of analytical solvers to be found in the literature, one using mixed integer programming [10], and one using a constraint satisfaction approach [11]. Both methods suffer from exponential complexity issues, associated respectively with mixed integer programming itself and with an iteratively refining discretization.

Recently, we proposed a new approach to solving SAT∞ [12] that models satisfiability as an optimization problem over a continuous domain. It has the advantage of being independent of the underlying logic and its operators, as well as not suffering from the complexity issues associated with analytical solvers, as our results showed. The disadvantage of this approach is that only given an optimization algorithm proven to converge to the global optimum can it be a complete solver, i.e. one deciding both SAT∞ and UNSAT∞. The optimization algorithm used in [12] was the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [13], which is considered to be state-of-the-art in black-box optimization on a continuous domain. In this paper we explore some alternatives to this algorithm as the core of this SAT∞ solver. Our findings are somewhat surprising in that CMA-ES is outperformed by a hillclimber on one problem class.

The rest of the paper is structured as follows: in Section II, we give a brief description of fuzzy logics and formally define SAT∞ as an optimization problem. Section III contains the description of a hillclimber algorithm (III-A), a population-based version of that algorithm (III-B) and CMA-ES (III-C), as well as restart strategies for these algorithms. These algorithms are then evaluated and compared on a set of benchmark problems in Section IV.

II. FUZZY LOGIC AND SAT∞

In fuzzy logics [14], truth is expressed as a real number taken from the unit interval [0, 1]. Essentially, there are an infinite number of truth degrees possible. A formula in a fuzzy logic is built from a set of variables V, constants taken from [0, 1] and n-ary connectives for n ∈ N. An interpretation is a mapping I : V → [0, 1] that maps every variable to a truth degree. We can extend this fuzzy interpretation I to formulas as follows:
• For each constant c in [0, 1], [c]_I = c.
• For each variable v in V, [v]_I = I(v).
• Each n-ary connective f is interpreted by a function f : [0, 1]^n → [0, 1]. Furthermore we define

    [f(α_1, . . . , α_n)]_I = f([α_1]_I , . . . , [α_n]_I)

for formulas α_i with 1 ≤ i ≤ n.
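To make this recursive extension concrete, the following small Python sketch (our illustration, not part of the paper's formalism) evaluates a formula encoded as nested tuples of connectives, variable names and constants:

    # A formula is a constant in [0, 1], a variable name, or a tuple
    # (f, arg_1, ..., arg_n) applying an n-ary connective f to subformulas.
    def evaluate(formula, interpretation):
        """Extend the interpretation (a dict: variable -> truth degree)
        from variables to arbitrary formulas, per the three cases above."""
        if isinstance(formula, (int, float)):   # [c]_I = c
            return float(formula)
        if isinstance(formula, str):            # [v]_I = I(v)
            return interpretation[formula]
        f, *args = formula                      # [f(a_1, ..., a_n)]_I = f([a_1]_I, ..., [a_n]_I)
        return f(*(evaluate(a, interpretation) for a in args))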

The connectives in fuzzy logics typically correspond to connectives from classical logic, such as conjunction, disjunction, implication and negation, which are interpreted respectively by a t-norm, a t-conorm, an implicator and a negator. A triangular norm or t-norm T is an increasing, associative and commutative [0, 1]² → [0, 1] mapping that satisfies the boundary condition T(1, x) = x for all x in [0, 1]. Similarly, a triangular conorm or t-conorm S is an increasing, associative and commutative [0, 1]² → [0, 1] mapping that satisfies the boundary condition S(0, x) = x. An implicator I is a [0, 1]² → [0, 1] mapping that is decreasing in its first argument, increasing in its second argument and that satisfies the properties I(0, 0) = I(0, 1) = I(1, 1) = 1 and I(1, 0) = 0. A negator N is a decreasing [0, 1] → [0, 1] mapping that satisfies N(0) = 1 and N(1) = 0.

In Łukasiewicz logic, negation ¬, conjunction ⊗, disjunction ⊕ and implication → are interpreted as follows:
• [¬α]_I = 1 − [α]_I
• [α ⊗ β]_I = max([α]_I + [β]_I − 1, 0)
• [α ⊕ β]_I = min(1, [α]_I + [β]_I)
• [α → β]_I = min(1 − [α]_I + [β]_I , 1)
for formulas α and β.

An interpretation I is said to be a model of a set of formulas Θ iff ∀α ∈ Θ : l ≤ [α]_I ≤ u, given lower and upper bounds l and u for that formula (usually u is 1, and in classical logic even both l and u are 1). An example of a formula with three variables v_1, v_2 and v_3 in Łukasiewicz logic, with bounds, is:

    0.5 ≤ [¬(v_1 ⊗ v_2 ⊗ ¬v_3)]_I ≤ 1    (1)

One can easily verify that I_1 with I_1(v_1) = 0, I_1(v_2) = 0 and I_1(v_3) = 1 is a model of this formula as [¬(v_1 ⊗ v_2 ⊗ ¬v_3)]_{I_1} = 1. Similarly, I_2 with I_2(v_1) = 0.6, I_2(v_2) = 0.7 and I_2(v_3) = 0.2 is a model too because [¬(v_1 ⊗ v_2 ⊗ ¬v_3)]_{I_2} = 0.9. Even though the formula is not perfectly satisfied under interpretation I_2, the degree of satisfaction is still high enough to meet the lower bound l = 0.5. The existence of the models I_1 and I_2 shows that formula (1) is satisfiable. Solving SAT∞ amounts to finding a model for the set of formulas given, or deciding that there is no interpretation that satisfies all formulas and that the set is UNSAT∞.
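As an illustration (ours; the paper itself gives no code), the Łukasiewicz connectives and the check of the two models of formula (1) translate directly to Python:

    # Łukasiewicz connectives on truth degrees in [0, 1].
    def neg(a):                        # [¬α]_I = 1 − [α]_I
        return 1.0 - a

    def conj(a, b):                    # [α ⊗ β]_I = max([α]_I + [β]_I − 1, 0)
        return max(a + b - 1.0, 0.0)

    def disj(a, b):                    # [α ⊕ β]_I = min(1, [α]_I + [β]_I)
        return min(1.0, a + b)

    def impl(a, b):                    # [α → β]_I = min(1 − [α]_I + [β]_I, 1)
        return min(1.0 - a + b, 1.0)

    # Formula (1): 0.5 <= [¬(v1 ⊗ v2 ⊗ ¬v3)]_I <= 1
    def formula1(v1, v2, v3):
        return neg(conj(conj(v1, v2), neg(v3)))

    print(formula1(0.0, 0.0, 1.0))     # I_1: prints 1.0, a model
    print(formula1(0.6, 0.7, 0.2))     # I_2: prints 0.9, still >= 0.5, also a model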
A. SAT∞ as an Optimization Problem

As we propose an optimization approach to solving satisfiability in fuzzy logics, we need to reformulate SAT∞ instances as optimization problems, i.e. define a function over the solution space such that optimizing this function corresponds to solving the SAT∞ instance. A SAT∞ problem consists of a set Θ of formulas α_i, each of which must be satisfied to a certain degree, as defined by an upper and lower bound per formula. Given these n formulas α_i, bounds (u_i, l_i), and an interpretation I, we define the objective function f as follows:

    f(I) = (Σ_{i=1}^{n} f_I(α_i, l_i, u_i)) / n    (2)

    f_I(α_i, l_i, u_i) = 1                            if l_i ≤ [α_i]_I ≤ u_i
                       = [α_i]_I / l_i                if [α_i]_I < l_i
                       = (1 − [α_i]_I) / (1 − u_i)    if [α_i]_I > u_i    (3)

with [α_i]_I representing the degree of satisfaction of formula α_i under interpretation I. Each f_I is a trapezoid function, with a plateau of value 1 when formula α_i's degree of satisfaction lies between the given bounds, and a slope leading to the plateau when the satisfaction lies outside these bounds, as visualised in Figure 1. This formulation is similar to that typically used for SAT in propositional logic, where the number of satisfied clauses is divided by the number of clauses. The difference here is that in case of non-satisfaction, we do not always return 0 as in SAT, but we give gradient information that can point an optimization algorithm to satisfying configurations.

[Fig. 1. f_I(α_i) for a formula α_i with lower bound l_i, upper bound u_i and degree of satisfaction [α_i]_I.]

The objective function is formulated such that the global maxima will always have a function value 1 if the SAT∞ instance is satisfiable. In that case, every global maximum corresponds to a model of the problem. Given an algorithm of which we can prove convergence to the global maximum on this function, we have a complete SAT∞ solver. As we can not provide such an algorithm, we are left with an incomplete solver, being able to solve SAT∞ sometimes, but never concluding UNSAT∞.
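A direct Python transcription of equations (2) and (3) might look as follows (a sketch; the names are ours). Note that the slopes never divide by zero: a degree of satisfaction cannot fall below a lower bound of 0 or exceed an upper bound of 1.

    def clause_score(sat_degree, l, u):
        """Trapezoid f_I(alpha_i, l_i, u_i) of equation (3)."""
        if l <= sat_degree <= u:
            return 1.0                             # plateau: formula satisfied
        if sat_degree < l:
            return sat_degree / l                  # slope up towards the lower bound
        return (1.0 - sat_degree) / (1.0 - u)      # slope down from the upper bound

    def objective(interpretation, formulas):
        """Equation (2): mean trapezoid score over all n formulas.

        `formulas` is a list of (evaluate, l, u) triples, where `evaluate`
        maps an interpretation to that formula's degree of satisfaction."""
        scores = [clause_score(ev(interpretation), l, u) for ev, l, u in formulas]
        return sum(scores) / len(scores)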
III. ALGORITHMS

In this section, we discuss three optimization algorithms that we evaluate and compare on a set of benchmark problems further on in this paper.

A. Hillclimber

The first algorithm we consider is a simple hillclimber. Such local search algorithms often perform very well given their simplicity. The most basic version iteratively updates a candidate solution by evaluating all possible neighbouring solutions, given a neighbourhood function, and picks the most improving one. This method is called steepest ascent hill climbing (or descent, when considering a minimization problem), because it stops when it no longer finds better solutions in the neighbourhood, i.e. you are at the top of the hill.
When a local optimum has been found, the for fuzzy truth values v : v [0, 1]. When a step takes the ∀ ∈ hillclimber restarts from a random location. Restart strategies algorithm outside the box defined by the possible truth values for (local) search algorithms have often been shown to improve ([0, 1]n), we force it back in by setting the violating variables an algorithm’s qualitative performance. The most cited reason to the closest value in the box, i.e. 0 or 1. for this is that performance almost always depends on starting Figure 2 shows the pseudo-code for this hillclimber. Ini- conditions, and especially the starting location in the search tially, a random interpretation (variable assignment) is gener- space, although to what extent depends on the algorithm itself. ated and assigned to the current solution . Then, as long as I The most simple hillclimber (without restart strategy) simply no model (satisfying assignment) is found or the maximum converges to the local optimum in which basin of attraction it number of allowed function evaluations is not exceeded, we has started. Other algorithms, such as CMA-ES – see below –, iteratively update the current solution. To select a neighbour have mechanisms that allow them to overcome such problems, on the hypersphere, we generate a random normalized vector although only to a certain extent. A restart strategy allows an to select a direction, and multiplied with the stepsize or radius algorithm that may have started in a very bad region of the of the hypersphere, added to the current solution, we obtain search space, with little to no chance of finding good solutions, a neighbour. If it is an improvement, we continue with that move to another location in the search space that may prove solution. If not, with probability p we continue with that to be much better. solution anyway, and otherwise we select another neighbour. In the particular hill climber we propose, we use a continu- If no improvement has been made over the past i function ous neighbourhood, where all neighbours lie in a hypersphere evaluations, we reassign the current solution to a random around the current solution at a certain distance. Because of the location in the search space, i.e. a restart. continuous neighbourhood, we cannot consider all potential neighbours, and therefore cannot definitely tell whether there 1: procedure HILLCLIMBER 2: = randomInterpretation() are no improving neighbours and whether the current solution I is a local optimum. Therefore, we determine that we are in a 3: while not satisfied and not maxEvaluations do local optimum when we have not observed an improvement 4: s = randomNormalizedVector() 5: tmp = + (s stepSize) for i iterations; at that point the hillclimber restarts. I × 6: if f(tmp) f( ) rand() < p then For the same reason that we cannot evaluate all neighbours, ≤ I || 7: = tmp we generate and evaluate only one single neighbour and decide I how to proceed solely based on that neighbour’s fitness. This 8: end if neighbour is selected by taking a random direction and taking 9: if restartCondition() then 10: = randomInterpretation() the point on the hypersphere that lies in that direction. If that I new candidate solution is an improvement or a status quo, 11: else we accept the step, otherwise, depending on probability p, 12: stepSize = updateStepsize(iteration) we accept the worsening step anyway, or we pick another 13: end if neighbour in a different direction. 
A last mechanism is a changing neighbourhood over time (iterations) [15]. In the experimental section, we evaluate both constant neighbourhoods (a hypersphere with fixed radius) and exponential decay neighbourhoods (a hypersphere with exponentially decaying radius). This concept of a decaying neighbourhood is intended to allow the hillclimber to initially make large steps to locate promising regions in the search space, and then gradually refine the solutions by searching more locally. With a decaying stepsize, the schedule is reset after a restart, so that the hillclimber can again start making large steps to quickly improve the candidate solution. Besides this, we add some Gaussian noise to the stepsize.

Note that this particular hillclimber could arguably be referred to as simulated annealing with a fixed acceptance probability. In simulated annealing, the probability of accepting worsening steps is decayed over time, and the neighbourhood usually stays the same. In our algorithm the inverse is true, i.e. the probability of accepting worsening steps is fixed and the neighbourhood can be varied, by decaying the distance between neighbouring solutions.

B. Population-based Hillclimber

Instead of working with a single solution, we can build a hillclimber that works on a population of candidate solutions. Populations typically allow an algorithm to better estimate the local shape of the search space and make more informed decisions. In this case, we implement some principles borrowed from Evolution Strategies (ES) [16], [17], which optimize an objective function by iteratively applying the principles of selection, mutation and recombination to a population of candidate solutions.

We have implemented these principles using a multivariate distribution with the same variance in all dimensions, i.e. no covariances. This distribution is used to sample λ solutions for the next generation, from which the µ best are selected, which are subsequently used to calculate the mean of the distribution that will generate the next generation. This mean is simply the average of the µ best solutions. As with the no-population hillclimber, we allow this population-based hillclimber to restart when it has not encountered better solutions for a certain number of function evaluations, and we will also look at both constant and decaying neighbourhoods (standard deviation). Boundary violations are handled in the same way as in the previous hillclimber. Figure 3 shows pseudocode for this algorithm.

    procedure POP-HILLCLIMBER
        mean = randomInterpretation()
        while not satisfied and not maxEvaluations do
            generation_λ = sampleDistribution(mean, std)
            generation_λ = sort(generation_λ)
            if restartCondition() then
                mean = randomInterpretation()
            else
                best_µ = generation_λ[1..µ]
                mean = avg(best_µ)
                std = updateStd(iteration)
            end if
        end while
    end procedure

Fig. 3. Population-based Hillclimber

Although we could also force the candidate solutions that violate the box constraint back in the box, we apply a penalty function as suggested in [18], because CMA-ES makes assumptions about the distribution of these solutions. This penalty is quadratic in the distance between the candidate solution and the box. The new objective function is:

    f(I) = −(Σ_{i=1}^{n} f_I(α_i, l_i, u_i)) / n                    if I ∈ [0, 1]^v
         = 10^4 Σ_{i=1}^{v} θ(|I(i) − 0.5| − 0.5)(I(i) − 0.5)²      if I ∉ [0, 1]^v    (4)

with θ the Heaviside function (0 for negative input, otherwise 1) and v the number of variables.
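A literal Python rendering of equation (4) (our sketch; the non-strict comparison mirrors the convention θ(0) = 1 above) is:

    import numpy as np

    def cma_objective(x, base_f):
        """Equation (4): negated satisfaction objective inside the box,
        quadratic penalty outside. `x` is a candidate interpretation as a
        numpy array; `base_f` is the objective of equation (2)."""
        if np.all((x >= 0.0) & (x <= 1.0)):
            return -base_f(x)                       # minus: CMA-ES minimizes
        theta = (np.abs(x - 0.5) - 0.5) >= 0.0      # Heaviside step per coordinate
        return 1e4 * np.sum(theta * (x - 0.5) ** 2)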
Note the minus in the first part, which is necessary because CMA-ES is a minimization technique.

We make the new function conditional instead of adding the penalty to the objective function as is commonly done, because numbers beyond [0, 1] have no meaning in fuzzy logics and prevent the formulas from being evaluated. We do realise that the objective function is now different for the hillclimbers and CMA-ES, but note that this is necessary. As we want each algorithm to perform as well as possible, we chose not to use this penalty function for the hillclimbers, but to repair solutions, as this gives better performance. CMA-ES unfortunately does not perform well when repairing solutions and requires this penalty function.

C. CMA-ES

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [13] is an algorithm belonging to the class of Evolution Strategies, and is considered to be state-of-the-art in black-box optimization on a continuous domain. CMA-ES is a (µ, λ)-ES with the addition of a non-random adaptation of the multivariate distribution that generates candidate solutions. This additional mechanism allows CMA-ES to efficiently explore promising search directions by adapting the distribution to the local shape of the search space.

For a full description of this algorithm, we refer the reader to [13]. Pseudocode is shown in Figure 4.

    procedure CMA-ES
        mean = randomInterpretation()
        cov = initialCovarianceMatrix()
        while not satisfied and not maxEvaluations do
            generation_λ = sampleDistribution(mean, cov)
            generation_λ = sort(generation_λ)
            if restartCondition() then
                mean = randomInterpretation()
                cov = initialCovarianceMatrix()
            else
                best_µ = generation_λ[1..µ]
                mean = avg(best_µ)
                cov = updateCovMatrix(mean, best_µ)
            end if
        end while
    end procedure

Fig. 4. Covariance Matrix Adaptation Evolution Strategy

Auger and Hansen have introduced a restart strategy for CMA-ES called LR-CMA-ES [19]. Restart conditions are, amongst others: little or no variation in the fitness of the last x generations, an extremely small standard deviation of the normal distribution, and no change in the fitness function when perturbing any coordinate of the mean by 0.2 standard deviations. These are all indications that CMA-ES has converged and that it will not very likely find better solutions in subsequent generations. When this happens, LR-CMA-ES restarts in another random location in the search space. In our experiments, we will consider both these sophisticated restart conditions, and our simple 'x function evaluations without improvement' restart condition.
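The simple 'x function evaluations without improvement' condition used throughout can be factored out as a small helper; a sketch (ours), assuming fitness is being maximized (for CMA-ES, which minimizes equation (4), the comparison flips sign):

    class StallRestart:
        """Signal a restart after `patience` evaluations without improvement."""

        def __init__(self, patience):              # e.g. 1000, or 1000 * lambda
            self.patience = patience
            self.best = float("-inf")
            self.stale = 0

        def update(self, fitness):
            """Record one function evaluation; return True when a restart is due."""
            if fitness > self.best:
                self.best, self.stale = fitness, 0
            else:
                self.stale += 1
            if self.stale >= self.patience:
                self.best, self.stale = float("-inf"), 0   # reset for the next run
                return True
            return False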
IV. EXPERIMENTS

In this section, we evaluate the previously described algorithms on a testbed of SAT∞ problems. These benchmark problems come from [11] and [12] and can be divided into two problem classes, namely those with constants (formulas' upper and lower bounds) from T4 = {0, 1/4, 2/4, 3/4, 1}, and those with constants from T100 = {0, 1/100, .., 99/100, 1}. The latter problems were much harder for the analytical approach from [11] to solve, due to the higher granularity in constants. The testbed contains 50 instances of each problem class, each problem having 40 dimensions or variables, making for 100 problem instances in total.

The algorithms we will evaluate all have some parameters that need to be set, such as the stepsize (or standard deviation) in the hillclimbers, and the population size in the population-based hillclimber and CMA-ES. Before we compare these algorithms, we perform a performance analysis of each algorithm with various parameter settings, and select those parameter values that yield the best results. These settings are then used in the final comparison.

A. Hillclimber

Figures 5 and 6 show the performance of the hillclimber with different stepsize schedules, either constant or exponentially decaying. Table I summarises these results. When the stepsize is decaying exponentially, it starts at value 2.5 and does not go lower than 0.025. These values were selected for their good performance. The probability of accepting worsening solutions is set to 0.1. Restarts are performed after 1000 iterations without improvement. Evaluations are limited to 10^7. For each instance, 10 runs were performed, all algorithms using the same list of initial and restart positions (randomly generated for each run). This setup stays the same for all preliminary experiments in this paper.

[Fig. 5. Probability of the hillclimber solving an instance for various stepsize schedules on Łukasiewicz T4 problems.]

[Fig. 6. Probability of the hillclimber solving an instance for various stepsize schedules on Łukasiewicz T100 problems.]

TABLE I
PERCENTAGE OF SUCCESSFUL RUNS FOR DIFFERENT HILLCLIMBER CONFIGURATIONS. ALL OBSERVED DIFFERENCES ARE STATISTICALLY SIGNIFICANT (WILCOXON SIGNED RANK TEST).

    Problem class     Configuration    Success
    Łukasiewicz T4    Decay 0.99       85.4%
                      Decay 0.999      84.8%
                      Decay 0.9999     84.6%
                      Constant 0.025   85.0%
                      Constant 0.25    73.6%
                      Constant 2.5     2.2%
    Łukasiewicz T100  Decay 0.99       95.8%
                      Decay 0.999      94.8%
                      Decay 0.9999     94.0%
                      Constant 0.025   95.0%
                      Constant 0.25    66.8%
                      Constant 2.5     0.4%

Both the hillclimbers with the decaying stepsizes and the hillclimber with the smallest constant stepsize solve an apparently similar number of instances at the end of the allowed number of evaluations, although the hillclimber with the fastest decay schedule is statistically significantly better.

We can differentiate between the hillclimbers with decay schedules by looking at the average number of evaluations necessary to solve an instance. Slower decay schedules do not find solutions early on, but after a certain number of evaluations, they quickly climb to a much higher probability of solving an instance than those hillclimbers with faster decay schedules. This is a result of a higher amount of exploration with a large stepsize, allowing them to locate better regions in the search space before performing more local search with a small stepsize. From these experiments, we select a decay schedule with rate 0.99 to use in further experiments, as it has a statistically significantly higher probability of solving a problem instance in the benchmark set.

B. Population-based Hillclimber

Considering the population-based hillclimber, we can optimize the population size λ, the ratio of selected individuals to the number of individuals in a population µ/λ, and the size of the distribution used to generate candidate solutions, i.e. the standard deviation or stepsize. As many combinations of settings are possible, we restrict ourselves to evaluating different values for λ (10–1000), as well as two µ/λ ratios (0.5 and 0.1). We assume that the performance of various stepsize schedules will not differ much compared to that found for the previous hillclimber, and as such, we use the 0.99 decay schedule. Restarting happens after 1000 × λ evaluations without improvement.

Figures 7 and 8 show the relation between population size and hillclimber performance. Table II summarises these results. As the population size grows, the hillclimbers perform worse because of the limited function evaluation budget. Selecting fewer individuals from the population (the µ/λ = 0.1 ratio) is also clearly better. From this analysis, we select a restarting hillclimber with λ = 10 and µ/λ = 0.1 for both problem classes. We also checked our assumption about taking the best stepsize schedule from the first hillclimber for the population-based one, and the results validated our assumption (results not included in this paper).

[Fig. 7. Probability of the population-based hillclimber solving an instance in function of the population size on Łukasiewicz T4 problems.]

[Fig. 8. Probability of the population-based hillclimber solving an instance in function of the population size on Łukasiewicz T100 problems.]

[Fig. 9. Probability of CMA-ES solving an instance in function of the population size on Łukasiewicz T4 problems.]

[Fig. 10. Probability of CMA-ES solving an instance in function of the population size on Łukasiewicz T100 problems.]

TABLE II
PERCENTAGE OF SUCCESSFUL RUNS FOR DIFFERENT POPULATION-BASED HILLCLIMBER CONFIGURATIONS. ALL OBSERVED DIFFERENCES ARE STATISTICALLY SIGNIFICANT (WILCOXON SIGNED RANK TEST).

    Problem class     Configuration                   Success
    Łukasiewicz T4    λ = 10, µ/λ = 0.5, no restart   79.4%
                      λ = 10, µ/λ = 0.1, no restart   84.2%
                      λ = 10, µ/λ = 0.5, restart      83.8%
                      λ = 10, µ/λ = 0.1, restart      86.0%
    Łukasiewicz T100  λ = 10, µ/λ = 0.5, no restart   84.4%
                      λ = 10, µ/λ = 0.1, no restart   92.8%
                      λ = 10, µ/λ = 0.5, restart      90.8%
                      λ = 10, µ/λ = 0.1, restart      97.6%

C. CMA-ES

Although CMA-ES is always claimed to be virtually parameter free [18], we can still play around with λ, the number of offspring generated each iteration, and the ratio µ/λ, the number of individuals selected each generation versus the number of individuals in that generation, as before with the population-based hillclimber. Setting λ is usually left to the implementer of the system, although a guideline of 4 + ⌊3 ln(n)⌋ is suggested in [18]. For µ it is strongly suggested to set it to λ/2 [13], yet we found in [12] a µ/λ ratio of 0.1 to work better on the types of problems considered. We perform a similar experiment as the population-based hillclimber experiment before, as we compare CMA-ES with and without restarting strategies (both the LR-CMA-ES and our '1000 × λ iterations without improvement' restart strategy) using µ/λ ratios 0.1 and 0.5, and varying population sizes, to determine the best parameters for later comparison with the hillclimbers.

Figures 9 and 10 show how the likelihood of these variants solving a SAT∞ instance scales with the population size on the same benchmark problems as before. Table III summarises these results.

TABLE III
PERCENTAGE OF SUCCESSFUL RUNS FOR DIFFERENT CMA-ES CONFIGURATIONS. ALL OBSERVED DIFFERENCES ARE STATISTICALLY SIGNIFICANT (WILCOXON SIGNED RANK TEST).

    Problem class     Configuration                       Success
    Łukasiewicz T4    λ = 100, µ/λ = 0.5, CMA-ES          79.2%
                      λ = 1000, µ/λ = 0.1, CMA-ES         84.0%
                      λ = 10, µ/λ = 0.5, Restart CMA-ES   92.8%
                      λ = 40, µ/λ = 0.1, Restart CMA-ES   92.4%
                      λ = 100, µ/λ = 0.5, LR-CMA-ES       78.8%
                      λ = 1000, µ/λ = 0.1, LR-CMA-ES      84.0%
    Łukasiewicz T100  λ = 1000, µ/λ = 0.5, CMA-ES         79.2%
                      λ = 20, µ/λ = 0.1, CMA-ES           82.4%
                      λ = 20, µ/λ = 0.5, Restart CMA-ES   93.4%
                      λ = 20, µ/λ = 0.1, Restart CMA-ES   95.4%
                      λ = 200, µ/λ = 0.5, LR-CMA-ES       79.2%
                      λ = 20, µ/λ = 0.1, LR-CMA-ES        84.6%

Surprisingly, the LR-CMA-ES restart strategy performs almost identically to the standard CMA-ES, suggesting that its conditions for restarting are not suited for the function landscapes encountered in the kind of problems considered here. On the other hand, the simple 'restart after λ × 1000 evaluations without improvement' strategy does improve CMA-ES' performance a lot. The best performing parameter configurations are CMA-ES with our simple restart strategy and λ = 10, µ/λ = 0.5 for the T4 problem class, and the same with λ = 20, µ/λ = 0.1 for the T100 problem class.

D. Final Comparison

Given this analysis, we compare the best performing parameter configurations for the hillclimbers and CMA-ES by looking at the percentage of successful runs in function of both evaluations and CPU time. The experimental setup is the same, with the same benchmark instances and a maximum of 10^7 function evaluations, except that now every algorithm solves each instance 50 times instead of 10 as in the preliminary experiments, for more significant results.

[Fig. 11. Probability of solving an instance in function of the number of evaluations for the best population size on Łukasiewicz T4.]

[Fig. 12. Probability of solving an instance in function of the number of evaluations for the best population size on Łukasiewicz T100.]

TABLE IV
SUMMARY OF THE RESULTS FOR THE HILLCLIMBER, POPULATION-BASED HILLCLIMBER AND CMA-ES. PERCENTAGE OF SUCCESSFUL RUNS, AVERAGE NUMBER OF EVALUATIONS IN A SUCCESSFUL RUN AND CPU TIME OF A SUCCESSFUL RUN ARE RECORDED. ALL OBSERVED DIFFERENCES ARE STATISTICALLY SIGNIFICANT (WILCOXON SIGNED RANK TEST).

    Problem class     Algorithm        Success  Evaluations  CPU time
    Łukasiewicz T4    Hillclimber      85.08%   169477.6     22219
                      Pop-Hillclimber  86.16%   144521.8     14816
                      CMA-ES           93.56%   309105.6     42792
    Łukasiewicz T100  Hillclimber      95.12%   224423.4     31904
                      Pop-Hillclimber  96.96%   417474.1     54547
                      CMA-ES           96.84%   295679.0     29516

Figures 11 and 12 show the performance of the algorithms discussed previously with their best parameter configurations in function of the number of function evaluations. Table IV summarizes these results. The most notable result we find is that CMA-ES is marginally, but statistically significantly, outperformed by the population-based hillclimber on the Łukasiewicz T100 problems in terms of successful runs, although CMA-ES needs much fewer function evaluations to come to that result. Also note that this order could again be reversed given a larger budget of function evaluations, as both methods do not seem to have converged yet, and thus we cannot conclusively call one method better than the other.

The relative performance on the Łukasiewicz T4 problems lies more in line with expectations: the population-based hillclimber has more successful runs than the hillclimber and needs fewer function evaluations to get there. CMA-ES finds more solutions than both hillclimbers, although it needs on average more evaluations to find a solution. One caveat: this average number of evaluations needed is higher for CMA-ES partly because the hillclimbers start finding solutions an order of magnitude faster, but also because CMA-ES still manages to find solutions near the end of a run, in contrast to the hillclimbers. Therefore, this measure must be viewed with caution. The same is true for the CPU time.

V. CONCLUSION

In previous work, we proposed an optimization approach for solving satisfiability problems in the Łukasiewicz fuzzy logic. We applied the CMA-ES algorithm out-of-the-box, as it is considered state-of-the-art in optimization on a continuous domain, and found it to be a significant contribution to the state-of-the-art in satisfiability solving in fuzzy logics, as it outperformed the standard analytical approach by a large margin on the hard problem class analysed. In this paper, we analysed the performance of different optimization algorithms, including CMA-ES, in order to test our assumption that CMA-ES would be the best optimization algorithm for these problems, an assumption based on CMA-ES' status in the community. On the Łukasiewicz T4 problems, CMA-ES proved to be the best performing algorithm, although this result must be seen in the light of our previous work [12], where CMA-ES did not outperform the state-of-the-art analytical approach on this problem class. The most interesting result described in this paper concerns the Łukasiewicz T100 problems, where CMA-ES is outperformed by the population-based hillclimber we described in this paper.

In future work, we intend to investigate how well this optimization approach works for other fuzzy logics.

ACKNOWLEDGMENT

This work was partially funded by a joint VUB-UGent Research Foundation-Flanders (FWO) project. Also, Tim Brys is funded by a Ph.D. grant of the Research Foundation-Flanders (FWO).

    REFERENCES

[1] L. Zhang and S. Malik, "The quest for efficient boolean satisfiability solvers," in Computer Aided Verification, ser. Lecture Notes in Computer Science, E. Brinksma and K. Larsen, Eds. Springer Berlin / Heidelberg, 2002, vol. 2404, pp. 641–653.
[2] H. Kautz and B. Selman, "Planning as satisfiability," in Proceedings of the 10th European Conference on Artificial Intelligence, ser. ECAI '92. New York, NY, USA: John Wiley & Sons, Inc., 1992, pp. 359–363. [Online]. Available: http://dl.acm.org/citation.cfm?id=145448.146725
[3] N. Tamura, A. Taga, S. Kitagawa, and M. Banbara, "Compiling finite linear CSP into SAT," in Principles and Practice of Constraint Programming - CP 2006, ser. Lecture Notes in Computer Science, F. Benhamou, Ed. Springer Berlin / Heidelberg, 2006, vol. 4204, pp. 590–603. [Online]. Available: http://dx.doi.org/10.1007/11889205_42
[4] U. Straccia and F. Bobillo, "Mixed integer programming, general concept inclusions and fuzzy description logics," Mathware & Soft Computing, 2007.
[5] S. Schockaert, M. De Cock, and E. E. Kerre, "Spatial reasoning in a fuzzy region connection calculus," Artificial Intelligence, vol. 173, no. 2, pp. 258–298, 2009. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S000437020800146X
[6] J. Janssen, S. Schockaert, D. Vermeir, and M. De Cock, "Reducing fuzzy answer set programming to model finding in fuzzy logics," CoRR, vol. abs/1104.5133, 2011.
[7] T. Lukasiewicz and U. Straccia, "Tightly integrated fuzzy description logic programs under the answer set semantics for the semantic web," in Proceedings of the 1st International Conference on Web Reasoning and Rule Systems, ser. RR'07. Berlin, Heidelberg: Springer-Verlag, 2007, pp. 289–298. [Online]. Available: http://dl.acm.org/citation.cfm?id=1768725.1768750
[8] N. Madrid and M. Ojeda-Aciego, "Measuring inconsistency in fuzzy answer set semantics," Transactions on Fuzzy Systems, vol. 19, pp. 605–622, 2011.
[9] D. Van Nieuwenborgh, M. De Cock, and D. Vermeir, "An introduction to fuzzy answer set programming," Annals of Mathematics and Artificial Intelligence, vol. 50, pp. 363–388, 2007.
[10] R. Hähnle, "Many-valued logic and mixed integer programming," Annals of Mathematics and Artificial Intelligence, vol. 12, pp. 231–263, 1994.
[11] S. Schockaert, J. Janssen, and D. Vermeir, "Satisfiability checking in Łukasiewicz logic as finite constraint satisfaction," Journal of Automated Reasoning, 2012. [Online]. Available: http://dx.doi.org/10.1007/s10817-011-9227-0
[12] T. Brys, Y.-M. De Hauwere, M. De Cock, and A. Nowé, "Solving satisfiability in fuzzy logics with evolution strategies," in Proceedings of the 31st Annual North American Fuzzy Information Processing Society Meeting, 2012.
[13] N. Hansen, "The CMA evolution strategy: a comparing review," in Towards a New Evolutionary Computation. Advances on Estimation of Distribution Algorithms, J. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, Eds. Springer, 2006, pp. 75–102.
[14] P. Hájek, Metamathematics of Fuzzy Logic. Springer, 1998.
[15] N. Mladenović and P. Hansen, "Variable neighborhood search," Computers & Operations Research, vol. 24, no. 11, pp. 1097–1100, 1997.
[16] T. Bäck, F. Hoffmeister, and H.-P. Schwefel, "A survey of evolution strategies," in Proceedings of the Fourth International Conference on Genetic Algorithms. Morgan Kaufmann, 1991, pp. 2–9.
[17] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies – a comprehensive introduction," Natural Computing, vol. 1, pp. 3–52, 2002.
[18] N. Hansen and S. Kern, "Evaluating the CMA evolution strategy on multimodal test functions," in Parallel Problem Solving from Nature - PPSN VIII, ser. Lecture Notes in Computer Science, X. Yao, E. Burke, J. Lozano, J. Smith, J. Merelo-Guervós, J. Bullinaria, J. Rowe, P. Tino, A. Kabán, and H.-P. Schwefel, Eds. Springer Berlin / Heidelberg, 2004, vol. 3242, pp. 282–291.
[19] A. Auger and N. Hansen, "Performance evaluation of an advanced local search evolutionary algorithm," in IEEE Congress on Evolutionary Computation, 2005, pp. 1777–1784.