Controlled Revision - An Algorithmic Approach for Belief Revision

DOV GABBAY, Department of Computer Science, King’s College London, Strand, London WC2R 2LS. e-mail: [email protected]

GABRIELLA PIGOZZI, Department of , University of Konstanz, 78457 Konstanz, Germany. e-mail: [email protected]

JOHN WOODS, The Abductive Systems Group, University of British Columbia, Vancouver, BC, Canada V6T 1Z1, and Department of Computer Science, King’s College, London, Strand, London WC2R 2LS e-mail: [email protected]

Abstract

This paper provides algorithmic options for belief revision of a database receiving an infinite stream of inputs. At stage $n$, the database is $\Delta_n$, receiving the input $q_{n+1}$. The revision algorithms for moving to the new database $\Delta_{n+1} = \Delta_n \circ q_{n+1}$ take into account the history of previous revisions actually executed, as well as possible revision options which were discarded at the time but may now be pursued. The appropriate methodology for carrying this out is Labelled Deductive Systems.

1 Background

The present paper offers an algorithmic dynamic model of revision, as part of our dynamic approach to practical reasoning. This model contains the revision algorithm as part of the logic. The methodological view that an essential part of a logical system is an algorithm for the logic, as well as for additional mechanisms such as abduction and revision, has been strongly put forward in [8, 9, 11] and seriously developed in [13, 14]. The idea that formal modelling of an epistemic change cannot ignore the algorithmic approach, or other computational notions, is recognized also by other active researchers in the community (see, for example, Shoham and Cousins [31]).

A major component of any practical reasoning system is the representation of the epistemic change occasioned by the addition of new information to an existing set of data $\Delta$. When a consistent theory $\Delta$ receives a non-contradictory wff $q$ as input, the new theory will simply be $\Delta \cup \{q\}$ if it is consistent. A more interesting case is when the new piece of information $q$ causes inconsistency, in which case $\Delta$ needs to be revised to a $\Delta'$ which consistently includes $q$. The main application area for revision theory is commonsense reasoning, which involves a current database and a sequence of different inputs $q_1, q_2, q_3, \ldots$. The seminal work in this area is due to Alchourrón, Gärdenfors and Makinson (see [15] for an extended introduction to belief revision). The AGM approach does not, however, provide specialized algorithms for revision. Moreover, no special attention (in the form of properties)

is accorded to the iterated revision process. In order to choose which propositions to remove, a system called epistemic entrenchment has been proposed in [17]. Intuitively, a total ordering is defined over formulas, so that less entrenched beliefs are retracted in preference to those more entrenched. The system, however, does not provide a mechanism which defines how the ordering should change after a revision takes place; nor does it indicate where the ordering itself comes from. The postulates for epistemic change proposed by Alchourrón, Gärdenfors and Makinson had the undeniable merit of having provoked formal reflections in an area which has been active in both philosophy and artificial intelligence since the early 1980s. It should be noticed that the AGM approach cannot be applied in computational contexts, since belief sets (which are deductively closed) are too large. For this reason, belief bases have been considered instead of belief sets. We share Friedman and Halpern's view [7] that it is no longer sufficient simply to propose a new set of rational axioms for belief revision and to prove some consequences from it. Our approach attempts instead to fill the gap in AGM theory, providing a reasoned mechanism which selects the data to be removed. We suggest that, even in the simplest cases, it is hard to imagine that we can revise our knowledge set in one way only. Selection among several revision options is assigned to specific policies, some of which can be specified by the application area. Time is also encoded in our model, and the ordering among options derives in part from the history of past revisions, and partly from consideration of economic factors or the nature of the inputs (as, for example, when we receive the same input several times). This means that we need a richer structure for the set of data, and we find that Labelled Deductive Systems (LDS) theory [8] proves to be a framework in which these features can be implemented attractively, and an appropriate revision algorithm defined. We call the algorithm which takes account of all the above components Controlled Revision (CR). One of the attractions of a Labelled Deductive Systems approach to Controlled Revision is that the labels give us a logical means of control and representation, which facilitates the development of principles of Controlled Revision with relative ease and generality. The labels have multiple roles. One is to help define the properties of the selected logic, and the second is to annotate and control the iterated revision process. In particular, when data are rejected from the database, labels allow us to retrace and to retract the consequences of the data which have just been rejected and which are not otherwise supported.

Another notable feature of Controlled Revision can be seen as follows. In the literature, when $\Delta \cup \{q\}$ is inconsistent, non-tolerant approaches to inconsistency assume two main forms:

1. reject the input $q$ which causes inconsistencies (non-insistent policy);

2. maintain $q$, identifying the subset $\Delta' \subseteq \Delta$ such that, when $\Delta'$ is rejected, consistency is restored (insistent policy). Then $\Delta \circ q = (\Delta \setminus \Delta') \cup \{q\}$.

We will show that in Controlled Revision both options are possible and that the algorithm chooses the best option according to the parameters recorded in the labels and the defined policies. Some work compatible with our approach to revision has already been done. We refer, in particular, to the Truth Maintenance System (TMS) approach, proposed by Doyle in [4]. The main idea of this orientation (and, at the same time, the main difference from AGM theory; see [5]) is that the set of data to be revised exhibits additional structure in which certain beliefs (i.e. wffs) are annotated as justifying other beliefs. Our own proposal shares with the foundations approach the aim of defining a procedure for identifying candidate wffs for consistency-restoring revision. As Gärdenfors and Rott have noticed [18], TMS performs belief changes differently from the way they are achieved in AGM. TMS executes a direct and procedural belief revision, in which no role is played by rationality assumptions and revision reduces to inference relations which are usually non-classical (e.g., paraconsistent and non-monotonic). One of the main difficulties with TMS is the handling of beliefs responsible for contradictions, known as dependency-directed backtracking. It has been criticized for insufficient control in identifying the nodes responsible for contradictions. One of the main advantages of our model is that it is able to find the best solution for the management of inconsistency problems, as they arise in their considerable variety.

2 Introduction

This section explains the ideas and concepts of this paper through an example. Imagine a police inspector investigating some serious fraud case which has made headline news. He is expected to deliver a report outlining the details of the fraud. His investigations stretch over a period of time during which he accumulates the information sequence $q_1, q_2, q_3, \ldots$, which we denote by $\bar{q}$. The expression $q_i$ denotes schematically the sequence of items which is the batch of information he gets at stage $i$. We allow for the same information to come from different sources, so $q_i$ may be equal to $q_j$. However the news is so hot that, each time a new piece of information is uncovered, a new interim report is expected. In order to do this, the inspector has to perform three operations:

1. Evaluate the information at each stage to try and build the current coherent picture of what has happened. (For simplicity, let us assume the information arrives one well formed formula (wff) at a time.)

2. Send an interim report to describe what he currently thinks has happened.

3. Keep a full record of what piece of information came from where and at what time.

The inspector therefore labels his information by double labels. Each item $q$ is labelled $(a, t) : q$, where $a$ identifies the piece of information with respect to identification tag of source, reliability, etc., and $t$ is the time at which it arrived. At each time $t$, a theory $\Delta_t$ is formed (the interim report), being formally a consistent subset of all the information received up to time $t$. The flow of information is then represented by a sequence of evolving databases. The initial state is called $\Delta_0$ and it can be the empty set, or it contains the set of fixed integrity constraints (a consistent set, not to be revised). In our example, $\Delta_0$ contains some rules of commonsense knowledge the inspector needs to use in order to manage the information he collects. Let us assume the interim report is $\Delta_n$. Of course, information arriving at time $t$ may be rejected as unreliable (because of inconsistency with other sources), but a new piece of information uncovered at a later time may not only be accepted but may cause the acceptance of the previously rejected information. Therefore, at stage $n$, the inspector labels his information as follows: $(a, (t, s_t, s_{t+1}, \ldots, s_n)) : q$, where $t$ is the stage when $q$ arrived (i.e. $s_t = +$) and $s_i$, for $t < i \le n$, indicates whether $q$ was accepted ($+$) or rejected ($-$) at stage $i$.

The above is the model of belief revision we are using. Since the inspector needs to keep a full record of the past and present information, the database $\Delta_n$ will list all the wffs, both in and out at stage $n$. When the investigator receives a new piece of information at stage $n+1$, he updates the history labels of the formulas in the new database. An interesting consequence of the record of the history of past revisions is that in our model we can allow the reinstatement of a formula. Consider the following case:

1. Start with $\Delta_0 = \emptyset$. We get input $a$. Name it $a_1$ and let $\Delta_1 = \{(a_1, (1, +)) : a\}$.

2. $\neg a$ comes in. Name it $a_2$. It is inconsistent with $a$, so it excludes it. This gives:
$\Delta_2 = \{(a_1, (1, +, -)) : a,\ (a_2, (2, +)) : \neg a\}$.

3. A third input $c$, inconsistent with $\neg a$, enters. Name it $a_3$. It is inconsistent with $\neg a$, but we want to restore $a$. Thus:
$\Delta_3 = \{(a_1, (1, +, -, +)) : a,\ (a_2, (2, +, -)) : \neg a,\ (a_3, (3, +)) : c\}$.

Here $a$ is reinstated at stage 3. The labelling allows us to remember what was rejected and when.
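The double-labelling convention can be made concrete with a small data structure. The following Python sketch is our own illustration (the class and helper names are not part of the paper's formalism): each item records its atomic name, its arrival stage and the sequence of +/- verdicts, and the script replays the three-stage reinstatement example above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    name: str                 # atomic resource label, e.g. "a1"
    wff: str                  # the formula, kept as an opaque string here
    arrival: int              # stage t at which the wff was input
    history: List[str] = field(default_factory=list)   # one "+" or "-" per stage >= t

    def active(self) -> bool:
        # an item is active at the current stage if its latest verdict is "+"
        return bool(self.history) and self.history[-1] == "+"

def new_stage(db: List[Item], accepted: List[str]) -> None:
    """Extend every history label by one verdict: '+' if the item is kept
    active at the new stage, '-' otherwise."""
    for item in db:
        item.history.append("+" if item.name in accepted else "-")

# Replay of the reinstatement example above.
db: List[Item] = [Item("a1", "a", 1, ["+"])]          # Delta_1: input a, accepted
db.append(Item("a2", "not a", 2))                     # Delta_2: not-a arrives ...
new_stage(db, accepted=["a2"])                        # ... a is excluded, not-a accepted
db.append(Item("a3", "c", 3))                         # Delta_3: c arrives, conflicts with not-a
new_stage(db, accepted=["a1", "a3"])                  # not-a rejected, a reinstated

for item in db:
    print(item.name, item.wff, item.arrival, item.history)
```

Running the sketch prints the histories $(1,+,-,+)$, $(2,+,-)$ and $(3,+)$, matching $\Delta_3$ above.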

Consider again the interim report $\Delta_n$. How are we going to present it? If the inputs were $q_1, \ldots, q_i, \ldots, q_n$, then $\Delta_n$ is the subsequence of formulas which are in the database at stage $n$, i.e. which have $+$ in the last position of their history labels. We can present $\Delta_n$ as that subsequence, but we prefer to include the full labelling with the full information. Thus $\Delta_n$ is presented as the list of all the wffs, both in and out at stage $n$:

$\Delta_n = \{(a_1, h_1) : q_1, \ldots, (a_n, h_n) : q_n\}$,

where $(a_i, h_i) : q_i$ means that $a_i$ is the information label of $q_i$ and $h_i$ gives the revision history of $q_i$. We call the label $(a_i, h_i)$, with $h_i = (t, s_t, \ldots, s_n)$, active at $m$ if $+$ (as opposed to $-$) occupies the position of stage $m$ in the sequence.

If we use our labels in the above form, we need to indicate how to propagate labels when we do deduction. The general modus ponens rule for LDS has the form:

$$\frac{\alpha : A \to B \qquad \beta : A \qquad \varphi(\alpha, \beta)}{f(\alpha, \beta) : B}$$

$\varphi$ is a compatibility predicate between labels (if it does not hold we cannot execute the modus ponens). $f(\alpha, \beta)$ gives the new label of $B$ as a function of the labels of $A \to B$ and $A$.

To simplify, we assume our tagging labels $a_i$ are atomic and so they accumulate, and the tagging part of $\varphi$ always holds in this case. We accumulate also our history labels in $f$. Compatibility for history labels means being active at the same time. Thus we let:

$f((a_i, h_i), (a_j, h_j)) = (a_i * a_j, (h_i, h_j))$.

We accumulate tagging labels by concatenation and accumulate history labels as pairs $(h_i, h_j)$. This is just a convenient notation. We now need to explain when a pair of history labels is active. We already said that $(t, s_t, \ldots, s_n)$ is active at all points $m$ for which $+$ occupies position $m$ in the sequence. Let us define by induction that $(h_i, h_j)$ is active at all times $m$ such that both $h_i$ and $h_j$ are active at $m$. We define $\varphi(\alpha, \beta)$ to mean that $\alpha$ and $\beta$ have some common active time.

We now need to explain how this labelling changes when facing a new input. We explain it by example and, simplifying, we assume we get one input formula at each stage. Suppose the database $\Delta_n$ has the form $\{(a_1, h_1) : q_1, \ldots, (a_n, h_n) : q_n\}$. Under the assumption that at each stage only one well formed formula comes as input, and that formulas are tagged by different atomic names, then $h_i$ has the form $(i, s_i, s_{i+1}, \ldots, s_n)$, where $a_i$ is an atomic name and $s_m$, $i \le m \le n$, indicates whether $q_i$ was accepted in $\Delta_m$ or not.

Suppose now that $q_{n+1}$ comes in as a new input. Assume now that our revision policy accepts, together with $q_{n+1}$, a subset of the old data as the new consistent database (interim report). Then in the new database $\Delta_{n+1}$ we have the wffs $(a_i, (i, s_i, \ldots, s_n, s_{n+1})) : q_i$, for $1 \le i \le n$, and $(a_{n+1}, (n+1, +)) : q_{n+1}$, where $s_{n+1} = +$ is used for the accepted cases of $q_i$ and $s_{n+1} = -$ otherwise. We will use a function to give us the new label, i.e. to take us from $(a_i, (i, s_i, \ldots, s_n))$ to the appropriate $(a_i, (i, s_i, \ldots, s_n, s_{n+1}))$.
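The label propagation just described can be illustrated with a short Python sketch (our own illustration; histories are simplified to sets of active stages rather than +/- sequences): the compatibility predicate checks for a common active moment, and the modus ponens conclusion is labelled by the pair of its premise labels.

```python
from typing import Set

def active_moments(label) -> Set[int]:
    """Active moments of a label: for an atomic label (name, stages) the stages
    marked '+'; for a pair of labels, the intersection of its components'."""
    if isinstance(label[0], str):            # atomic label: (name, frozenset of stages)
        return set(label[1])
    left, right = label                      # complex label: a pair of labels
    return active_moments(left) & active_moments(right)

def compatible(l1, l2) -> bool:
    # phi(l1, l2): the labels share at least one common active moment
    return bool(active_moments(l1) & active_moments(l2))

def modus_ponens(impl_label, ant_label):
    """f(l1, l2): the conclusion of A -> B and A is labelled by the pair
    (l1, l2), provided the compatibility predicate holds."""
    if not compatible(impl_label, ant_label):
        raise ValueError("no common active time: modus ponens not executable")
    return (impl_label, ant_label)

# a1 : A -> B is active at stages {1, 2, 3}; a2 : A is active at {2, 3}
label_b = modus_ponens(("a1", frozenset({1, 2, 3})), ("a2", frozenset({2, 3})))
print(active_moments(label_b))   # {2, 3}: B is derivable exactly where both premises are active
```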

3 Formal presentation of the model

Let $L$ be a finitary propositional language with $\to$ and $\bot$. $A, B, C, \ldots$ are wffs, i.e., data or beliefs. We can base our logic on Johansson's minimal logic, with an intuitionistic $\to$ and with an arbitrary absurd constant $\bot$ with no special axioms for it. This means that $\bot$ does not satisfy the axiom $\bot \to A$, for every $A$ in $L$. If we add this axiom it would become an intuitionistic negation. Negation is instead defined as $\neg A =_{\mathrm{def}} A \to \bot$.

Here $\bot$ does not necessarily mean falsity, but rather is an atomic symbol which we do not want to derive. We use it to indicate inconsistency conditions in the form $A \to \bot$, which means that $A$ should not be derivable. Similarly, with $A \to (B \to \bot)$ we indicate that $\{A, B\}$ is inconsistent. We can also specify integrity constraints of the form $A_1 \to (A_2 \to \cdots (A_k \to \bot) \cdots)$, meaning that $A_1, \ldots, A_k$ should not be jointly derivable. We assume that integrity constraints are not to be revised in time and therefore they are specified in the initial theory $\Delta_0$.

DEFINITION 3.1 (Labels and histories)

1. Let $S$ be an infinite set of atomic labels. The elements of $S$ are referred to as atomic resource labels. We also allow variable labels $x, y, \ldots$.

2. Let $H$ be a set of sequences of the form $(t, s_t, s_{t+1}, \ldots, s_n)$, where $t \le n$ and each $s_i$ ($t \le i \le n$) is either $+$ or $-$. We say $t$ is the starting time of the sequence, $n$ is the current time of the sequence, and the set $\{m : s_m = +\}$ is the set of active moments of the sequence. It will be convenient for the proof theory also to allow for variable history labels, which are active at the current time only.

3. An atomic label has the form $(a, h)$, with $a \in S$ and $h \in H$. Let Active$((a, h))$ be the set of active moments of $h$.

4. Complex labels are defined by induction:

(i) Atomic labels are complex.

(ii) If $\alpha$ and $\beta$ are complex labels, then $(\alpha, \beta)$ is a complex label. Assuming Active$(\alpha)$ and Active$(\beta)$ are defined, let Active$((\alpha, \beta)) = \mathrm{Active}(\alpha) \cap \mathrm{Active}(\beta)$. Active gives the times at which the data with these labels are active.

DEFINITION 3.2 (Databases)

1. A database at time $n$ has the form $\Delta_n = \{(a_1, h_1) : q_1, \ldots, (a_n, h_n) : q_n\}$, where each $a_i$ is an atomic name, each $h_i$ is a history label and each $q_i$ is a wff of $L$. Note that $q_1, \ldots, q_n$ are the inputs at stages $1, \ldots, n$. We assume the starting time of $h_i$ is $i$ and the end time is $n$. The labels are all pairwise different.

2. We say $\Delta_n$ is consistent if for each $m$, $1 \le m \le n$, the set $\{q_i : h_i \text{ is active at } m\}$ is consistent in $L$ with the integrity constraints of $\Delta_0$.

3. For $n = 0$ the initial database contains the integrity constraints. They are always active and we shall see later that it is convenient to label them by $(\emptyset, \emptyset)$, where $\emptyset$ is a special symbol, the empty label.

DEFINITION 3.3 (Input and revision)

Let $\Delta_n$ be a consistent database and let $q_{n+1}^1, \ldots, q_{n+1}^k$ be all the new input formulas. Let $\Gamma_{n+1}$ be a consistent selection from the active part of $\Delta_n$ together with $q_{n+1}^1, \ldots, q_{n+1}^k$. The selection is done by our revision policy, to be described in a later section (see the beginning of Section 4). We define a new database $\Delta_{n+1}$ as follows ($\Delta_{n+1}$ depends on $\Gamma_{n+1}$):

$\Delta_{n+1} = \{(a_1, h_1') : q_1, \ldots, (a_n, h_n') : q_n\} \cup \{(a_{n+1}^1, h_{n+1}^1) : q_{n+1}^1, \ldots, (a_{n+1}^k, h_{n+1}^k) : q_{n+1}^k\}$,

where the following holds:

1. $a_{n+1}^j$ is a new atom from $S$, and $h_{n+1}^j = (n+1, +)$ if $q_{n+1}^j$ is in $\Gamma_{n+1}$ and $h_{n+1}^j = (n+1, -)$ otherwise, for $1 \le j \le k$. All the new atoms are pairwise different.

2. For each $i$, $1 \le i \le n$, $h_i'$ extends $h_i$ by $+$ if $q_i$ is in $\Gamma_{n+1}$ and by $-$ otherwise. Note that $\Delta_{n+1}$ depends on $\Gamma_{n+1}$, and $\Gamma_{n+1}$ depends on our controlled revision policy.

DEFINITION 3.4 (Hypothetical input)

Let $\Delta_n$ be a database. Suppose we want to prove from it that $B$ follows at stage $m$. This means that $B$ can be proved from all the data which are active at $m$. To show $A \Rightarrow B$, we need to add (input) $A$ into the database and prove $B$. We therefore need to define the hypothetical input $\Delta_n + (m : A)$, using a new variable label $x$ to name the input $A$. We let $\Delta_n + (m : A)$ be defined as:

$\Delta_n + (m : A) = \Delta_n \cup \{(x, h) : A\}$, where $h$ is active at $m$ only.

It is convenient to introduce variable history labels which are active only at the current stage. Thus we can write $\Delta_n + A = \Delta_n \cup \{(x, h) : A\}$, and thus we need not know what the stage of $\Delta_n$ is.

REMARK 3.5

1. Note that the result of the hypothetical input of Definition 3.4 may be inconsistent. Given $\Delta_n$, $\Delta_n + A$ simply puts $A$ in with a new name and a variable label active at $m$. So the set of all wffs active at $m$ may now be inconsistent.

2. This same procedure may be used for testing inconsistency. Suppose we are at stage $n$ and new inputs $q_{n+1}^1, \ldots, q_{n+1}^k$ come to us. How can we tell whether they are consistent with $\Delta_n$? We can put them in with new test labels, making them all active at time $n+1$, and use the proof theory to test for consistency. If they are consistent we can input them properly. Otherwise we define $\Gamma_{n+1}$ using our revision algorithm. All of this will be explained later.

DEFINITION 3.6 ($\to$ rules)

Let $\Delta_n$ be a database. We want to define the notion of $\Delta_n \vdash_m \gamma : A$. First we need to define the elimination and introduction rules on which $\vdash$ depends. Assume $\mathcal{A}$ is a labelling algebra for the language with $\to$ and $\bot$, with compatibility relation $\varphi$. Assume an Exit function on the algebra and a binary function $f$. Then the $\to$ elimination and introduction rules for this algebra are as follows:

$\to$E: from $\alpha : A \to B$ and $\beta : A$, with $\varphi(\alpha, \beta)$, derive $f(\alpha, \beta) : B$.

$\to$I: To show $\gamma : A \to B$, assume $x : A$ for a new variable $x$ and prove $\delta : B$ for a label $\delta$ depending on $x$, such that $\mathrm{Exit}(x, \delta) = \gamma$.

In our example, since our labelling is a resource labelling and $f(\alpha, \beta) = (\alpha, \beta)$, the Exit function $\mathrm{Exit}(x, \delta)$ is equal (by definition) to the result of deleting $x$ from $\delta$.¹ Assuming the above, here are the rules for our Controlled Revision:

$\to$E: from $(\alpha, h_1) : A \to B$ and $(\beta, h_2) : A$, where $h_1$ and $h_2$ are both active at $m$, derive $((\alpha, h_1), (\beta, h_2)) : B$.

$\to$I: To show $\gamma : A \to B$, assume $(x, h) : A$ with new variables $x$ and $h$ and prove $\delta : B$, with $\gamma$ the result of deleting $(x, h)$ from $\delta$.

DEFINITION 3.7 (Proof theory)

We now specify when (at a certain stage $m$) a set of wffs $\Delta$ proves $A$, with maximal number of applications of $\to$-introduction and maximal $\to$-eliminations, for $A$ a wff or $\bot$. We give the integrity constraints in $\Delta_0$ the empty labels $(\emptyset, \emptyset)$, with the understanding that $\emptyset * a = a$ and $(\emptyset, h) = h$. The members of $\Delta_0$ are always active. We use the notation $\Delta \vdash_m (\alpha, h) : A$ (for $m \ge 0$):

1. $\Delta \vdash_m (\alpha, h) : A$ if $(\alpha, h) : A \in \Delta$ and $h$ is active at $m$.

2. $\to$-elimination rule: if $\Delta \vdash_m (\alpha, h_1) : A \to B$ and $\Delta \vdash_m (\beta, h_2) : A$ and both $h_1$ and $h_2$ are active at $m$, then $\Delta \vdash_m ((\alpha, h_1), (\beta, h_2)) : B$.

3. $\to$-introduction rule: $\Delta \vdash_m \gamma : A \to B$ if $\Delta \cup \{(x, h) : A\} \vdash_m \delta : B$, where $x$ and $h$ are new variables and we assume the compatibility and Exit conditions hold as required.

The logic we have defined so far is a logic with only $\to$ and $\bot$. However, the mechanism of Controlled Revision that we will soon define is applicable to any logic defined by LDS.

¹The deletion process can be formally defined as follows. Let $\emptyset$ denote the empty label, satisfying $(\emptyset, \alpha) = (\alpha, \emptyset) = \alpha$. Then: (1) deleting $x$ from the atomic label $x$ gives $\emptyset$; (2) deleting $x$ from an atomic label different from $x$ leaves it unchanged; (3) deleting $x$ from a pair $(\alpha, \beta)$ gives the pair of the results of deleting $x$ from $\alpha$ and from $\beta$.

DEFINITION 3.8 (Inconsistent database)

Let $E$ be a set of labels active at some $m$. We say that $\Delta$ is $E$-inconsistent if for some label $(\alpha, h)$ built from labels in $E$ we have $\Delta \vdash_m (\alpha, h) : \bot$.²

Assume we are at stage $n$ and we have a consistent theory $\Delta_n$. We get an input $q_{n+1}$. What do we do? First we perform a consistency check: we insert the input with a new temporary name and a label active at $n+1$. Call the result $\Delta_n^+$. Because of future references to this construction, we shall call it the temporary input of a formula into $\Delta_n$, forming $\Delta_n^+$, to test for consistency. See item 2 of Remark 3.5. If this is consistent, then we proceed as in Definition 3.3, with $\Gamma_{n+1}$ equal to the set of active elements of $\Delta_n$ together with $q_{n+1}$ (of course $q_{n+1}$ will get a new label in $\Delta_{n+1}$ as specified in Definition 3.3). If $\Delta_n^+$ is inconsistent, this means $\Delta_n^+ \vdash (\alpha, h) : \bot$ for various labels $(\alpha, h)$. The labels tell us the possible candidate subsets of $\Delta_n^+$ to be rejected (i.e. to be made inactive in the newly revised $\Delta_{n+1}$) in order to make $\bot$ not derivable. The task of the Controlled Revision algorithm is to select the best option. We can use the labels to identify and choose these options. For simplicity assume inconsistency means proving $\bot$ with any label.

First let us note that it is possible to have different proofs for a formula. For example, a formula might be derived by modus ponens from different formulas, and it might also be obtained as a past input. Labels contain all such information: if the same formula can be proved with different complex labels, this means that it is proved from possibly different data and/or in different ways. Rejecting one of these proofs may not block other derivations in which the de-activated data do not appear. The labels used in Controlled Revision keep track of the history of each proof. Suppose, for example, that at a certain stage $n$ the wff $C$ can be obtained in several ways ($C$ is both an independent input with label $(a_3, h_3)$ and the result of a $\to$E rule with label $((a_1, h_1), (a_2, h_2))$ from $(a_1, h_1) : B \to C$ and $(a_2, h_2) : B$), and let $(a_4, h_4) : \neg C$ be in $\Delta_n$. Then we will have that:

$\Delta_n \vdash_n ((a_4, h_4), (a_3, h_3)) : \bot$
$\Delta_n \vdash_n ((a_4, h_4), ((a_1, h_1), (a_2, h_2))) : \bot$

DEFINITION 3.9 (Inconsistent set of labels)

Let $\{\gamma_1, \ldots, \gamma_r\}$ be a set of labels active at $n$, each built from labels for names and labels for the history. If $\Delta_n \vdash_n \gamma_i : \bot$ for each $i$, we say that $\{\gamma_1, \ldots, \gamma_r\}$ is a set of labels showing the inconsistency.

To maintain consistency we must block all the proofs indicated by these labels. This means throwing out some of these data. The labels tell us exactly how many times and in what form the data were used to derive $\bot$. Also we know the history of the data. This information can be used by our revision algorithm to decide which item to reject.

To this purpose we define an option set of labels as follows:³

²Note that the notion of inconsistency discussed so far was that $\bot$ is provable with any label. Definition 3.8 allows us to refine this notion. We may consider a database inconsistent provided it proves $\bot$ with only certain labels but not any label; for example from active labels which have always been active, but not from active labels which have been inactive in the past! In fact we can make $E$ dependent on the stage $n$.

³This definition calls to mind Reiter's hitting sets [29]. See also [32] for the relation between hitting sets and Hansson's kernel sets.

DEFINITION 3.10 (Option sets of labels)

For $\Delta_n^+$ inconsistent, we know that $\Delta_n^+ \vdash_n \gamma_i : \bot$ for some labels $\gamma_1, \ldots, \gamma_r$. Let $N_i$ be the set of all atomic names from $S$ occurring in $\gamma_i$, and let $\mathcal{N}$ be the collection of all the sets $N_i$. An option set for $\mathcal{N}$ is a set $O \subseteq \bigcup \mathcal{N}$ such that $O \cap N_i \neq \emptyset$ for every $i$, and such that this condition does not hold for any proper subset of $O$. If we de-activate all data with labels in $O$, then all proofs of $\bot$ at $n$ will be blocked.

In the above example, the option sets of labels are $O_1 = \{a_4\}$, $O_2 = \{a_1, a_3\}$ and $O_3 = \{a_2, a_3\}$. Every option set of labels $O_j$ ($1 \le j \le k$) identifies a set $\Theta_j$ ($1 \le j \le k$) whose elements are the labelled formulas with names in $O_j$. We then have that $\Theta_1 = \{(a_4, h_4) : \neg C\}$, $\Theta_2 = \{(a_1, h_1) : B \to C,\ (a_3, h_3) : C\}$ and $\Theta_3 = \{(a_2, h_2) : B,\ (a_3, h_3) : C\}$.

Having chosen which set $\Theta_j$ to reject from the active part of $\Delta_n^+$, we can now try to reinstate some formulas rejected at previous stages. Here our Controlled Revision policy can again have a say. We may decide to reinstate first from rejections at $n-1$, then at $n-2$, etc., or we can use some other criteria on all rejected formulas at all previous stages. The next section gives us some options.
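Definition 3.10 asks for minimal sets of atomic names that intersect every name set coming from a proof of $\bot$; footnote 3 relates this to Reiter's hitting sets. A brute-force Python sketch (our own helper, adequate only for the small sets arising in the examples) is:

```python
from itertools import combinations
from typing import FrozenSet, List, Set

def option_sets(name_sets: List[Set[str]]) -> List[FrozenSet[str]]:
    """Minimal hitting sets of `name_sets`: each returned set meets every
    name set, and no proper subset of it does.  Every name set collects the
    atomic names occurring in one proof of the absurdity constant."""
    universe = sorted(set().union(*name_sets))
    minimal: List[FrozenSet[str]] = []
    for size in range(1, len(universe) + 1):
        for cand in combinations(universe, size):
            cset = frozenset(cand)
            if any(found <= cset for found in minimal):
                continue                        # a smaller hitting set is already inside it
            if all(cset & ns for ns in name_sets):
                minimal.append(cset)
    return minimal

# The example of Definition 3.10: bottom is provable from the names {a3, a4}
# and from the names {a1, a2, a4}.
print(option_sets([{"a3", "a4"}, {"a1", "a2", "a4"}]))
# three option sets: {a4}, {a1, a3}, {a2, a3}
```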

4 Policies for inconsistency

In the discussion at the end of the previous section, we have suggested the following outline for an algorithm. Given $\Delta_n$ and an input $q_{n+1}$, consider $\Delta_n^+ = \Delta_n + q_{n+1}$ and test for consistency. If $\Delta_n^+$ is consistent, put $q_{n+1}$ into $\Delta_{n+1}$ as described in Definition 3.3, with $\Gamma_{n+1}$ being what is active in $\Delta_n$ together with $q_{n+1}$. If $\Delta_n^+$ is not consistent, then use a selection algorithm to reject some of the active elements of $\Delta_n^+$ and let $\Gamma_{n+1}$ be what remains. We can now add to $\Gamma_{n+1}$ some of the previously rejected formulas, using a reinstating algorithm, to form a new $\Gamma_{n+1}$ which we will now use to define $\Delta_{n+1}$ as done in Definition 3.3.

We see that we need to specify two algorithms: the selection algorithm and the reinstating algorithm. This section will give an example to illustrate some of our options for these algorithms, as well as a proposal for such algorithms.⁴ It is important to emphasize that, depending on the applications, different policies can be adopted. In the following example we will show that it is often desirable to have a combination of different pure options. The first policy we want to consider is a very common principle in belief revision, namely, that new information receives the highest priority. This principle affects both the selection and the reinstatement algorithms: the selection must not throw away $q_{n+1}$, and the reinstatement works backwards through $q_n, q_{n-1}, \ldots$. We call this policy option the input priority option. If we accept this policy, the record of the history of every formula gives a priority ordering on the information in the database. We will show that this gives undesirable (in the sense of arbitrary) results.⁵

If we accept this, we can reject one of two labelled conflicting formulas just by comparing when they first came into $\Delta$: the more recent one is preferred. To show that, using only this pure policy, we can end up with counterintuitive revisions, we assume that a linear ordering exists on the names such that $a_i < a_j$ for every pair of natural numbers $i, j$ with $i < j$. Furthermore, the problem to address is how we can select a revision over another when the criteria are all partial. We then need a way of combining all partial options in a unique algorithm. Controlled Revision can do exactly this: take into consideration several parameters and provide a unique tool for them. In this section we assume for simplicity that we receive one input at a time.

⁴Inconsistency-management is also discussed in Gabbay and Woods [14, 13] and Woods [33].

⁵In a future paper we will show how this requires a deeper consideration of the AGM success postulate. The success postulate is problematic and has been discussed also in [2] and [7].

EXAMPLE 4.1

Suppose that the initial knowledge base is $\Delta_0$ (the integrity constraints) and the input stream is $q_1, q_2, q_3, q_4, q_5, q_6, \ldots$, one wff per stage.

After the first input we get:

$\Delta_1 = \{(a_1, (1, +)) : q_1\}$.

Given $\Delta_1$ and the new input $q_2$, consider $\Delta_1^+ = \Delta_1 + q_2$ to test for consistency. Since it is consistent we obtain:

$\Delta_2 = \{(a_1, (1, +, +)) : q_1,\ (a_2, (2, +)) : q_2\}$.

We now address the third input and let $\Delta_2^+ = \Delta_2 + q_3$. $\Delta_2^+$ can prove $\bot$: the new input, together with some of the data already in the database, is inconsistent, and the labels of the proofs of $\bot$ tell us which data are involved.

If we accept the principle that the highest priority has to be assigned to the most recent input, we should assume a linear ordering among labels such that $a_i < a_j$ whenever $i < j$. Therefore, to block the derivability of $\bot$, we should reject the least recent of the formulas involved and maintain the other data and the new input. The history labels are updated, the new input is provided with a new label, and the new database $\Delta_3$ records the rejected formula with $-$ in the last position of its history. Note that we can still formally prove $\bot$, but there is no active proof of $\bot$ at stage 3.⁶

At the fourth stage the new input $q_4$ does not cause inconsistency, so we provide the new information with a name and we update the history labels of all the other data in the database to move to $\Delta_4$. $\Delta_4$ is ready to receive new data. After the inputs $q_5$ and $q_6$, which are consistent, we get $\Delta_5$ and $\Delta_6$.

In $\Delta_6$ one of the formulas is provable with different labels, depending on how it was proved: an atomic name label says that it was obtained independently as an input, while a complex label says that it was obtained via modus ponens. In this way, if in a future stage one of the premises of that modus ponens is rejected, we can nevertheless maintain the formula.

To show that the input priority option (common in the literature on belief revision) is not as such the most desirable, and why we should also take into consideration the history of past revisions, let us consider now a further input $q_7$ coming into the set of data. We test for consistency by forming $\Delta_6^+ = \Delta_6 + q_7$ and find it to be inconsistent: $\Delta_6^+$ can prove $\bot$ in more than one way (to simplify, we do not write the history labels).

We have several options for removing the inconsistency:

1. Insistent input policy (which is the most natural for input priority policies). We keep the input $q_7$. To maintain consistency four options are available: the labels of the proofs of $\bot$ determine four option sets $\Theta_1, \Theta_2, \Theta_3, \Theta_4$ of data in $\Delta_6^+$ whose de-activation blocks all the proofs of $\bot$.

We want to define a preference ordering among the different options: $\Theta_i < \Theta_j$ intuitively means that we prefer to refuse $\Theta_i$ rather than $\Theta_j$. Which revision should we select? One option set contains formulas with lower priorities than the others but, since we also want to maintain as many data as possible, it might be preferable to reject one of the smaller option sets instead. It is clear then that, since policies such as input priority or minimal change do not receive a uniform treatment, it is not possible to decide on an ordering among the options. In our algorithm, parameters such as the persistence of a formula (how long a formula has been in the database) and the number of changes +/- in the history of each formula (in other developments of the model [27], a degree of reliability for each formula) determine the best subset of data to de-activate. Depending on which set we choose to reject, we may also be able to reinstate previously rejected data (we adopt the principle to reinstate as much as possible). It is important to observe that this intuitive principle is not considered in the traditional approaches to revision, since they do not record the history of beliefs.

2. Non-insistent input policy. Among the options to refuse, the Controlled Revision algorithm considers also rejecting the new input $q_7$, so that the choice between insistent and non-insistent input policy is automatized.

⁶It is interesting to compare this result from the perspective of a different revision policy: the compromise revision [9]. If $\Delta + A$ gives inconsistency, we try to maintain $\Delta$ and all the formulas whose presence does not imply inconsistency (even those whose proofs depend on $A$), and a new consistent set is defined. Compromise revision (which will not be used in the general model for Controlled Revision we want to introduce here, but can become one of the policies in future developments of the model) allows us to restore consistency by rejecting an inconsistency-causing formula $A$. At the same time it compromises and keeps some of the consequences derived from $A$, namely, all those formulas that require $A$ in their proof but do not lead to inconsistency.

Inputs conflicting with integrity constraints

We now discuss, through an example, the case where the input is inconsistent with the integrity constraints.

EXAMPLE 4.2

Let $n$ be a stage of the evolving database. There are basically two different kinds of conflicting inputs (let us assume that an input cannot itself be contradictory):

(a) A new input directly conflicts with some integrity constraint in $\Delta_0$. An example of direct conflict is the following:

integrity constraint: $A \to \bot$
input: $A$

(b) A new input indirectly conflicts with some of the integrity constraints in $\Delta_0$. An example is:

integrity constraint: $A \to \bot$
data: $B \to A$
input: $B$

Resolution of the conflict

We now propose a resolution strategy for the inconsistency arising from conflicting inputs in the above cases.

(a) When an input directly conflicts with an integrity constraint, we reject the input (it is commonly accepted that integrity constraints have the highest priority and do not change over time), even if we have an insistent input policy.

(b) In the case of indirect conflict, there are other formulas involved in the inconsistency besides the integrity constraints. Let the Controlled Revision algorithm decide what to reject.

Going back to the general case, where $\Delta_n^+$ is inconsistent and we have a choice of option sets $\Theta_1, \ldots, \Theta_k$ to reject: how do we choose between them? The following are some policy considerations.

1. The history of a formula $A$ allows us to know how many times $A$ was in or out of the databases $\Delta_t, \ldots, \Delta_n$. This is an important element to evaluate what we call the persistence priority: the longer a formula has been in, the more reliable it is. In Example 4.1 we have shown that the input priority relation alone gives inconclusive results, which can be corrected by considering the whole history of a formula.⁷

2. The number of formulas one should give up in each option (economic policy).

3. The number of changes +/- in the history of the data. It is clearly preferable to de-activate the data with the higher number of changes, because they are less 'stable'.

⁷It might be objected that such a policy can lead to counterintuitive results. Consider, for example, the case where $A$ has been active in the database since the initial stage and has always been active until a new input $\neg A$ is obtained at stage $n$. Why should we give higher priority to $A$ than to $\neg A$? A possible solution is to assign a reliability degree to each formula, i.e., a real number from the interval $[0, 1]$, as in [27].

REMARK 4.3

Let us suppose that one of the policies is the compromise revision and that $\Delta_n^+$ is inconsistent because of an input $q_{n+1}$. Assume $A$ is among the data to be rejected, and suppose that (for some reason) we want to retain a consequence $C$ of $A$. Assume that we are using the insistent input policy. Then we maintain $q_{n+1}$ and look for a $\Gamma_{n+1}$ which is consistent. Let us assume now that $\Gamma_{n+1}$ does not prove $C$. We could decide to keep $C$ by defining $\Gamma_{n+1}' = \Gamma_{n+1} \cup \{C\}$. The label of $C$ tells us that $C$ is derived from $A$. In a revision which accepts also the compromise revision (which we do not take into consideration in this general model for Controlled Revision), a further consideration would then be whether a formula is an earlier compromise.

5 The algorithms for CR

Let us suppose now that, at stage $n$, $\Delta_n^+$ is inconsistent. We want the algorithm to select a $\Theta_j$, $j \in \{1, \ldots, k\}$, among the possible subsets of $\Delta_n^+$ (recall that the sets of labelled formulas $\Theta_j$ are determined by the option sets of name labels $O_j$). We use $I_j$ for the set of indexes of the names in $O_j$. Clearly, the data in each $\Theta_j$ are only some of the formulas in $\Delta_n^+$. Among the $\Theta_j$ there are sets containing data already in $\Delta_n$ and a set with the new input. The algorithm decides to accept or to refuse the new input by comparing it with the other available options. It is not an a priori principle, as in AGM, where the new information always receives the highest priority.

The first step of the Controlled Revision algorithm sums up the number of stages during which the formulas in each $\Theta_j$ were in. We call this number the persistence of $\Theta_j$ and we write $p(\Theta_j)$:

$p(\Theta_j) = \sum_{(a_i, h_i) : q_i \in \Theta_j} |\{m : h_i \text{ is active at } m\}|$

In the example above, the persistences of the option sets can be read off the history labels; the option set containing only the new input has, in virtue of its 'hypothetical' persistence, just 1.

Let $P$ be $\{p(\Theta_1), \ldots, p(\Theta_k)\}$; the algorithm rejects the $\Theta_j$ such that $p(\Theta_j) = \min P$.

In the example above, our selection algorithm would then choose to reject the new input. If there is not a unique such $\Theta_j$, the algorithm rejects the set with the lowest cardinality.⁸ If two options also have the same cardinality, the algorithm looks into the history of the data of each option, counts the number of changes +/-, and rejects the option whose formulas have the greatest number of changes (the least stable data). In case two or more options are still equivalent, reject one of the $\Theta_j$, or use the tree-revision, which will be introduced in the next section.⁹

Summarizing:

1. When at stage $n$ a database $\Delta_n$ receives a new input $q_{n+1}$, add $q_{n+1}$ to $\Delta_n$ with temporary labels and obtain $\Delta_n^+$.

2. If $\Delta_n^+ \nvdash \bot$, then move to $\Delta_{n+1}$, providing the new input with appropriate labels and updating the history labels of all the other data in the database.

3. If the system verifies that $\Delta_n^+ \vdash \bot$, the selection algorithm is activated.

Algorithm of Controlled Revision

1. The option sets of labels $O_j$ are determined as defined in Definition 3.10, and from these the sets of labelled formulas $\Theta_j$ such that de-activating a $\Theta_j$ blocks all proofs of $\bot$.

2. For each $\Theta_j$ calculate $p(\Theta_j)$.

3. Among all the $\Theta_j$, de-activate the $\Theta_j$ such that $p(\Theta_j) = \min_j p(\Theta_j)$.

4. If there is not a unique $\Theta_j$ satisfying point 3, refuse the set $\Theta_j$ whose index set $I_j$ has the lowest cardinality.

5. If two or more $\Theta_j$ have the same cardinality, reject the $\Theta_j$ whose formulas have the greatest number of changes +/-.

6. If again there is not a unique $\Theta_j$ such that the above condition is satisfied, reject one of the $\Theta_j$, or use the tree-revision.

⁸This is the economic principle that motivates most of the postulates in AGM theory.

⁹With our algorithm the arbitrary rejection of a set appears only in a few cases or at the initial stages of a database. This was one of the most serious critiques of TMS where, in case of inconsistency, the process of dependency-directed backtracking could not exhibit satisfactory control in choosing the data to de-activate. Depending on the applications, the last step can be substituted with the non-insistent policy.
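A compact Python sketch of the cascade in steps 1-6 (our own rendering; the persistence, cardinality and change counts are assumed to have been computed from the labels, and the final random choice stands in for either arbitrary rejection or a switch to tree-revision):

```python
import random
from typing import Dict, List

def select_option(options: List[Dict]) -> Dict:
    """Pick the option set Theta_j to de-activate: lowest persistence, then
    lowest cardinality, then greatest number of +/- changes, then an
    arbitrary choice (or a switch to tree-revision)."""
    tied = [o for o in options if o["persistence"] == min(x["persistence"] for x in options)]
    if len(tied) > 1:
        tied = [o for o in tied if o["size"] == min(x["size"] for x in tied)]
    if len(tied) > 1:
        tied = [o for o in tied if o["changes"] == max(x["changes"] for x in tied)]
    return random.choice(tied)       # step 6: arbitrary rejection or tree-revision

# Each candidate records its formulas' names, total active stages (persistence),
# cardinality, and total number of +/- alternations in the histories.
options = [
    {"names": {"a7"}, "persistence": 1, "size": 1, "changes": 0},        # the new input
    {"names": {"a1", "a3"}, "persistence": 9, "size": 2, "changes": 1},
    {"names": {"a2", "a3"}, "persistence": 8, "size": 2, "changes": 2},
]
print(select_option(options)["names"])   # {'a7'}: the least persistent option is rejected
```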

Once consistency is restored, the system updates the history labels and, finally, reinstates as many data as possible in $\Delta_{n+1}$, as described in the following reinstatement algorithm.

Reinstatement algorithm

Assume we have $\Delta_1, \ldots, \Delta_n$. At each stage $i$, $1 \le i \le n$, we call $R_i$ the set of the formulas which were rejected from $\Delta_i$. Let $q_{n+1}$ be an input. If it is consistent with $\Delta_n$, then $\Delta_{n+1}$ is defined and $R_{n+1} = \emptyset$; in this case the formulas active at $n+1$ are $q_{n+1}$ together with the formulas active at $n$. If it is inconsistent (i.e. $\Delta_n^+$ is inconsistent), let $\Theta$ be the selection to be rejected (we do not care what selection algorithm was used; recall that the selection algorithm chooses one of the sets $\Theta_1, \ldots, \Theta_k$). We now have a consistent set, which we call $\Gamma_{n+1}$, comprised of $q_{n+1}$ and the wffs active at $n$ minus $\Theta$. There is now hope to reinstate some previously rejected formulas from $R_1, \ldots, R_n$. Now we can define our algorithm to reinstate. We first look at $R_n$ and we order its elements according to some principle taking into account their history, for example according to when they came as input. Then we try to reinstate the elements of $R_n$ one by one after checking for consistency (by forming $\Gamma_{n+1} + A$, as if they were new inputs temporarily added to the database to test for consistency). Then we proceed with the data in $R_{n-1}$ and so on until $R_1$. When we finish we get a consistent set which we will use to define $\Delta_{n+1}$.
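The reinstatement step can be sketched as follows (Python, with helper names of our own; `consistent` stands for the temporary-input consistency test of Remark 3.5, which we leave abstract):

```python
from typing import Callable, List, Set

def reinstate(gamma: Set[str],
              rejected_by_stage: List[List[str]],
              consistent: Callable[[Set[str]], bool]) -> Set[str]:
    """Add back previously rejected wffs one at a time, most recent rejections
    first, keeping a wff only if its temporary re-input leaves the selection
    consistent (the consistency test of Remark 3.5)."""
    current = set(gamma)
    for stage_rejections in reversed(rejected_by_stage):    # R_n, R_{n-1}, ..., R_1
        for wff in stage_rejections:                        # order within a stage is a policy choice
            if consistent(current | {wff}):
                current.add(wff)
    return current

# Toy usage: the oracle refuses any set containing both "p" and "not p".
def toy_consistent(s: Set[str]) -> bool:
    return not {"p", "not p"} <= s

print(reinstate({"q", "not p"}, [["p"], ["r"]], toy_consistent))   # {'q', 'not p', 'r'}
```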

6 The tree-revision

The tree-revision is activated when the algorithm cannot find a unique $\Theta_j$ satisfying the conditions at steps 3, 4 and 5 of the selection algorithm. Imposing a linear ordering relation on formulas (common in the literature on belief revision) sounds too strong an assumption. Tree-revision is an attractive proposal for a revision which, in case of 'equivalent' $\Theta_j$, waits for incoming inputs. Every option is then associated with a possible revision, waiting for new inputs. The elements of such a hierarchical structure are called nodes. The root node (the initial database $\Delta_0$) is the node occupying the top level of the hierarchy. Underneath the root, there can be multiple children. Every child may in turn be the parent of other children, thus branching like a tree. Nodes without children are leaf nodes. No node can have more than one parent.

We have previously defined a database under revision as a sequence like $\Delta_0, \Delta_1, \Delta_2, \ldots$. Tree-revision makes the structure more complex, since from a step $n$ multiple possible revisions can be considered, and we want to provide each node with its own name. The initial database is $\Delta_\varepsilon$, where $\varepsilon$ is the empty sequence. The databases after the $n$-th input are indexed by sequences of $n$ numbers. When a new input comes in, if there are several options, the node $\Delta_{m_1 \cdots m_n}$ gets children $\Delta_{m_1 \cdots m_n 1}$, $\Delta_{m_1 \cdots m_n 2}$, etc. Thus, if in our notation there is always exactly one option, the sequence becomes $\Delta_\varepsilon, \Delta_1, \Delta_{11}, \Delta_{111}, \ldots$, which in our old notation we would write $\Delta_0, \Delta_1, \Delta_2, \Delta_3, \ldots$. To make the two notations similar, we may simply write the stage number when there is no possibility of confusion.

Suppose that at the first stage we receive two inputs: $A$ and $A \to B$. They are consistent, so our knowledge is then $\Delta_1 = \{(a_1, (1, +)) : A,\ (a_2, (1, +)) : A \to B\}$. At the next step, we receive a new piece of evidence, $\neg B$. If we choose the insistent input policy, we have two possible revised sets which consistently include $\neg B$:

$\{A, \neg B\}$ and $\{A \to B, \neg B\}$.

The data in $\Delta_1$ have the same history, so the algorithm cannot select a unique solution to resolve the inconsistency. In case of equivalent revision options, we can either reject one of the candidates, or we can wait for the next one or two inputs (to avoid complexity). If $\Delta_n^+$ is inconsistent and there are $k$ option sets of labels to revise $\Delta_n^+$, we represent the database options $\Delta_{n.1}, \ldots, \Delta_{n.k}$ as branches of a tree. Each $\Delta_{n.j}$ ($1 \le j \le k$) corresponds to a $\Theta_j$ ($1 \le j \le k$), i.e., reject the wffs in $\Theta_j$. We recall that each $\Theta_j$ contains the labels of the wffs to de-activate in order to restore consistency. We call these wffs option formulas. When a new input comes, it is added to each branch, and the revision machinery starts checking for inconsistencies.

We explain this using the above example. Our set $\Delta_1$ is:

$\Delta_1 = \{(a_1, (1, +)) : A,\ (a_2, (1, +)) : A \to B\}$.

Suppose that $\neg B$ comes as a new input and we want to accept it. $\Delta_1^+$ is inconsistent and we have two options to restore consistency (maintaining the new information). We assign a new name to the input, we make it active at stage 2 and we consider the following two options:

$\Delta_{2.1} = \{(a_1, (1, +, -_{2.1})) : A,\ (a_2, (1, +, +_{2.1})) : A \to B,\ (a_3, (2, +_{2.1})) : \neg B\}$

$\Delta_{2.2} = \{(a_1, (1, +, +_{2.2})) : A,\ (a_2, (1, +, -_{2.2})) : A \to B,\ (a_3, (2, +_{2.2})) : \neg B\}$

$\Delta_{2.1}$ and $\Delta_{2.2}$ mean that the input added at stage 2 caused inconsistency and the algorithm could not decide a priority over the two option sets. So at stage 2 we have two possible revised databases, one is $\Delta_{2.1}$ and the other is $\Delta_{2.2}$. In the first one $A$ is rejected, while in $\Delta_{2.2}$ we reject $A \to B$. $A$ and $A \to B$ are option formulas. We keep a record of the branching also in the history labels: $-_{2.1}$ means the formula is rejected in the first branch of the second step. Similarly, $+_{2.1}$ means the formula is active in the first branch of the second step. We also need to specify that a database $\Delta_x$ actively proves a formula when all the formulas involved in the proof are active at $x$. Let us consider the following example to explain these notions.

EXAMPLE 6.1

Let us recall the scenario in [26]:

Assume that a suspect tells you that he went to the beach for swimming and assume that you have observed that the sun was shining. Further, you firmly believe that going to the beach for swimming when the sun is shining implies a sun tan. If you then discover that the suspect is not tanned, there is an inconsistency to resolve. ([26], p. 57)

Let:

B = "going to the beach for swimming"
S = "the sun is shining"
T = "sun tan"

After the first step, our database would then be:

$\Delta_1 = \{B,\ S,\ B \to (S \to T)\}$.

In his example, Nebel already has in mind a prioritized base $\mathcal{B} = \mathcal{B}_1 \cup \mathcal{B}_2 \cup \mathcal{B}_3$, in which the rule $B \to (S \to T)$ is more entrenched than the observation $S$, which in turn is more entrenched than the suspect's statement $B$. Then, revising by $\neg T$ we will infer that the suspect is a liar.

But do the prioritized base approaches provide a mechanism to modify the ordering after a revision process has been run? Unfortunately, they do not. As noticed also in [7], the effect of a revision operator may change over time. Friedman and Halpern also claim that a revision operator can be defined as a function only if the language is "rich enough to express relative degrees of strength in beliefs". That is something we can certainly agree on, but how can the degrees of strength in our beliefs be decided? In this example we are a detective who wants to find the person guilty of a terrible murder. The database we are considering is only a piece of information regarding one of the suspects (Mr X) and we are assessing his alibi. Labelling the data as usual:

$\Delta_1 = \{(a_1, (1, +)) : B,\ (a_2, (1, +)) : S,\ (a_3, (1, +)) : B \to (S \to T)\}$

Then:

$\Delta_1 \vdash ((a_3, h_3), (a_1, h_1)) : S \to T$
$\Delta_1 \vdash (((a_3, h_3), (a_1, h_1)), (a_2, h_2)) : T$

When we observe that Mr X is not tanned ($\neg T$), we add the new input with a temporary label $(x, h)$ and we have:

$\Delta_1^+ \vdash ((x, h), (((a_3, h_3), (a_1, h_1)), (a_2, h_2))) : \bot$

In order to restore consistency, the option sets are $\{a_1\}$, $\{a_2\}$, $\{a_3\}$ and, finally, the one containing the new input. Since we have no strong evidence for the guilt of Mr X, we decide to keep our mind open and search for other information. Formally, this is equivalent to saying that in order to accept $\neg T$ in $\Delta_2$ (we are in the insistent input policy) we are prepared to remove $B$, or $S$, or $B \to (S \to T)$, but we still do not know which one of these. We accept the new input, provide it with a new name and update the labels of the other formulas, making the options explicit. We can think of this as three branches departing from $\Delta_1$, each branch being a different option:

$\Delta_{2.1} = \{(a_1, (1, +, -_{2.1})) : B,\ (a_2, (1, +, +_{2.1})) : S,\ (a_3, (1, +, +_{2.1})) : B \to (S \to T),\ (a_4, (2, +_{2.1})) : \neg T\}$

$\Delta_{2.2} = \{(a_1, (1, +, +_{2.2})) : B,\ (a_2, (1, +, -_{2.2})) : S,\ (a_3, (1, +, +_{2.2})) : B \to (S \to T),\ (a_4, (2, +_{2.2})) : \neg T\}$

$\Delta_{2.3} = \{(a_1, (1, +, +_{2.3})) : B,\ (a_2, (1, +, +_{2.3})) : S,\ (a_3, (1, +, -_{2.3})) : B \to (S \to T),\ (a_4, (2, +_{2.3})) : \neg T\}$

After further careful searches, we are persuaded by some authoritative source that the sun was really shining on the day of the murder. Since $S$ is already in the database, we do not need a new name for it. But we want to mark that we got the same information again as an input at this step of the database, so we enter an additional $+$ into its labels. This provides a way to deal with iterated inputs of the same information. Intuitively, the repeated observation of the same fact increases the reliability of the fact itself. Also, when an inconsistency between two formulas (which have been active for the same length of time) arises, the one with more occurrences of $+$ in its history will be retained. This is called the repeated input policy and is a special feature of our model. It shows also that even a "beyond controversy" [30] axiom such as the vacuity postulate may not catch some important aspects of a revision process. The new evidence about the fact that the sun was shining leads us to close the $\Delta_{2.2}$ branch (because of the re-activation of the option formula $S$ in $\Delta_{2.2}$); the surviving branches move to stage 3 with their history labels updated:

$\Delta_{3.1} = \{(a_1, (1, +, -_{2.1}, -_{3.1})) : B,\ (a_2, (1, +, +_{2.1}, +_{3.1})) : S,\ (a_3, (1, +, +_{2.1}, +_{3.1})) : B \to (S \to T),\ (a_4, (2, +_{2.1}, +_{3.1})) : \neg T\}$

$\Delta_{3.3} = \{(a_1, (1, +, +_{2.3}, +_{3.3})) : B,\ (a_2, (1, +, +_{2.3}, +_{3.3})) : S,\ (a_3, (1, +, -_{2.3}, -_{3.3})) : B \to (S \to T),\ (a_4, (2, +_{2.3}, +_{3.3})) : \neg T\}$

Suppose now that, in talking to Mr X's doctor, we discover that he is affected by a rare skin illness and, after some research, we also discover that this kind of illness prevents him from tanning. Let:

I = "skin illness"

The input $I$ does not give inconsistency. But when we then add the formula expressing that the illness prevents tanning, one of the two surviving branches becomes inconsistent, while the other does not. The Controlled Revision algorithm rejects the data responsible for the inconsistency and reinstates what can consistently be reinstated, and this determines the closure of that branch. The new inputs will then be added only to the remaining branch. The new database is obtained by providing the last input with appropriate labels and updating the other history labels.

Whenever the Controlled Revision algorithm cannot find a unique solution for the inconsistency in $\Delta_n^+$, tree-revision is an alternative to random choice rejection (as in TMS). Whether to opt for one of the option sets or to wait for new inputs depends on factors such as the number of the $\Theta_j$. This kind of revision seems to capture a typical feature of practical reasoning according to which, when undecided, we keep our mind open while waiting for new information. In particular, the implicit risk in refusing any of the sets responsible for inconsistency is reflected in the idea of iterated revision. If previous revisions have some influence on future states of belief, rejecting the wrong set might compromise future revisions. This nicely captures Gärdenfors' pithy observation that information costs. Therefore, we should not only give up the least possible information, but it should also be the right information to give up!
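A minimal sketch of the tree-revision bookkeeping, in Python (the node naming by index sequences follows the convention above; everything else, including the closing test, is our own simplification and omits the per-branch consistency checks): each open branch keeps its own database together with the option formulas whose rejection created it, and a branch is closed when one of its option formulas is re-asserted as an input.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Branch:
    index: str                                   # node name, e.g. "2.1"
    active: Set[str]                             # wffs currently active on this branch
    option_formulas: Set[str] = field(default_factory=set)   # wffs rejected to open the branch
    closed: bool = False

def branch_on_tie(parent: Branch, step: int, options: List[Set[str]]) -> List[Branch]:
    """When the selection algorithm cannot pick a unique option set, open one
    child branch per option set, rejecting exactly that option's wffs."""
    return [Branch(index=f"{parent.index}{step}.{j}",
                   active=parent.active - rejected,
                   option_formulas=set(rejected))
            for j, rejected in enumerate(options, start=1)]

def add_input(branches: List[Branch], wff: str) -> None:
    """Add a new input to every open branch; a branch whose option formula is
    re-asserted is closed, since its defining rejection is no longer tenable."""
    for b in branches:
        if b.closed:
            continue
        if wff in b.option_formulas:
            b.closed = True        # the rejected formula came back: drop this branch
        else:
            b.active.add(wff)

# The detective example: after the input not-T, three branches are opened.
root = Branch(index="", active={"B", "S", "B->(S->T)", "not T"})
branches = branch_on_tie(root, step=2, options=[{"B"}, {"S"}, {"B->(S->T)"}])
add_input(branches, "S")          # S is observed again: the branch that dropped S closes
print([(b.index, b.closed) for b in branches])   # [('2.1', False), ('2.2', True), ('2.3', False)]
```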

7 Conclusion

We have proposed an algorithmic approach to the revision of inconsistent databases. We have shown the special role the history of past revisions plays in devising a specialized algorithm for Controlled Revision. In future work we explore the idea that aspects of non-monotonicity arise from the history of each formula and from the history of past revisions. We bring the present paper to a close with a brief explanatory sketch.

Imagine a revision policy allowing for compromise revision and reinstatement priority. At time $n$, the current database is $\Delta_n$. It is necessary to recognize the core part of $\Delta_n$, which is composed of all data except the compromise and reinstated wffs. We assume that in cases of inconsistency in the face of a new input, these are the first to be rejected. Write $\Delta \vdash A$ if $A$ logically follows from $\Delta$. Write $\Delta \mid\sim A$ if $\mathrm{core}(\Delta) \vdash A$ (i.e. the non-monotonic consequences of $\Delta$ are the consequences of its core).

We now check to see whether the axioms for the non-monotonic consequence relation hold. Assume input $B$. $\Delta_n$ is updated to $\Delta_{n+1}$, in the manner discussed in the paper. The core of $\Delta_{n+1}$ is obtained from the core of $\Delta_n$ and $B$. If we opt for input insistence, we get $\Delta_{n+1} \mid\sim B$. Clearly we may have $\Delta_n \mid\sim A$ but not $\Delta_{n+1} \mid\sim A$: $\vdash$ is monotonic, but $\mid\sim$ is not. However, if $\Delta_n \mid\sim B$, this means that $\mathrm{core}(\Delta_n) \vdash B$, hence that $\mathrm{core}(\Delta_n)$ together with $B$ is consistent. Whereupon $\mathrm{core}(\Delta_{n+1}) = \mathrm{core}(\Delta_n) \cup \{B\}$, and for all $A$, $\Delta_{n+1} \mid\sim A$ iff $\mathrm{core}(\Delta_n) \cup \{B\} \vdash A$ (which gives restricted monotonicity). This is reminiscent of the Gärdenfors and Makinson [24] observation that given a theory $T$ we can get a non-monotonic consequence relation by letting $A \mid\sim B$ iff $T \circ A \vdash B$.

Our conviction is that the dynamic dimension has been disregarded not only in belief revision, but also in other areas of knowledge representation. For example, one of the current major problems in artificial intelligence is the aggregation of several databases, i.e., the combination of pieces of knowledge from different sources. This is a distinct issue, but related to belief revision. In belief revision, we face the question of accepting or rejecting a new piece of information which can contradict our knowledge base. In the case of aggregation, we want to merge information from different sources. The difference is that none of these bases is the representation of our actual theory; we need to fuse all the bases to obtain a unique theory (we may think of building an expert system from a group of human experts, for example). Furthermore, the final merged base is not necessarily one of the initial bases. It is not possible to define a fusion operator from a revision operator without specifying how to combine the individual preferences. Collecting information from different sources often yields opposite information from equally reliable sources. Our proposal is to develop an approach which, in this case, waits for more information. The next inputs will inform us on which source provides a better picture of the world, so that we can update our confidence degree in that expert. We leave further investigation of these aspects and their application areas for a future paper [12].

8 Acknowledgments

We want to thank David Makinson, Peter McBurney and Odinaldo Rodrigues, who read a previous version of the paper, for their valuable comments and questions.

References

[1] C. Alchourrón, P. Gärdenfors and D. Makinson. On the logic of theory change: partial meet contraction and revision functions. Journal of Symbolic Logic, 50, 510–530, 1985.
[2] C. Boutilier, N. Friedman and J. Y. Halpern. Belief revision with unreliable observations. In AAAI 98, Madison, WI, pp. 127–134, 1998.
[3] A. Darwiche and J. Pearl. On the logic of iterated belief revision. Artificial Intelligence, 89, 1–29, 1997.
[4] J. Doyle. A truth maintenance system. Artificial Intelligence, 12, 231–272, 1979.
[5] J. Doyle. Reason maintenance and belief revision – foundations vs. coherence theories. In Belief Revision, P. Gärdenfors, ed., pp. 29–51. Cambridge University Press, 1992.
[6] N. Friedman and J. Y. Halpern. A knowledge-based framework for belief change, Part II: revision and update. In Principles of Knowledge Representation and Reasoning: Proc. Fourth International Conference (KR '94), J. Doyle, E. Sandewall and P. Torasso, eds., pp. 190–201. San Francisco, CA: Morgan Kaufmann, 1994.
[7] N. Friedman and J. Y. Halpern. Belief revision: a critique. In Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning (KR '96), pp. 421–431. Cambridge, MA, 1996.
[8] D. M. Gabbay. Labelled Deductive Systems, Vol. 1, Basic Theory. Oxford: Oxford University Press, 1996.
[9] D. M. Gabbay. Compromise update and revision: a position paper. In Dynamic Worlds – From the Frame Problem to Knowledge Management, R. Pareschi and B. Fronhöfer, eds., pp. 111–148, Applied Logic Series (12). Dordrecht and Boston: Kluwer Academic Publishers, 1999.
[10] D. M. Gabbay, O. Rodrigues and A. Russo. Revision by translation. In Information, Uncertainty and Fusion, B. Bouchon-Meunier, R. R. Yager and L. A. Zadeh, eds., pp. 3–31. Dordrecht and Boston: Kluwer Academic Publishers, 2000.
[11] D. M. Gabbay. Dynamics of practical reasoning. In Advances in Modal Logic, Vol. 2, M. Zakharyaschev, K. Segerberg, M. de Rijke and H. Wansing, eds., pp. 179–224. CSLI Publications, 2001.
[12] D. M. Gabbay, G. Pigozzi and J. Woods. Controlled Dynamic Fusion. To appear, 2002.
[13] D. M. Gabbay and J. Woods. Agenda Relevance: A Study in Formal Pragmatics, Volume 1 of The Practical Logic of Cognitive Systems. Amsterdam: North-Holland, 2003, to appear.
[14] D. M. Gabbay and J. Woods. The Reach of Abduction: Insight and Trial, Volume 2 of The Practical Logic of Cognitive Systems. Amsterdam: North-Holland, 2003, to appear.
[15] P. Gärdenfors. Knowledge in Flux: Modeling the Dynamics of Epistemic States. Bradford Books, The MIT Press, Cambridge, Massachusetts, 1988.
[16] P. Gärdenfors. The dynamics of belief systems: foundations vs. coherence theories. Revue Internationale de Philosophie, 44, 42–46, 1990.
[17] P. Gärdenfors and D. Makinson. Revisions of knowledge systems using epistemic entrenchment. In Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, pp. 83–96, 1988.
[18] P. Gärdenfors and H. Rott. Belief revision. In Handbook of Logic in Artificial Intelligence and Logic Programming, Vol. 4, D. M. Gabbay, C. J. Hogger and J. A. Robinson, eds. Oxford: Oxford University Press, 1995.
[19] S. O. Hansson. In defense of base contraction. Synthese, 91, 239–245, 1992.
[20] S. O. Hansson. Semi-revision. Journal of Applied Non-Classical Logics, 7, 151–175, 1997.
[21] H. Katsuno and A. Mendelzon. Propositional knowledge base revision and minimal change. Artificial Intelligence, 52, 263–294, 1991.
[22] H. Katsuno and A. Mendelzon. On the difference between updating a knowledge base and revising it. In Belief Revision, P. Gärdenfors, ed., pp. 183–203. Cambridge University Press, 1992.
[23] I. Levi. For the Sake of the Argument. New York: Cambridge University Press, 1996.
[24] D. Makinson and P. Gärdenfors. Relations between the logic of theory change and nonmonotonic logic. In The Logic of Theory Change, A. Fuhrmann and M. Morreau, eds., pp. 185–205. Volume 465 of Lecture Notes in Artificial Intelligence. Berlin: Springer-Verlag, 1991.
[25] D. Makinson. General patterns in nonmonotonic reasoning. In Handbook of Logic in Artificial Intelligence and Logic Programming, Vol. 3, Nonmonotonic Reasoning and Uncertain Reasoning, D. M. Gabbay, C. J. Hogger and J. A. Robinson, eds., pp. 35–110. Oxford: Oxford University Press, 1994.
[26] B. Nebel. Syntax-based approaches to belief revision. In Belief Revision, P. Gärdenfors, ed., pp. 52–88. Cambridge University Press, 1992.
[27] G. Pigozzi. La Revisione Controllata. Un nuovo modello per Belief Revision. Ph.D. dissertation, University of Genova, 2001.
[28] W. V. Quine and J. S. Ullian. The Web of Belief. New York: Random House, 1970.
[29] R. Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32, 57–95, 1987.
[30] H. Rott. Conditionals and theory change: revisions, expansions, and additions. Synthese, 81, 91–113, 1989.
[31] Y. Shoham and S. B. Cousins. Logics of mental attitudes in AI. In Foundations of Knowledge Representation and Reasoning, G. Lakemeyer and B. Nebel, eds., pp. 296–309. Vol. 810 of Lecture Notes in Computer Science (Subseries LNAI). Berlin: Springer-Verlag, 1994.
[32] R. Wassermann. An algorithm for belief revision. In Proceedings of the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2000). Los Gatos, CA: Morgan Kaufmann, 2000.
[33] J. Woods. Paradox and Paraconsistency: Conflict Resolution in the Abstract Sciences. Cambridge University Press, 2002.

Received November 2001