Conquering the Meredith Single Axiom
Larry Wos
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL
email: wos@mcs.anl.gov

May

Abstract

For more than three and one-half decades, beginning in the early 1960s, a heavy emphasis on proof finding has been a key component of the Argonne paradigm, whose use has directly led to significant advances in automated reasoning and important contributions to mathematics and logic. The theorems that have served well range from the trivial to the deep, even including some that corresponded to open questions. Often the paradigm asks for a theorem whose proof is in hand but that cannot be obtained in a fully automated manner by the program in use. The theorem whose hypothesis consists solely of the Meredith single axiom for two-valued sentential (or propositional) calculus, and whose conclusion is the Lukasiewicz three-axiom system for that area of formal logic, was just such a theorem. Featured in this article is the methodology that enabled the program OTTER to find the first fully automated proof of the cited theorem, a proof with the intriguing property that none of its steps contains a term of the form n(n(t)) for any term t. As evidence of the power of the new methodology, the article also discusses OTTER's success in obtaining the first known proof of a theorem concerning a single axiom of Lukasiewicz.

Keywords: automated reasoning, double negation, Meredith single axiom, propositional calculus, two-valued sentential calculus

This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.

The Original Problem

For many years, the Meredith single axiom for two-valued sentential (or propositional) calculus had successfully resisted a fully automated proof. Following is Meredith's axiom, expressed in clause notation.

P(i(i(i(i(i(x,y),i(n(z),n(u))),z),v),i(i(v,x),i(u,x))))

Although the automated reasoning program OTTER [McCune] was speedily able to proof-check what amounts to the original proof of the Meredith single axiom, no combination of strategies and parameter settings yielded a proof that was not strongly guided by knowledge of that original proof. Note that proof checking, especially when condensed detachment is the inference rule to be used, is far, far easier than proof finding; their natures differ sharply. Also of note, Meredith's original proof, in contrast to the cited proof, was not based strictly on condensed detachment; indeed, his proof relies in part upon substitution and in part on detachment. More generally, be warned that proof length, and "proof" itself, in the literature can be misleading, as the preceding illustrates.

The target chosen by Meredith, and featured in the research discussed here, was the Lukasiewicz three-axiom system [Lukasiewicz], expressed in clause notation.

P(i(i(x,y),i(i(y,z),i(x,z))))
P(i(i(n(x),x),x))
P(i(x,i(n(x),y)))

This article focuses on the conquering of the axiom, on the formulation of a methodology that yields a fully automated proof that begins with the Meredith single axiom and completes with the deduction of the three Lukasiewicz axioms. The methodology relies in no way on knowledge of the proof being sought. Moreover, as required by any methodology that is to be respected, its use led to a distinct and startling success, to be presented here, namely, completion of a proof that the Lukasiewicz 23-letter axiom is sufficient for the study of all of two-valued sentential (or propositional) calculus. Before the development of the methodology featured in this article, no proof of this single axiom was known. The factors that led to the conquest, and replaced the years of failure, are detailed. Also discussed in this article is the topic of shortening proofs and the resultant algorithm I was able to formulate for making such attempts. The algorithm, as explained in a later section, is the complement to the methodology featured here.

A Burning Question

Especially for the individual unfamiliar
with automated reasoning, a natural and even burning question immediately arises: Why spend time and effort trying to find a means for a program to prove a theorem whose proof is readily available? From a historical viewpoint, the answer rests with the continued practice of the Argonne group, namely, first identify a theorem whose proof is out of reach of our current program, and then wrestle with that theorem until a means is found to bring its proof within range. From the viewpoint of automated reasoning, the general principle asserts that progress is quite likely to occur when the arsenal of weapons is augmented to permit the proof of a theorem that had resisted automation. Whether the theorem has already been proved or simply conjectured to be true is not the crucial element, although the latter case produces more excitement.

One danger always exists: the new approach may be, consciously or unconsciously, tailored to the theorem under consideration. To be of interest, and to offer the strong possibility that an advance has occurred, such tailoring must be absent. Therefore, the supposed advance must be put to the test of applying it to similar theorems or, even better, to some not so similar. A number of the weapons now offered by a program such as OTTER can trace their birth to the study of a theorem whose proof was already in hand.

The Key Factors

Four factors appear to account for the success to be reported here in some detail. First, directly at the suggestion of my colleague Branden Fitelson, the heat parameter was assigned an unusually high value. Second, a series of lemmas, whose choice is in no way based on the theorem under attack, was included as targets that, when proved, were then added to the initial set of support for the next run, and the process was repeated until a proof of the desired theorem was obtained. Third, at some point, all of the proof steps of the already-proved lemmas, even those thought to be irrelevant to the task, were adjoined as resonators. Fourth, and indeed counterintuitive, the goal of finding a proof was replaced by the goal of finding a proof totally free of double negation. From what I know, and from what Fitelson has found, prior to the study reported here, no such proof for the Meredith single axiom had been offered. Before turning to the details of the methodology, I first offer a fuller treatment of each of the four factors.

Turning up the Heat

The first factor focuses on the use of the hot list strategy [Wos]. My usual approach is to set the input heat parameter to a small value. When the theorem under attack relies on a single axiom, however, a substantially larger value is recommended. Indeed, as my colleague Fitelson pointed out, if you examine a proof of a theorem of the type under discussion, you often find that a large number of its steps have as a hypothesis the single axiom. (He was studying the use of condensed detachment, with a heavy emphasis on the proof of the Meredith single axiom.) Moreover, you also frequently find that a sequence of such steps occurs consecutively, with the second based on the first, the third on the second, and the like. The two observations, taken together, suggested to him that an unusually high value assigned to the input heat parameter, coupled with the placement of the single axiom in the initial hot list, might enable OTTER to find such tightly connected sequences of steps. For example, if a retained clause C has heat level smaller than the input value, and if the initial hot list contains both the clause corresponding to the single axiom and that corresponding to condensed detachment, then, before leaving the scene, C would be used with condensed detachment and the single axiom to yield a new clause, C1. Similarly, C1 would be used to deduce C2 (assuming that C1 was retained), and the process could continue until the heat limit was reached. I have found that Fitelson's analysis exhibits excellent insight. Therefore, I recommend assigning a large value to the input heat parameter when the theorem under attack focuses on a single
axiom.

Adding to the Initial Set of Support

The second factor concerns amending the initial set of support for a subsequent run. You might, as I did, puzzle over the value of such a move if the resonance strategy were already in use and the resonators corresponding to the set of target lemmas were already present. Indeed, on the surface, if one of the target lemmas was deduced, say, in the first run, and its resonator was already present, then it would quite quickly become the focus of attention to initiate applications of the inference rules in use. Therefore, how could the program's attack be enhanced in the so-called second run by having such a lemma included in the input set of support? In particular, the two approaches would seem to be essentially equivalent. As Fitelson and I hazarded in a phone conversation, here is the probable difference, by example. If the target lemma is deduced and retained (rather than being present at the beginning of the run) at, say, a high clause number N, and is used to deduce a complex clause that is retained and given, say, the number N+1, then the clause numbered N+1 might never be chosen as the focus of attention for inference-rule initiation. Even with the presence of the ratio strategy and a value assigned to the pick_given_ratio, OTTER might never, or almost never, choose clause N+1 as the focus of attention. On the other hand, in a run in which the clause corresponding to the target lemma (numbered N in the preceding comments) is included in the initial set of support, the clause that would have been numbered N+1 might now receive a far smaller number and hence will be chosen.
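Throughout the article, condensed detachment is the inference rule in play: from a major premise P(i(A,B)) and a minor premise P(C), unify A with a variable-renamed copy of C and infer the corresponding instance of B. The following is a minimal sketch in Python; the term encoding, the function names (cd, unify, has_double_negation), and the constant MEREDITH are illustrative conventions assumed here, not OTTER's own code or data structures.

```python
# Terms: ('i', a, b) is implication, ('n', a) is negation; bare strings are variables.

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    # Follow variable bindings recorded in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return not is_var(t) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s):
    # Robinson unification with occurs check; returns extended substitution or None.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def subst(t, s):
    t = walk(t, s)
    if is_var(t):
        return t
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

def rename(t, suffix):
    # Rename every variable so the premises share no variables.
    if is_var(t):
        return t + suffix
    return (t[0],) + tuple(rename(a, suffix) for a in t[1:])

def cd(major, minor):
    """Condensed detachment: from i(A, B) and C, unify A with a renamed
    copy of C and return the corresponding instance of B (or None)."""
    if is_var(major) or major[0] != 'i':
        return None
    minor = rename(minor, "'")
    s = unify(major[1], minor, {})
    return None if s is None else subst(major[2], s)

def has_double_negation(t):
    # True if t contains a subterm of the form n(n(t')), the pattern the
    # article's methodology bans from every proof step.
    if is_var(t):
        return False
    if t[0] == 'n' and not is_var(t[1]) and t[1][0] == 'n':
        return True
    return any(has_double_negation(a) for a in t[1:])

# Meredith's single axiom: i(i(i(i(i(x,y),i(n(z),n(u))),z),v),i(i(v,x),i(u,x))).
MEREDITH = ('i',
            ('i', ('i', ('i', ('i', 'x', 'y'),
                         ('i', ('n', 'z'), ('n', 'u'))), 'z'), 'v'),
            ('i', ('i', 'v', 'x'), ('i', 'u', 'x')))
```

Applying cd(MEREDITH, MEREDITH) yields the first formula a condensed-detachment search can derive from the single axiom alone, and has_double_negation is the filter one would apply to every retained step when hunting for a proof free of double negation.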
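The chaining behavior described under "Turning up the Heat" can be mimicked with a toy given-clause loop. This is an abstract sketch under simplified assumptions, not OTTER's implementation: clauses are opaque values, the infer argument stands in for condensed detachment against a hot-list clause, and heat counts consecutive hot-list applications, so a child generated from a hot-list clause inherits heat one greater than its parent.

```python
from collections import deque

def saturate(sos, hot_list, infer, heat_limit, max_kept=100):
    # Each retained clause is immediately resolved against every hot-list
    # clause; offspring at or beyond heat_limit are retained but not expanded.
    kept = []
    queue = deque((c, 0) for c in sos)   # (clause, heat level)
    while queue and len(kept) < max_kept:
        clause, heat = queue.popleft()
        if clause in kept:
            continue
        kept.append(clause)
        if heat >= heat_limit:
            continue                     # too hot: do not expand further
        for h in hot_list:
            child = infer(h, clause)
            if child is not None:
                queue.append((child, heat + 1))
    return kept
```

With a high heat limit, the loop pursues consecutive sequences C, C1, C2, ... generated from the hot-list clause, which is exactly the pattern Fitelson observed in single-axiom proofs; a low limit truncates such chains after one step.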