Conquering the Meredith Single Axiom

Larry Wos

Mathematics and Computer Science Division

Argonne National Laboratory

Argonne, IL

email: wos@mcs.anl.gov

May

Abstract

For more than three and one-half decades, beginning in the early 1960s, a heavy emphasis on proof finding has been a key component of the Argonne paradigm, whose use has directly led to significant advances in automated reasoning and important contributions to mathematics and logic. The theorems that have served well range from the trivial to the deep, even including some that corresponded to open questions. Often the paradigm asks for a theorem whose proof is in hand but that cannot be obtained in a fully automated manner by the program in use. The theorem whose hypothesis consists solely of the Meredith single axiom for two-valued sentential (or propositional) calculus and whose conclusion is the Lukasiewicz three-axiom system for that area of formal logic was just such a theorem. Featured in this article is the methodology that enabled the program OTTER to find the first fully automated proof of the cited theorem, a proof with the intriguing property that none of its steps contains a term of the form n(n(t)) for any term t. As evidence of the power of the new methodology, the article also discusses OTTER's success in obtaining the first known proof of a theorem concerning a single axiom of Lukasiewicz.

Keywords: automated reasoning, double negation, Meredith single axiom, propositional calculus, two-valued sentential calculus



This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.

The Original Problem

For many years, the Meredith single axiom for two-valued sentential (or propositional) calculus had successfully resisted a fully automated proof. Following is Meredith's axiom, expressed in clause notation:

P(i(i(i(i(i(x,y),i(n(z),n(u))),z),v),i(i(v,x),i(u,x))))

Although the automated reasoning program OTTER [McCune] was speedily able to proof-check what amounts to the original 41-step proof of the Meredith single axiom, no combination of strategies and parameter settings yielded a proof that was not strongly guided by knowledge of that original proof. Note that proof checking, especially when condensed detachment is the rule to be used, is far, far easier than proof finding; their natures differ sharply. Also of note, Meredith's original proof, in contrast to the cited 41-step proof, was not based strictly on condensed detachment; indeed, his proof relies in part upon substitution and in part on detachment. More generally, be warned that proof length, and proof itself, in the literature can be misleading, as the preceding illustrates. The target chosen by Meredith, and featured in the research discussed here, was the Lukasiewicz three-axiom system.

The Lukasiewicz axioms, expressed in clause notation:

P(i(i(x,y),i(i(y,z),i(x,z))))    % L1
P(i(i(n(x),x),x))                % L2
P(i(x,i(n(x),y)))                % L3
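Condensed detachment, the inference rule relied on throughout, combines detachment (modus ponens) with most general unification: from a major premiss i(A,B) and a minor premiss C, it unifies A with C, after renaming the variables of the two premisses apart, and yields the corresponding instance of B. The following Python sketch is illustrative only; the tuple encoding of terms and all function names are my own, not OTTER's.

```python
# Terms: a variable is a string ('x'); a compound term is a tuple whose
# first element is the function symbol, e.g. ('i', 'x', ('n', 'y')).

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if isinstance(t, str):
        return t == v
    return any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s):
    """Return an extended substitution unifying a and b, or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str):
        return unify(b, a, s)
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def subst(t, s):
    t = walk(t, s)
    if isinstance(t, str):
        return t
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

def rename(t, tag):
    """Rename variables apart by appending a tag."""
    if isinstance(t, str):
        return t + tag
    return (t[0],) + tuple(rename(a, tag) for a in t[1:])

def condensed_detachment(major, minor):
    """From i(A,B) and C, unify A with C and return the instance of B."""
    major, minor = rename(major, '1'), rename(minor, '2')
    if major[0] != 'i':
        return None
    s = unify(major[1], minor, {})
    return None if s is None else subst(major[2], s)
```

For instance, detaching L2 from L1 yields the instance of i(i(y,z),i(x,z)) obtained by taking x to be i(n(x),x) and y to be x.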

This article focuses on the conquering of the axiom, that is, on the formulation of a methodology that yields a fully automated proof that begins with the Meredith single axiom and completes with the deduction of the three Lukasiewicz axioms. The methodology relies in no way on knowledge of the proof being sought. Moreover, as required by any methodology that is to be respected, its use led to a distinct and startling success, to be presented here: namely, completion of a proof that a single axiom of Lukasiewicz is sufficient for the study of all of two-valued sentential (or propositional) calculus. Before the development of the methodology featured in this article, no proof for this single axiom was known. The factors that led to the conquest, and replaced the years of failure, are detailed.

Also discussed in this article is the topic of shortening proofs and the resultant algorithm I was able to formulate for making such attempts. The algorithm, as explained in a later section, is the natural complement to the methodology featured here.

A Burning Question

Especially for the individual unfamiliar with automated reasoning, a natural and even burning question immediately arises: Why spend time and effort trying to find a means for a program to prove a theorem whose proof is readily available? From a historical viewpoint, the answer rests with the continued practice of the Argonne group: namely, first identify a theorem whose proof is out of reach of our current program, and then wrestle with that theorem until a means is found to bring its proof within range. From the viewpoint of automated reasoning, the general principle asserts that progress is quite likely to occur when the arsenal of weapons is augmented to permit the proof of a theorem that had resisted automation. Whether the theorem has already been proved or simply conjectured to be true is not the crucial element, although the latter case produces more excitement.

One danger always exists: The new approach may be, consciously or unconsciously, tailored to the theorem under consideration. To be of interest, and to offer the strong possibility that an advance has occurred, such tailoring must be absent. Therefore, the supposed advance must be put to the test of applying it to similar theorems or, even better, to some not so similar. A number of the weapons now offered by a program such as OTTER can trace their birth to the study of a theorem whose proof was already in hand.

The Key Factors

Four factors appear to account for the success to be reported here in some detail. First, directly at the suggestion of my colleague Branden Fitelson, the heat parameter was assigned an unusually high value. Second, a series of lemmas, whose choice is in no way based on the theorem under attack, was included as targets; those that were proved were then added to the initial set of support for the next run, and the process was repeated until a proof of the desired theorem was obtained. Third, at some point, all of the proof steps of the already-proved lemmas, even those thought to be irrelevant to the task, were adjoined as resonators. Fourth, and indeed counterintuitively, the goal of finding a proof was replaced by the goal of finding a proof totally free of double negation. From what I know and from what Fitelson has found, prior to the study reported here, no such proof for the Meredith single axiom had been offered.

Before turning to the details of the methodology, I first offer a fuller treatment of each of the four factors.

Turning up the Heat

The first factor focuses on the use of the hot list strategy [Wos b]. My usual approach is to set the input heat parameter to a small value. When the theorem under attack relies on a single axiom, however, a higher value is recommended. Indeed, as my colleague Fitelson pointed out, if you examine a proof of a theorem of the type under discussion, you often find that a large number of its steps have as a hypothesis the single axiom. (He was studying the use of condensed detachment, with a heavy emphasis on the 41-step proof of the Meredith single axiom.) Moreover, you also frequently find that a sequence of such steps occurs consecutively, with the second based on the first, the third on the second, and the like. The two observations taken together suggested to him that an unusually high value assigned to the input heat parameter, coupled with the placement of the single axiom in the initial hot list, might enable OTTER to find such tightly connected sequences of steps.

For example, if a retained clause C1 has low heat, if the input heat value is high enough, and if the initial hot list contains both the clause corresponding to the single axiom and that corresponding to condensed detachment, then, before leaving the scene, C1 would be used with condensed detachment and the single axiom to yield C2. Similarly, C2 would be used to deduce C3 (assuming that C2 was retained), and the process could continue until the heat bound was reached.

I have found that Fitelson's analysis exhibits excellent insight. Therefore, I recommend assigning a high value to the input heat parameter when the theorem under attack focuses on a single axiom.
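The recursive character of the hot list strategy just described can be sketched as follows. This is a simplified rendering, not OTTER's implementation: `infer` and `retain` stand for whatever inference rule and retention tests are in force, and all names are illustrative.

```python
def hot_list_pass(clause, heat, infer, hot_list, retain, heat_bound, retained):
    """When a clause is retained, immediately combine it with every hot-list
    member; each conclusion carries heat one greater than its parent's, and
    the recursion stops once the input heat bound is reached."""
    if heat >= heat_bound:
        return
    for hot in hot_list:
        for conclusion in infer(hot, clause):
            if retain(conclusion):              # weighting/subsumption checks
                retained.append((conclusion, heat + 1))
                hot_list_pass(conclusion, heat + 1,
                              infer, hot_list, retain, heat_bound, retained)
```

With the single axiom in the hot list, each freshly retained clause thus spawns exactly the tightly connected sequences Fitelson observed: conclusion upon conclusion, each with the single axiom as one parent and the preceding step as the other, up to the heat bound.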

Adding to the Initial Set of Support

The second factor concerns amending the initial set of support for a subsequent run. You might, as I did, puzzle over the value of such a move if the resonance strategy were already in use and the resonators corresponding to the set of target lemmas were already present. Indeed, on the surface, if one of the target lemmas was deduced, say, in the first run, and its resonator was already present, then it would quite quickly become the focus of attention to initiate applications of the inference rules in use. Therefore, how could the program's attack be enhanced in the so-called second run by having such a lemma included in the input set of support? In particular, the two approaches would seem to be essentially equivalent.

As Fitelson and I hazarded in a phone conversation, here is the probable difference, by example. If the target lemma is deduced and retained (rather than being present at the beginning of the run) at, say, some late clause number, and used to deduce a complex clause that is retained and given a still later number, then that later clause might never be chosen as the focus of attention for inference-rule initiation. Even with the presence of the ratio strategy and a modest value assigned to the pick_given ratio, OTTER might never, or almost never, choose that clause as the focus of attention.

On the other hand, in a run in which the clause corresponding to the target lemma is included in the initial set of support, the clause that would have received a late number might now be numbered far earlier and hence will be chosen as the focus of attention far sooner. In other words, the search space has been perturbed, and perturbed in a most profitable fashion.
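The ratio strategy mentioned above can be made concrete with a small simulation. Under OTTER's pick_given_ratio with value n, the program chooses n given clauses by smallest weight, then one by first-come first-served, and repeats; the sketch below (my own simplified model, not OTTER's code) shows why a clause with a late number and modest weight can wait a very long time for its turn.

```python
import heapq
from collections import deque

def selection_order(clauses, ratio):
    """Simulate the order in which clauses become the focus of attention.
    `clauses` is a list of (clause_number, weight) pairs, clause_number
    reflecting arrival order; for every `ratio` picks by smallest weight,
    one pick is made by first-come first-served."""
    by_weight = [(w, cid) for cid, w in clauses]
    heapq.heapify(by_weight)
    by_age = deque(cid for cid, _ in clauses)   # arrival (FIFO) order
    chosen, picked, count = [], set(), 0
    while len(chosen) < len(clauses):
        if count == ratio:                      # one pick by age
            count = 0
            while by_age[0] in picked:
                by_age.popleft()
            cid = by_age.popleft()
        else:                                   # `ratio` picks by weight
            count += 1
            while by_weight[0][1] in picked:
                heapq.heappop(by_weight)
            _, cid = heapq.heappop(by_weight)
        picked.add(cid)
        chosen.append(cid)
    return chosen
```

Scaling the example up, a heavy clause numbered in the tens of thousands is reached only through the occasional age-based pick, which is exactly the bottleneck that moving a proved lemma into the initial set of support removes.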

The discussion also sheds some light on the difference between lemma inclusion and lemma adjunction. Of particular interest is the contrast between the way a researcher works and the way a program works in the context of inclusion versus adjunction. A person will use the lemma when convenient, regardless of its so-called number in memory; a program might never use it because of its being assigned a large conclusion number (see [Wos c] for further discussion of this point).

You now might be ready for a new strategy, not offered by any program known to me, one somewhat in the spirit of learning. In the context of OTTER, it works this way. Place on the passive list the negations of various target lemmas that might merit proof. As each is proved, immediately add the positive form of the lemma to the remainder of the initial set of support. Of course, if you have used set(input_sos_first), then most likely all of those clauses will have been already chosen as the focus of attention. In that case, take the positive form of the lemma whose negation is on the passive list and immediately choose it as the focus of attention.

Related to the given strategy is another strategy whose crux is the immediate adjunction to the hot list of such a proved lemma. Also related is the strategy that immediately adds to the resonators the correspondent of the just-proved lemma. In such an event, the program might benefit from then changing all of the weights of the clauses remaining on the set of support list, changing the weights commensurate with the new resonator. Neither of the two just-cited strategies is offered by any program with which I am familiar.

As for the now-key question concerning where to obtain the target lemmas for the passive list, I suggest any book, any paper, or any research of which you know that concerns a theorem related to that under study. For example, part of my success with the Meredith theorem resulted from using as targets theses proved by Lukasiewicz, consecutively numbered in his study of his three-axiom system for two-valued sentential (or propositional) calculus.

Adjoining Resonators

The third factor leading to the successes reported in this special issue concerns adding as resonators all proof steps of proved target lemmas. The notion is that, if such steps were useful in proving a target lemma, then a step of the same pattern might be of use in reaching the main objective. To emphasize the point: I am not suggesting that the actual proof steps might prove useful; rather, I am talking about the symbol pattern, in the resonator sense, where all variables are treated as indistinguishable.
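The notion of a symbol pattern with all variables treated as indistinguishable can be captured in a few lines. A minimal sketch, using the same illustrative tuple encoding of terms as before (not OTTER's representation):

```python
def resonator_pattern(term):
    """Return the symbol pattern of a term, collapsing every variable to a
    single placeholder, as the resonance strategy treats them."""
    if isinstance(term, str):                   # variables: x, y, z, ...
        return '*'
    return (term[0],) + tuple(resonator_pattern(a) for a in term[1:])
```

Two formulas match the same resonator exactly when their patterns are equal; the particular variable names, and whether distinct variables occur, are ignored.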

Avoiding Double Negation

The last of the four factors is without question the most startling and, therefore, the one deserving the most comment. When you are asked to find a proof but also requested to satisfy additional constraints, intuition strongly, and correctly, suggests that in general the task becomes more difficult. For example, rather than seeking any proof, you are asked to seek one in forty or fewer applications of some given inference rule. For a second example, you are asked to find a proof that avoids the use of some specified lemma. For a third example, directly pertinent to this article, you are asked to find a proof in which no term in any proof step has the form n(n(t)) for some term t. A proof within an added given constraint may not even exist. Therefore, seeking a proof that avoids the use of double negation would naturally seem harder than completing a proof not satisfying such a constraint. Indeed, a glance at the pertinent literature shows that the use of double negation abounds.

But, in fact, adding such a constraint appears to have helped to find a proof. I conjecture that the space of conclusions for the automated reasoning program OTTER to browse in has been markedly reduced in size. Perhaps the presence of conclusions containing n(n(t)) terms is not so loathsome in itself, but perhaps such conclusions spawn an intractable space of retainable conclusions. In other words, perhaps the density of good information within the total information deduced and retained is far greater, even though the added constraint clearly guarantees that the number of acceptable proofs is sharply reduced.

This conjecture is in part based on my repeated attempts, with no success, to obtain a fully automated proof of any type for the Meredith single axiom before adding the constraint of avoiding double negation. As a result, I suggest the following aphorism, at least if you are using a reasoning program such as OTTER:

If you are meeting apparently unconquerable resistance, then replace the question by a harder question, or replace the goal by one that appears to be more unreachable.

You might wonder why I chose to focus specifically on term avoidance. The main impetus was my success, several years ago, in finding a proof in many-valued sentential calculus with such a constraint (see [Wos a]), a proof conjectured to be nonexistent. Since then, when studying some problem in a logic calculus, I frequently have attempted to find a proof free of double negation. And so I made such an attempt in my most recent study of the Meredith single axiom, completing with the deduction of the three-axiom system of Lukasiewicz.

A Methodology Is Born

The use of a methodology based on the four cited factors did in fact yield a fully automated proof that the Meredith single axiom implies each of the three Lukasiewicz axioms L1, L2, and L3. Only four runs (experiments) were required.

For the first experiment, as is typical when I conduct a sequence of tightly coupled experiments, intuition was the main source for the values assigned to the various parameters. My experiences with OTTER have taught me about tendencies, but I can offer no firm rules for accurately making choices guaranteed to bring reward. Some guidelines for the use of OTTER are provided in [Wos a], and some tendencies are discussed in [Wos]. In general, a first experiment provides useful information, both positive and negative, regarding how to proceed in later experiments. Here are the parameters I assigned for Experiment 1, with comments to follow; the full input file is presented in the Appendix. Far fewer details will be given for the second through the fourth experiments.

assign(max_weight, ...).
assign(change_limit_after, ...).
assign(new_max_weight, ...).
assign(max_proofs, ...).
assign(max_distinct_vars, ...).
assign(pick_given_ratio, ...).
assign(max_mem, ...).
assign(report, ...).
assign(heat, ...).
assign(dynamic_heat_weight, ...).

With virtually no exception, I find it wise to place an upper bound on the complexity of newly retained conclusions, whether measured in terms of symbol count or based on included weight templates; without such a bound, the program may quickly drown. In part influenced by the symbol count of the Meredith axiom (counting its predicate symbol), and in part by experiments in two-valued sentential calculus that suggested room must be given for formulas longer than the hypothesis, I chose the value to assign to max_weight, the upper bound just cited. Especially because but one axiom is present from which to reason, a fair amount of room must be given for retaining new conclusions. However, being almost certain that memory would be consumed too quickly if no action was taken, I reduced the max_weight after a fixed number of clauses had been chosen to initiate applications of condensed detachment, the inference rule in use. As shown, OTTER was assigned a value for max_mem bounding the number of megabytes it could rely on for its search.

To permit the program to occasionally focus on a retained conclusion of much complexity, possibly, I assigned a value to the pick_given ratio. As it turned out, twenty-one of the conclusions chosen to initiate inference-rule application had a notably high weight (complexity). By assigning that value, the program was instructed to choose a run of clauses by complexity, then one by first-come first-served, then again by complexity, and the like.

To complete as many proofs as memory would allow, I assigned the value infinity to the max_proofs parameter. To follow the activity of OTTER, I asked for a statistical report at regular CPU-second intervals.

Regarding the assignment of a value to max_distinct_vars, preventing the program from retaining any new conclusion relying on too many distinct variables, I regret to note that prior knowledge was used. However, such reliance was not necessary. Indeed, with one smaller value assigned, OTTER quickly informs the researcher that nothing of value will result; with another, again nothing of consequence occurs, but two clauses are deduced and none are retained.

The last assignment, influenced by my joint attack with Fitelson on diverse questions from mathematics and logic, was the value given to the heat parameter. Again, prior knowledge came into play. Specifically, he noted the occurrence of sequences of steps in proofs in systems relying on a single axiom, all of which had as a parent the single axiom and as the other parent the preceding step. Access to such sequences is made far, far easier by use of the hot list strategy, with a moderately high value assigned to heat and with the single axiom placed in the initial hot list. As it turned out, in Experiment 1, only three clauses at one heat level were chosen as inference-rule initiators, two at another, two at a third, and six at a fourth.

Were any of those clauses crucial for proof completion? A glance at the corresponding output file showed that one of the heated clauses was used in six different proofs; therefore, although the chosen value often does serve well, a smaller value would have sufficed at this point, which can only be known after the fact. Evidence was thus present for the value of assigning a, for me, unusually high value to heat; seldom do I rely on so large a value. Indeed, the first of the four key factors came into play.

As for the set options: hyperresolution was set because of reliance on condensed detachment. The option order_history was set to permit the user to see which items were paired with the major premiss and which with the minor premiss in the three-literal clause for condensed detachment. Finally, out of habit, input_sos_first was set, to cause OTTER to choose for inference-rule initiators all clauses in the input set of support before choosing any deduced clause. Of course, since only one clause was included in the initial set of support, that for Meredith's axiom, the command was unneeded.

By assigning a high value to the heat parameter and by placing the Meredith axiom, with the clause for condensed detachment, in the input hot list, I provided a means for the program to emphasize the role of the Meredith axiom. OTTER was thus encouraged to seek a proof that shared with Meredith's proof (found in the Appendix) what might be termed the recursive emphasis on the single axiom. However, previous failures with attempts to find, in a fully automated fashion, a proof more or less emulating that of Meredith, coupled with previous successes with other theorems when term avoidance was imposed, strongly suggested that the program should be prevented from relying on double negation, which Meredith's proof relies upon. Of the forty-one steps of what amounts to his proof, seventeen rely on the use of a term of the form n(n(t)) for some term t. The following set of demodulators was used to achieve the cited term avoidance.

list(demodulators).
n(n(x)) = junk.
i(x,junk) = junk.
i(junk,x) = junk.
n(junk) = junk.
P(junk) = $T.
end_of_list.

The fourth of the four factors was thus brought into play
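The effect of the term demodulators can be sketched in a few lines: rewriting proceeds bottom-up, any double negation collapses to junk, junk swallows any term built over it, and a clause whose literal becomes P(junk) is then rewritten to true and discarded. The sketch below models only the term rewriting, in my illustrative tuple encoding, not OTTER's demodulation machinery.

```python
JUNK = 'junk'

def demodulate(term):
    """Bottom-up rewriting with the double-negation-blocking demodulators:
    n(n(x)) -> junk, i(x,junk) -> junk, i(junk,x) -> junk, n(junk) -> junk."""
    if isinstance(term, str):
        return term
    head, *args = term
    args = [demodulate(a) for a in args]
    if head == 'n':
        inner = args[0]
        # n(junk) and n(n(t)) both rewrite to junk
        if inner == JUNK or (isinstance(inner, tuple) and inner[0] == 'n'):
            return JUNK
        return ('n', inner)
    if head == 'i':
        # junk in either argument position swallows the implication
        return JUNK if JUNK in args else ('i',) + tuple(args)
    return (head,) + tuple(args)
```

Any conclusion containing a double negation anywhere is thus demodulated to junk and, via P(junk) = $T, purged before it can clutter the clause space.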

With the program's reasoning restricted by blocking the use of double negation, I turned next to directing its reasoning. Of course, the set of support strategy was being used to restrict the program's reasoning, but, because of the focus on a single axiom and on condensed detachment, in the obvious sense its use added little to the program's power. Resonance was the choice for directing the reasoning. But which resonators were appealing, resonators that must be chosen without knowledge of Meredith's original proof? I chose Lukasiewicz's sixty-eight theses, consecutively numbered in his writings. A resonator corresponding to each of the sixty-eight was placed in the pick_and_purge weight list, each with the same favorable assigned value. The only other resonator that was included was that corresponding to the Meredith axiom, assigned a value of its own. As evidence that the deck was not being stacked, I note that only three of the sixty-eight theses are among the forty-one steps of the Meredith proof.

In the context of eventual lemma enrichment, the same sixty-eight theses, as well as a few other lemmas, each negated, were placed in the passive list, with the intention of adjoining in a later run any that were proved. Although far less important, the program's success in proving such theses was also used by me as a sign of progress; indeed, I often include negations of various lemmas in the passive list to evaluate and monitor the program's attack. The second of the four cited factors was thus in evidence.

Experiment 1 was encouraging, proving sixteen lemmas. Of the three Lukasiewicz lemmas of which the target consists, one was proved. Regarding members of the other known axiom systems for two-valued sentential calculus whose negations were placed in the usable list, certain theses were also proved. Something else also occurred that I had not anticipated: some lemmas were proved more than once. Indeed, in this first experiment, one thesis was proved in four different ways; four different conclusions were drawn such that each provided unit conflict with the negation of that thesis.

The stage was thus set for the second experiment, differing from the first in that the max_weight was increased, the max_distinct_vars was increased, and the lemmas proved in Experiment 1 were adjoined to the initial set of support for Experiment 2.

Experiment 2 was even more encouraging, proving fifteen more lemmas, including a second of the three Lukasiewicz axioms and, regarding members of the included known axiom systems, further theses, as well as proving MV, one of the five axioms Lukasiewicz originally offered for a weaker area of logic, many-valued sentential calculus. Similar to Experiment 1, and even more to my surprise, MV was proved twice, each proof terminating in a conclusion more general than MV. Although the target was still the Lukasiewicz three-axiom system, OTTER and the methodology were getting quite close to proving other axiom systems, any of which would have been acceptable.

In Experiment 3, the third of the cited four factors now came into play: namely, the addition of resonators (removing duplicates) corresponding to proof steps of lemmas proved in Experiments 1 and 2. Each was assigned a value instructing OTTER to give any matching and retained new conclusion very high preference for initiating inference-rule application. Also, the initial set of support used for Experiment 2 was extended by adjoining fifteen lemmas. This third experiment proved the remaining Lukasiewicz axiom, and it also noted that the entire three-axiom system of Lukasiewicz had been proved. As evidence of how hard OTTER had to work, the proof of that last axiom completed with the retention of a clause with a very large number.

Elation was certainly my reaction, coupled with being startled, because the megabytes of memory allotted for the experiment were exhausted almost immediately after the last axiom was deduced. Of course, the task was not finished; indeed, a standalone proof had not yet been produced. In particular, what was available at the completion of Experiment 3 was a proof resting on the use of lemmas proved in that experiment and on lemmas proved in each of the two preceding experiments. Also, in the spirit of recursion, some of the used lemmas from Experiment 2 rested on lemmas proved in Experiment 1. I strongly preferred to find a means to produce the standalone proof without untangling the cited dependencies.

I thus tried Experiment 4, in which I commented out all of the items in the initial set of support of Experiment 3 other than the Meredith single axiom. That move was required to enable OTTER to attempt to find a standalone proof. I also added two sets of resonators, with the intention of aiding the program, based on my success in Experiment 3. The first set, of fifty-eight resonators, corresponded to the claimed proof of the target axiom system; the second set corresponded to proof steps sorted from Experiment 3 not already used. One value was assigned to the members of the first set, and another value to the members of the second set.

The hard work paid off. Experiment 4 quickly yielded the sought-after proof of the Lukasiewicz three-axiom system, its length counted in applications of condensed detachment. That proof was followed by proofs of the Hilbert system, the Church system, the alternate system of Lukasiewicz, and my axiom system.

Virtually required at this point is a pause to consider the claim that the cited proof was obtained in a fully automated manner. Stated differently, how much did I cheat? First, only a few of the steps of the new proof are among the Lukasiewicz theses, which were the key resonators for the first experiment; the Lukasiewicz proof itself contains but three of the cited theses. This evidence nicely addresses the question of cheating. Nevertheless, there remains the question of how closely tied the new proof is to Meredith's proof. Some of the steps of the Meredith proof that are free of double negation are among the steps of the proof yielded by the methodology. A natural question immediately arises concerning how crucial those steps, viewed as lemmas, are, a question that perhaps is answered in a later section.

The primary objective had been reached: a fully automated proof that Meredith's axiom is sufficient for the study of two-valued sentential (or propositional) calculus. Nevertheless, two additional objectives remained. First and foremost, the newly formulated methodology demanded to be tested on another theorem, in part to demonstrate that the dice were not loaded to take advantage of the case just presented, and in part to gain some insight into the power of the methodology. Second, as I so often do, I wished to make a serious attempt to find a proof substantially shorter than the proof in hand, still avoiding the use of double negation.

An Algorithm for Finding Shorter Proofs

The focus in this section is on the second of the two cited objectives, proof shortening, delaying the best for last. The forces that lead to seeking a proof shorter than that in hand are many and diverse, ranging from the intellectual to the practical. At one end of the spectrum, the curiosity of researchers is at stake. Can a proof be found that avoids the use of one or more specified lemmas? All being equal, the shorter the proof, the more likely that unwanted lemmas can be dispensed with. Can it be proved that the shortest possible proof has been produced? At the other end of the spectrum, in contrast to simple curiosity, economics comes into play. For example, if the shorter proof is completed in the context of circuit design, then a correspondingly more efficient circuit may have been constructed.

OTTER already has an approach to finding shorter proofs, namely, the use of ancestor subsumption, which works in the following manner. First, note that the clause A properly subsumes the clause B if and only if A subsumes B but B does not subsume A. Second, note that the derivation length of the clause A is the number of distinct steps in the deduction of A, not including those that are among the input; thus, the derivation length is equal to the number of applications of the inference rule or rules used to deduce A. Finally, by definition, the clause A ancestor-subsumes the clause B if and only if A properly subsumes B, or A and B are alphabetic variants and the derivation length of A is less than or equal to that of B. If ancestor subsumption is in use, and if the length of a derivation of a clause is strictly less than that of an already-retained copy of that clause, then, if back subsumption is in use (which is strongly recommended), the new copy is retained in the set of support list, with the original copy of the clause purged from the database used for reasoning and hidden, to be recalled in the event it occurs in a completed proof.
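The two definitions just given can be stated compactly in code. A minimal sketch, under my own simplified representation: a derivation is a map from each deduced clause to its parents, input clauses are simply absent from the map, and `subsumes` stands for whatever subsumption test is in force.

```python
def derivation_length(clause, parents):
    """Number of distinct deduced steps in the derivation of `clause`;
    input clauses (absent from `parents`) contribute nothing."""
    seen = set()
    def walk(c):
        if c in seen or c not in parents:
            return
        seen.add(c)
        for p in parents[c]:
            walk(p)
    walk(clause)
    return len(seen)

def ancestor_subsumes(a, b, subsumes, parents):
    """A ancestor-subsumes B iff A properly subsumes B, or the two are
    alphabetic variants (mutual subsumption) and A's derivation is no
    longer than B's."""
    if subsumes(a, b) and not subsumes(b, a):
        return True
    return (subsumes(a, b) and subsumes(b, a)
            and derivation_length(a, parents) <= derivation_length(b, parents))
```

Note that the comparison is on derivation length, not on clause complexity: a variant deduced by a shorter route displaces its longer-derived twin.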

While ancestor subsumption is often effective in finding shorter proofs, researchers quite naturally prefer an algorithm. The algorithm I developed, first discussed in print in [Wos a], has exhibited unanticipated power. Although its use does not guarantee the return of a strictly shorter proof, at least the first few applications typically are quite rewarding. Interesting, although perhaps not obvious, is the fact that the use of ancestor subsumption can result in a program completing a longer proof than it does in its absence, as occurred in one of my experiments with the Meredith single axiom. Briefly, the explanation rests with the possible exclusion of intermediate steps that, if present, would have been repeatedly used later. The algorithm works in the following way.

The algorithm requires access to a proof. Let the proof length be k, the number of deduced steps, not counting those in the input. For the first step of the algorithm, the program is asked to make k runs; in each run, one of the k deduced proof steps is prevented from being used. Although weighting suffices, I prefer to rely upon demodulation, demodulating the unwanted conclusion to junk. The use of weighting can block items that are similar (are cousins, where all variables are treated as indistinguishable); the use of demodulation will not have this effect and instead blocks items that are subsumed by the item being blocked. In the second step, all of the proofs completed in the k runs are examined to see whether a shorter proof has been completed. (A run may yield more than one proof, for example, if ancestor subsumption is used.) Let j be the length of the shortest proof found with the k runs. If j is greater than or equal to k, cease the attack; if such is the case, because the algorithm is usually applied repeatedly, a shorter proof may have already been found, shorter than the one that initiated the investigation. If j is strictly less than k, then apply the third step. For the third step, the focus is on a proof of length j, with the objective of finding a still shorter proof. Two choices exist: the program can be asked to avoid the use of each of the j steps of the proof in focus, or the program can be asked to avoid the use of each of the k steps of the initiating proof.

A variation on the algorithm that I have found quite effective is to ask the program to attempt to complete a proof when all of the steps that individually produced a proof of length j, with j strictly less than k, are simultaneously prevented from being used. An alternative stop condition has also proved of use. Specifically, if, say, at least ten different steps when blocked yield a proof of length j with j strictly less than k, terminate the attack.
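The three steps of the algorithm can be sketched as a simple loop. The sketch below is illustrative only: `run_prover` stands for a hypothetical interface that, given a set of blocked steps (in practice, steps demodulated to junk), returns whatever proofs the run completes; it is not OTTER's actual API.

```python
def shorten(initial_proof, run_prover, max_rounds=10):
    """Iteratively block individual deduced steps, keeping any
    strictly shorter proof that results (a sketch of the algorithm
    described above, not OTTER's implementation)."""
    best = list(initial_proof)
    for _ in range(max_rounds):
        k = len(best)
        candidates = []
        # First step: k runs, each preventing one deduced step.
        for step in best:
            for proof in run_prover(blocked={step}):
                candidates.append(proof)
        if not candidates:
            break
        shortest = min(candidates, key=len)
        j = len(shortest)
        # Second step: cease the attack if no strict improvement.
        if j >= k:
            break
        # Third step: focus on the length-j proof and repeat.
        best = list(shortest)
    return best
```

The variation described above then corresponds to a single extra run in which every individually successful blocked step is placed in `blocked` at once.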

By using this algorithm, which began by focusing on the step proof yielded by applying the methodology presented earlier, coupled with frequent use of ancestor subsumption, I was able to find a far more attractive proof for the Meredith single axiom, one free of double negation.

However, earlier experiments had yielded a step proof, a proof found with the algorithm and with other techniques, beginning with a step proof, one whose origin may have been the methodology, but perhaps not. Therefore, I pressed on, but with no progress. Sometimes in a later run I include more than one new demodulator, choosing to add those whose use, the experiments indicate, would each produce a proof of length j with j strictly less than k.

To seek a breakthrough, I chose to block an entire set of formulas: all of those matching one of the resonators present in the step proof. A few experiments sufficed, resulting in the finding of another step proof. But again a stone wall was encountered, which led me to one more move.

After the set of resonators corresponding to one of these two step proofs, I included as resonators the steps corresponding to the other. I was not hopeful, suspecting that the proof just added would simply be reproduced. Fortunately, my conjecture was in error: a step proof was found instead (see the Appendix). The most likely explanation for this sharp advance is that clauses that would otherwise have been discarded, because of being assigned a weight strictly greater than the assigned max_weight, were retained and used to initiate inference-rule application. Their use almost certainly was intermingled with the use of clauses matching one of the steps of the added proof. In other words, the program was given more latitude regarding this one aspect, which in turn resulted in a severe perturbation of the space of conclusions being examined.

Piquant, however, is the fact that the methodology for finding the first proof in general relies on giving the program more and more latitude, in contrast to giving the program, in general, less latitude when the objective is to find a far shorter proof based on the first proof. Adding elements to the set of support for a succeeding run, as is part of the methodology for finding a first proof, is in a sense the complement of blocking the use of items by means of demodulation when seeking to improve upon an existing proof.

In the context of finding short proofs, a sharp contrast exists between blocking the use of an unwanted step with demodulation and using a resonator for that purpose. The former blocks the use of the item being demodulated to junk, and also all instances of it, but has no effect on items that are merely similar in functional shape. The latter blocks the formula or equation in the corresponding resonator, has no effect on proper instances of it, but prevents the use of formulas or equations that are similar in functional shape, that match the item in the resonator sense. In other words, demodulation restricts the program's reasoning by blocking the use of one set of conclusions, whereas resonance blocks the use of a quite different set of conclusions. When a given demodulator is shown to prevent the use of two conclusions, a subsumption relation exists between them; when a resonator does so, no subsumption relation exists between the two items. To complete this aspect of the discussion: if none of the related subsumption options are exercised, use of a hint [Veroff] focuses on a formula, rather than on a set of formulas, as resonance does.
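The contrast just described can be made concrete on a toy representation of terms. In the sketch below (terms as nested tuples, single lowercase letters as variables; all names are illustrative, not OTTER's), `subsumes` models the demodulation-style block, which captures every instance of the blocked item, while `resonates` models the resonator-style block, which treats all variables as indistinguishable and so captures items of the same functional shape, including non-instances.

```python
def is_var(t):
    # Single lowercase letters are variables; tuples are compound terms.
    return isinstance(t, str) and t.islower() and len(t) == 1

def subsumes(pattern, term, binding=None):
    """Demodulation-style match: a pattern variable may bind to any
    term, consistently, so the pattern blocks all of its instances."""
    binding = {} if binding is None else binding
    if is_var(pattern):
        if pattern in binding:
            return binding[pattern] == term
        binding[pattern] = term
        return True
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return False
    return all(subsumes(p, t, binding) for p, t in zip(pattern[1:], term[1:]))

def skeleton(t):
    """Replace every variable by the same symbol '*'."""
    if is_var(t):
        return '*'
    return (t[0],) + tuple(skeleton(s) for s in t[1:])

def resonates(pattern, term):
    """Resonator-style match: all variables are indistinguishable, so
    exactly the terms with the identical functional shape are blocked."""
    return skeleton(pattern) == skeleton(term)
```

For example, against the pattern i(x,x), the term i(y,z) is blocked by the resonator but not by demodulation, whereas i(n(y),n(y)) is blocked by demodulation but not by the resonator.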

Now I turn to data relevant to the observations and the question raised near the end of an earlier section. First, the new shorter proof relies on but four of the theses used to find the longer proof. Therefore, are the theses needed at all, or instead is their presence the key to focusing on crucial steps? Second, and fascinating to me because of what it seems to imply about Meredith, a number of the steps are present in Meredith's own step proof, among those of its steps that do not rely on double negation. Are those steps, or at least almost all of them, so crucial to finding a proof, especially in the context of finding a short proof? In other words, to find a still shorter proof, must the cited steps, or almost all of them, be present? Answers to questions of this type are pertinent to the larger questions: Can a proof of minimal length be found, and what is that length? Third, regarding the two step proofs, perhaps the fact that they differ in a number of their steps played a key role in breaking through to a shorter proof. Fourth, only four of the steps require the use of seven distinct variables. In view of the fact that this proof is free of double negation, does it contain the keys to completing a double-negation-free proof none of whose formulas requires the use of more than six distinct variables?
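Distinct-variable counts of the kind just discussed can be checked mechanically. The following one-liner (the function name is mine) assumes the flattened prefix notation used throughout this article, in which i (implication) and n (negation) are the only operator symbols and every other letter is a variable.

```python
def distinct_vars(formula: str) -> int:
    """Count the distinct variable letters in a flattened prefix
    formula, treating 'i', 'n', and the predicate 'P' as symbols."""
    return len(set(formula) - set('inP()'))

# Meredith's single axiom, flattened as elsewhere in the text:
meredith = "iiiiixyinznuzviivxiux"
# distinct_vars(meredith) == 5
```

The axiom itself uses five distinct variables; the seven-variable formulas at issue arise only among the deduced proof steps.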

Validating the Methodology

To return to the first objective: all agree that a methodology requires testing and evaluating. In this section the focus is on one of the finer tests that could be applied. The test removes any suspicion that some form of dice-loading occurred in the conquest of the Meredith single axiom, in turn removing any suspicion that prior knowledge, even if implicit or hidden from the researcher, is required.

Specifically, to test the methodology, a theorem other than that concerned with Meredith's single axiom was required. Even better, if success were to occur, would be a purported theorem, or a theorem whose proof was unavailable, absent entirely from the literature and from the researcher's knowledge. The target for testing was, in effect, supplied by my colleague Fitelson roughly five months before the methodology was formulated. That target was a letter single axiom offered by Lukasiewicz for two-valued sentential (or propositional) calculus. Following is that axiom, expressed in clause notation (parenthesized here according to the arities of i and n):

P(i(i(i(x,y),i(i(i(n(z),n(u)),v),z)),i(w,i(i(z,x),i(u,x)))))

I was told that no proof was given and that apparently Meredith knew of none, but presumably Lukasiewicz did; and so the game was on. A condensed detachment proof must exist, if only it could be discovered. What would be a reasonable choice for establishing that the given axiom suffices? I chose the three-axiom system of Lukasiewicz cited earlier.

With but four runs, as if already scripted, the methodology prevailed. Indeed, in the third experiment, OTTER completed a step proof of the three-axiom Lukasiewicz system. However, as with Meredith, since lemmas proved in preceding runs were relied upon, there remained the task of producing a proof dependent solely on the Lukasiewicz letter single axiom. Adhering to the methodology, I then ran a fourth experiment and obtained the desired proof. As a point of interest, and perhaps suggestive of future research, that proof relies on but eight of the theses, and seven of its steps require the use of six distinct variables. If max_distinct_vars is assigned any smaller value, no new conclusions are produced, correctly implying that six is the minimum.

The unavailability in the literature of any proof for the Lukasiewicz letter single axiom strongly suggests that OTTER's success is indeed singular. I do view the result as startling and as satisfying evidence that the methodology provides a powerful aid for answering open questions and for finding missing proofs. Also of note is the absence of double negation in the proof.

As might be expected, I next sought a far, far shorter proof. I turned to the coupling of ancestor subsumption and the algorithm presented earlier. The result was the completion of a far shorter proof. Three of its steps require the use of six distinct variables; some of its steps are present in the initiating proof; eight of its steps are among the theses.

When the information about the initiating proof and the far shorter proof was conveyed to a colleague of mine, she asked the following interesting question: Should the shorter proof be considered a proof with far less depth when compared with the longer proof? Quite the contrary: the shorter proof most likely would have remained hidden for decades were it not for access to the longer one. Most likely, from the viewpoint of CPU time, an attempt to find the shorter proof without knowledge of the longer would have occupied a very fast computer for years, if a short proof ever would have been completed. An unassisted researcher sometimes approaches a problem in a manner quite similar to that taken by a researcher assisted by an automated reasoning program.

Future Research

Research that succeeds can produce a type of excitement and exhilaration that defies description and that is difficult to equal. Research can also produce much frustration and even be addictive. For example, I myself find tantalizing, and even maddening, the still-unsuccessful search for a proof shorter than Meredith's step proof of his single axiom. But, as was the case with the Lukasiewicz letter single axiom, when a theorem is announced without proof and research yields that proof, little doubt remains concerning the balance of pain versus pleasure.

In this concluding section, I present research suggestions and observations. Some of the suggestions may aid a person who has wisely chosen to team with an automated reasoning program, such as OTTER, to find a proof that resists completion or to find a shorter proof than that in hand. Some, focusing on intriguing problems, may instead provide new areas of research. Of a more general nature, the observations may stimulate research by providing insight into the contrast between the attack of a reasoning program and that of a researcher. The observations are paraphrasings from an e-mail note received from my colleague Fitelson.

My first suggestion, for the researcher seeking a more elegant, shorter proof than offered by the literature, is to beware of the cited proof length. Indeed, for the following reasons, the cited length may be misleading. First, logicians such as Lukasiewicz and Meredith often do not apply a most general unifier, being content instead with less generality and the use of instantiation. Care therefore must be exercised in simply accepting a quoted proof length. Second, sometimes the implication that condensed detachment is the only rule of inference in use is not accurate. Third, sometimes researchers rely on metatheorems and identities not part of the object-level theory, such as the equivalence of n(n(t)) and t. Such moves are apparently designed to avoid examination of deep levels within the tree of possible conclusions. Such moves also reduce the complexity of the expressions that must be considered; indeed, proofs often become more readable. Chess masters often rely on similar so-called tricks. In contrast, a program such as OTTER does not shy away from examining deep levels or focusing on complex expressions.

Nevertheless, to succeed, an automated reasoning program also requires moves that sharply reduce the size of the search space; strategies are an excellent example. Blocking the retention of formulas in which double negation is present is such a move, one that seems to be counterintuitive and that an unaided researcher would not make. Restricting the number of distinct variables is yet another counterintuitive move, as is turning up the heat; but these actions often are surprisingly effective.
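In the input file given in the Appendix, the double-negation block is achieved by demodulating any term containing n(n(t)) to a junk constant. A minimal sketch of the underlying test, on terms written as nested tuples (the representation and names are illustrative, not OTTER's implementation):

```python
def has_double_negation(term) -> bool:
    """True if any subterm of `term` has the form n(n(t)),
    with variables as strings and compound terms as tuples."""
    if isinstance(term, str):
        return False
    if term[0] == 'n' and not isinstance(term[1], str) and term[1][0] == 'n':
        return True
    return any(has_double_negation(sub) for sub in term[1:])

def retain(deduced):
    """Keep only the double-negation-free conclusions."""
    return [t for t in deduced if not has_double_negation(t)]
```

Discarding such formulas at retention time, rather than rewriting them, is the effect the junk demodulators produce.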

Once the target is established, experimentation is essential, whether the maddening hunt is for a new data structure to sharply increase program performance, a new strategy to restrict or direct reasoning in general, a new methodology, or a new proof. Admittedly, experimentation can be frustrating, and progress slow and difficult to measure. However, I suggest that the researcher who finds a new proof, even if its length is disappointing, carefully examine the result. Indeed, examination of the new proof can produce questions of the following type, questions whose answers can in turn lead to further advances. If the new proof is shorter than the original, are there steps from the original proof that are used in a different way to cover for the steps that have been removed? Is there a chain of steps in the original proof that is absent in the new proof? Are there steps in both proofs that are essential for any proof, that are indispensable lemmas? In the case in which a single axiom is being studied with condensed detachment, the first condensed detachment must focus on two copies of the axiom, and therefore the result must be present in all proofs of all theorems that can be proved. A glance at this first step establishes a lower bound on the number of distinct variables, as well as a lower bound on the length of formulas, appearing in a proof.
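The observation about the mandatory first step can be illustrated with a small implementation of condensed detachment itself: rename two copies of the axiom apart, unify the antecedent of one with the whole of the other, and return the instantiated consequent. The sketch below (terms as nested tuples, strings as variables; the occurs-check is omitted, which suffices for this illustration) is mine, not OTTER's code.

```python
def is_var(t):
    # Any bare string is a variable; compound terms are tuples.
    return isinstance(t, str)

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Most general unifier of a and b, extending s (no occurs-check)."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def substitute(t, s):
    t = walk(t, s)
    if is_var(t):
        return t
    return (t[0],) + tuple(substitute(x, s) for x in t[1:])

def rename(t, suffix):
    # Rename variables apart by appending a suffix.
    if is_var(t):
        return t + suffix
    return (t[0],) + tuple(rename(x, suffix) for x in t[1:])

def condensed_detachment(major, minor):
    """From major i(p, q) and minor r, renamed apart, return the
    most general instance of q such that p unifies with r."""
    major, minor = rename(major, '1'), rename(minor, '2')
    if major[0] != 'i':
        return None
    s = unify(major[1], minor, {})
    return None if s is None else substitute(major[2], s)
```

Applied to two copies of, say, the axiom i(x,i(y,x)), it returns i(y,i(x,i(z,x))) up to renaming, the conclusion with which every condensed-detachment proof from that axiom must open; its variable count and length give the lower bounds just mentioned.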

And now, for the curious who are ready, eager, for a challenge, I offer several research problems. The first involves the Meredith single axiom. Like Meredith's own step proof, my proofs, including both step proofs discussed earlier and the shortest found, all require the use of formulas relying on seven distinct variables. Formulas in seven distinct variables are not, however, required to derive the Lukasiewicz three-axiom system from the Meredith single axiom. Indeed, by extending the research of my colleague Deepak Kapur, who found a proof requiring no more than six distinct variables in any formula, I was able to find a step proof with that property, and, still later, with a different approach, I found a step proof of the same type. However, the six-variable proofs known to me all rely on the use of double negation. Whether there exists a proof requiring no more than six variables but that is free of double negation is, at this time, an open question.

A quite different research question focuses on the application of the algorithm presented here, coupled with the use of ancestor subsumption, to yield a proof of length strictly less than that of the shortest proof now in hand, beginning with that proof. Also of interest is the application of the methodology to the case in which the Lukasiewicz three-axiom system, or that of Church, for example, is taken as the starting point, with the objective being the Meredith single axiom or the Lukasiewicz letter single axiom.

Turning to many-valued sentential calculus, one may apply the methodology featured here to find proofs of key theorems. Examples include the fifty lemmas found in Rose and Rosser [Rose] and theorems concerned with associativity and distributivity, and possible generalizations of these laws.

Also merited is research that leads to a generalization of the methodology central to this article to problems in which equality plays a key role.

Welcome are theorems coupled with a proof, with the objective of finding a shorter proof. Finally, of paramount interest is a proof, depending solely on condensed detachment and at most as long as the shortest now known, showing that the Meredith single axiom implies the three-axiom system of Lukasiewicz.

Appendix

Input File for Experiment

Trying for an automated proof of Meredith's single axiom

sethyperres

assignmaxweight

assignchangelimitafter

assignnewmaxweight

assignmaxproofs

clearprintkept

setprocessinput

setancestorsubsume

clearbacksub

clearprintbacksub

assignmaxdistinctvars

assignpickgivenratio

assignmaxmem

assignreport

setorderhistory

setinputsosfirst

assignheat

assigndynamicheatweight

weightlistpickandpurge

Following are the theses

weightPiiiixyizyuiizxu

weightPiixiyziiuyixiuz

weightPiixyiiixzuiiyzu

weightPiixiiyzuiiyvixiivzu

weightPiixyiizxiiyuizu

weightPiiinxyzixz

weightPixiiinxxxiiyxx

weightPiixiinyyyiinyyy

weightPixiinyyy

weightPiinxyiziiyxx

weightPiiixiiyzzuiinzyu

weightPiinxyiiyxx

weightPixx

weightPixiiyxx

weightPixiyx

weightPiiixyziyz

weightPixiixyy

weightPiixiyziyixz

weightPiixyiizxizy

weightPiiixiyzuiiyixzu

weightPiiixyxx

weightPiiixyziixuiiuyz

weightPiiixyziizxx

weightPiiixyyiiyxx

weightPiiiixyyziiiyuxz

weightPiiixyziixzz

weightPiixixyixy

weightPiixyiiixzuiiyuu

weightPiiixyziixuiiuzz

weightPiixyiiyizixuizixu

weightPiixiyizuiizxiyizu

weightPiixiyziixyixz

weightPinxixy

weightPiiixyzinxz

weightPiixnxnx

weightPinnxx

weightPixnnx

weightPiixyinnxy

weightPiiinnxyziixyz

weightPiixyiiynxnx

weightPiixiynziizyixnz

weightPiixiyziinzyixz

weightPiixyinynx

weightPiixnyiynx

weightPiinxyinyx

weightPiinxnyiyx

weightPiiinxyziinyxz

weightPiixiyzixinzny

weightPiixiynzixizny

weightPiinxyiixyy

weightPiixyiinxyy

weightPiixyiixnynx

weightPiiiixyyziinxyz

weightPiinxyiixziizyy

weightPiiiixyiiyzzuiinxzu

weightPiinxyiizyiixzy

weightPiixinyzixiiuziiyuz

weightPiixyiizyiinxzy

weightPiinnxyixy

weightPixiyy

weightPinixxy

weightPiinxniyyx

weightPinixyx

weightPinixyny

weightPinixnyy

weightPixinynixy

weightPixiynixny

weightPniixxniyy

Following is Meredith's axiom

weightPiiiiixyinznuzviivxiux

Following is the recursive tail strategy

weighti

endoflist

listusable

condensed detachment

Pixy Px Py

The following disjunctions are known axiom systems

Piqipq Piipiqriipqipr Pinnpp Pipnnp

Piipqinqnp Piipiqriqipr

ANSstepallFrege is dependent

Piqipq Piipiqriqipr Piiqriipqipr

Pipinpq Piipqiinpqq Piipipqipq

ANSstepallHilbert is dependent

Piqipq Piipiqriipqipr Piinpnqiqp

ANSstepallChurch

Piiipqriqr Piiipqrinpr Piinpriiqriipqr

ANSstepallLuka

Piiipqriqr Piiipqrinpr Piisinprisiiqriipqr

ANSstepallWos

Piipqiiqripr Piinppp Pipinpq

ANSstepallLuka

endoflist

listsos

Following is Meredith's axiom

Piiiiixyinznuzviivxiux

endoflist

listpassive

Following are the Lukasiewicz three axioms

Piipqiiqripr ANSstepL

Piinppp ANSstepL

Pipinpq ANSstepL

Following are Desired associativity lemmas

Piiiaiibcciibcciiiiabbcc ANSlemmaa

Piiiiiabbcciiaiibcciibcc ANSlemmab

Following are four distributivity theorems for which proofs are already known

Pinianiibcciinianbniancnianc ANSa

Piiinianbniancniancnianiibcc ANSb

Piinaniinbncncniininabninacninac ANSa

Piniininabninacninacinaniinbncnc ANSb

Two unproven but provable distributivity theorems

Piniinaniibccniibcciiniinanbnbniinanc

ncniinancnc ANSKAdist

Piiiniinanbnbniinancnc niinancnc

niinaniibcc niibcc ANSKAdist

Three old favorites

Piiiabiaciibaibc ANS

Piiiicaibaibciiicbiabiac ANSstar

Piiiabibaiba ANSMV

Following are negations of the theses

Piiiiqriprsiipqs ANSnegth

Piipiqriisqipisr ANSnegth

Piipqiiiprsiiqrs ANSnegth

Piitiiprsiipqitiiqrs ANSnegth

Piiqriipqiirsips ANSnegth

Piiinpqripr ANSnegth

Pipiiinpppiiqpp ANSnegth

Piiqiinpppiinppp ANSnegth

Pitiinppp ANSnegth

Piinpqitiiqpp ANSnegth

Piiitiiqppriinpqr ANSnegth

Piinpqiiqpp ANSnegth

Pipp ANSnegth

Pipiiqpp ANSnegth

Piqipq ANSnegth

Piiipqriqr ANSnegth

Pipiipqq ANSnegth

Piipiqriqipr ANSnegth

Piiqriipqipr ANSnegth

Piiiqiprsiipiqrs ANSnegth

Piiipqpp ANSnegth

Piiiprsiipqiiqrs ANSnegth

Piiipqriirpp ANSnegth

Piiipqqiiqpp ANSnegth

Piiiirppsiiipqrs ANSnegth

Piiipqriiprr ANSnegth

Piipipqipq ANSnegth

Piipsiiipqriisrr ANSnegth

Piiipqriipsiisrr ANSnegth

Piipsiisiqipriqipr ANSnegth

Piisiqipriipsiqipr ANSnegth

Piipiqriipqipr ANSnegth

Pinpipq ANSnegth

Piiipqrinpr ANSnegth

Piipnpnp ANSnegth

Pinnpp ANSnegth

Pipnnp ANSnegth

Piipqinnpq ANSnegth

Piiinnpqriipqr ANSnegth

Piipqiiqnpnp ANSnegth

Piisiqnpiipqisnp ANSnegth

Piisiqpiinpqisp ANSnegth

Piipqinqnp ANSnegth

Piipnqiqnp ANSnegth

Piinpqinqp ANSnegth

Piinpnqiqp ANSnegth

Piiinqpriinpqr ANSnegth

Piipiqripinrnq ANSnegth

Piipiqnripirnq ANSnegth

Piinpqiipqq ANSnegth

Piipqiinpqq ANSnegth

Piipqiipnqnp ANSnegth

Piiiipqqriinpqr ANSnegth

Piinpriipqiiqrr ANSnegth

Piiiipqiiqrrsiinprs ANSnegth

Piinpriiqriipqr ANSnegth

Piisinprisiiqriipqr ANSnegth

Piipriiqriinpqr ANSnegth

Piinnpqipq ANSnegth

Piqipp ANSnegth

Pinippq ANSnegth

Piinqnippq ANSnegth

Pinipqp ANSnegth

Pinipqnq ANSnegth

Pinipnqq ANSnegth

Pipinqnipq ANSnegth

Pipiqnipnq ANSnegth

Pniippniqq ANSnegth

endoflist

listdemodulators

nnnx junk

nnx junk

iixxy junk

iyixx junk

inixxy junk

iynixx junk

ixjunk junk

ijunkx junk

njunk junk

Pjunk T

endoflist

listhot

Pixy Px Py

Following is Meredith's axiom

Piiiiixyinznuzviivxiux

endoflist

Meredith's Proof

EMPTY CLAUSE at sec hyper ANSWERLuka

Length of proof is Level of proof is

PROOF

Pixy Px Py

Piipqiiqripr Piinppp Pipinpq

ANSWERLuka

Piiiiixyinznuzviivxiux

hyper Piiiixyizyiyuiviyu

hyper Piiixinyzuiyu

hyper Piiixxyizy

hyper Pixiyizz

hyper Piiixiyyziuz

hyper Piiixyziyz

hyper Pixiixyizy

hyper Pixiiiyxziuz

hyper Piiiiixinyzuiyuviwv

hyper Piiiiixiiiyzinunvuwivwyivy

hyper Piiixyizinnyuivizinnyu

hyper Piiixyiziiiyuinvnxviwiziiiyuinvnxv

hyper Pixiiyzinnyz

hyper Pixiiiyzuiiizvinunyu

hyper Piixyinnxy

hyper Piiiixyinnxyziuz

hyper Piiixyziiiyuinznxz

hyper Pixinnyy

hyper Piiixyiniiiziuxviwvnuiiiziuxviwv

hyper Piiixinnyyziuz

hyper Piiiiixiyizuviwvzivz

hyper Piiixyiziuiyviwiziuiyv

hyper Pixiiyiyziuiyz

hyper Piixixyizixy

hyper Pixiiyiyziyz

hyper Piixixyixy

hyper Pixinxy

hyper Piiixinnyyzz

hyper Piiiixyinnxyzz

hyper Pinniixinnyyzz

hyper Piinxxiyx

hyper Piiixinniiyinnzzuuviwv

hyper Piinxxx

hyper Piiixniiyinnzznuviuv

hyper Pixniiyinnzznx

hyper Piiixiyniizinnuunyviwv

hyper Piiixiyniizinnuunyvv

hyper Piixyiiizinnuunnxy

hyper Piiixinnyynnizuiiivinnwwnnzu

hyper Piiiiixinnyynnzuviizuv

hyper Piixyiiyzixz

hyper ANSWERLuka

end of proof

A Step Proof Free of Double Negation for the Meredith Single Axiom

EMPTY CLAUSE at sec hyper

ANSstepallLuka

Length of proof is Level of proof is

PROOF

Pixy Px Py

Piipqiiqripr Piinppp Pipinpq

ANSstepallLuka

Piiiiixyinznuzviivxiux

Pixy Px Py

Piiiiixyinznuzviivxiux

hyper Piiiixyizyiyuiviyu

heat hyper Piiixinyzuiyu

heat hyper Piiixxyizy

hyper Pixiyizz

heat hyper Piiixiyyziuz

heat hyper Piiixyziyz

heat hyper Pixiixyizy

hyper Pixiiiyxziuz

hyper Pixiiiyinxzuivu

heat hyper Piiiiixiiiyzinunvuwivwyivy

heat hyper Piiixyiziiiyuinvnxviwiziiiyuinvnxv

hyper Pixiiiyzuiiizvinunyu

heat hyper Piiixyziiiyuinznxz

heat hyper Piiixyiniixziuzniiizvinwnuwiixziuz

hyper Piiixyiniiiziuxviwvnuiiiziuxviwv

hyper Piiixyiniiizxuivunziiizxuivu

heat hyper Piiiiixiyizuviwvzivz

heat hyper Piiiiixiyzuivuyiwy

heat hyper Piiixyiziuiyviwiziuiyv

heat hyper Piiixyiziyuiviziyu

heat hyper Pixiiyiyziuiyz

heat hyper Piixixyizixy

hyper Piiniiixyziuznxiiixyziuz

hyper Pixiiiyinizuvwiuw

hyper Pixiiiyizuviuv

heat hyper Piiiiixiniyzuvizvwivw

heat hyper Piiiiixiyzuizuviwv

heat hyper Piiixyiziniunyviwiziniunyv

heat hyper Pixiiyziniunyz

heat hyper Piixyiniznxy

hyper Pixiiyiyziyz

heat hyper Piixixyixy

hyper Pixiiiyzuinyu

heat hyper Piiixyzinxz

hyper Pixinxy

hyper Piiixiyyzz

hyper Pixiniyinxzu

hyper Piixyinixzy

hyper Pinixniyziniyuz

hyper Pixiniyniyzz

hyper Piiixiniyniyzzuivu

hyper Piiixiniyniyzzuu

heat hyper Piiixnixnyziyz

hyper Pixiyizniiuivzny

hyper Pixiyiznizny

hyper Pixinyniiyznx

heat hyper Piiixiyniiziuynxviwv

heat hyper Pixiyniynx

heat hyper Piiixyziiiuivnynizwz

heat hyper Piiixiynzniiizuivuwiizuivu

heat hyper Piiiixyizyuixu

hyper Pixiyniynizz

heat hyper Pixnixniyy

hyper Piiixiynzniiiiuzviwvviiiuzviwv

heat hyper Piiiiixyziuzviyv

heat hyper Piinxnyiiizxuiyu

hyper Pixiixyy

hyper Piiixiyniynizzuu

hyper Piiixinyniiyznxuu

heat hyper Piixyiinxnizzy

heat hyper Piixyiiixzniyuy

hyper Piinxniyyiixzz

hyper Piiiinxniyyzniiixuuviixuu

heat hyper Piiiixyynxiznx

hyper Piiiixyynxnx

heat hyper Piinxxiyx

hyper Piniixyynx

hyper Piinxxx

hyper Piiixiiyzzuiyu

hyper Pixniiyiixzzniuu

hyper Piixyiiiziunxyy

hyper Piiixiyniiziuuvvv

heat hyper Piixyiiiziuuxy

hyper Piiixiyyizuiiiviwwzu

heat hyper Piiiiixiyyzuviizuv

heat hyper Piixyiiyzixz

hyper ANSstepallLuka

end of proof

References

[McCune] McCune, W., OTTER users guide, Tech. Report ANL, Argonne National Laboratory, Argonne, IL, January.

[Rose] Rose, A., and Rosser, J. B., Fragments of many-valued statement calculi, Trans. AMS.

[Veroff] Veroff, R., Using hints to increase the effectiveness of an automated reasoning program: Case studies, J. Automated Reasoning.

[Wos] Wos, L., The Automation of Reasoning: An Experimenter's Notebook with OTTER Tutorial, Academic Press, New York.

[Wos] Wos, L., Automating the search for elegant proofs, J. Automated Reasoning.

[Wosa] Wos, L., with Pieper, G., A Fascinating Country in the World of Computing: Your Guide to Automated Reasoning, World Scientific.

[Wosb] Wos, L., with Pieper, G., The hot list strategy, J. Automated Reasoning.

[Wosc] Wos, L., Lemma inclusion versus lemma adjunction, AAR Newsletter, July, httpwwwmcsanlgovAARissuejulyindexhtml.