CS4860-2019fa Computational Foundations of Mathematics

Robert L. Constable

September 20, 2019

Abstract

Much has been written about the foundations of mathematics, referencing Aristotle, Frege, Whitehead, Russell, Brouwer, and Church, among many others. Exploring the foundations is a central theme of mathematical logic. These course notes offer a pragmatic account of foundations based on the fact that a substantial amount of mathematics is concerned with computation, both numerical and synthetic (as in Euclidean geometry). This observation alone leads to a clear and simple reason for understanding the computational facts that have rendered mathematics so important for more than two thousand years of human history. Steadily increasing computing power is making the role of logic more important, as we illustrate in this course. There has never been a time when implemented thought tools grounded in mathematics have so fundamentally expanded our capacity to explore mathematics more deeply.

The 20th century advent of computers as proof assistants has re-centered mathematics, setting new goals in which computers using large digital libraries of formalized mathematics are accelerating scientific discovery, finding unexpected connections among results, and solving long-standing open problems. In due course this collaboration between humans and thinking machines will transform one of the oldest academic subjects, logic, into one of the most innovative and intellectually impactful.

The nature of the computer and the mechanisms it implements have created instances of machine intelligence that operate significantly beyond what unaided humans can achieve. Science-fiction visions look less imaginative than modern instances of the computational reality now supplanting them. We propose a research agenda for the role of machine intelligence in mathematical discovery.

1 Historical Context

There is a thread in the foundations of mathematics originating with Plato (c. 428 BC to 348 BC) and his Academy in Athens (387 BC until 529 AD, when it was closed by the Emperor Justinian). Plato believed in truths that we can see with the eye of the mind. That belief underlies the philosophy of Platonism. Aristotle, a student of Plato's Academy in Athens, regarded logic as preliminary to science and philosophy and applicable to all reasoning. He said this about two thousand four hundred years ago. More recently, about one hundred years ago, L.E.J. Brouwer also expressed in detail his belief that mathematics is grounded in our intuitions about numbers, algorithms, space, and most critically, time. Brouwer believed that mathematical objects are constructed mentally, starting with a few fundamental concepts such as natural numbers, points, and lines. The relationship of the progression of time and of the continuity of lines to the real numbers was of special interest to Brouwer. We will examine this relationship in detail because his insights and results convinced us to fully integrate his ideas into the logic of the Nuprl proof assistant.1

One of the most widely known Greek mathematicians is Euclid. He lived in Alexandria circa 300 BCE, where he wrote his Elements [35]. This classic remains widely read and studied, to learn both geometry and logic. What is hard to believe is that we continue to deepen our knowledge of geometry [53, 68, 4, 5, 70, 71], now using proof assistants [54].2

The main thread of this discussion of mathematical foundations is the role of computation and the increasing role of proof assistants in discovery. This thread has a long history that grows more interesting by the day.

I am frequently reminded of a brief comment on the future of (intelligent) machines by George B. Dyson, from the Preface of his excellent book Darwin among the Machines [34]: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines."

2 Integrating Computation and Proof

Greek geometry is based on reasoning about ruler and compass constructions. Remarkably, to this day Greek construction methods and Euclidean geometry are taught around the world. We know from archeological discoveries circa 1900 that the Greeks built sophisticated calculating

1As of 2019, as far as we know, the Nuprl proof assistant is the only one that has fully implemented Brouwer's remarkable insights [61].
2Euclid's Proposition 117 of Book X is essentially that √2 is irrational, but it was not included in Euclid's original text, so it is nowadays omitted. Until this discovery, the Pythagoreans identified numbers with geometry.

devices for their study of astronomy. For example, the Antikythera mechanism is now regarded as the first known "analogue computer". Experts believe that this mechanical device was built during the Hellenistic period. Its design is based on Greek theories of astronomy and mathematics developed during the second century BCE.

2.1 The Automation of Mathematical Reasoning

We can already document ways in which computer science (CS) is fast becoming a broadly transformative discipline. One measure is its popularity among students. The number of CS majors at the top universities has been steadily increasing. At Cornell there were 1,600 majors in 2019, and at MIT there were at least 1,500. These numbers dwarf those of most other disciplines. The high demand for computer scientists is partly responsible for the fact that there are now entire colleges of computer science at major universities such as CMU, Cornell, MIT, and several others.

The unprecedented wealth and influence of leading CS companies such as Amazon, Apple, Facebook, Google, HP, IBM, Intel, and Microsoft, among others, is itself becoming an object of study [46, 45]. We know that current Chinese leaders aspire that their country will pioneer the best AI technology, from smart phones to smart drones. Other nations have similar goals, especially the United States and the EU, the pioneers of the discipline. This global-scale ambition will attract large investments, and it will increase the need for creative, well-educated computer scientists. The required investments will have unforeseeable impacts on mathematics, science, and the world economy.

Recent discoveries in AI, machine learning, and automated reasoning [69] already demonstrate that we can accomplish tasks that not long ago seemed beyond the reach of technology. Our boldest ambitions are already pointing the way to realizable “science fiction.”

In contrast to the physical sciences, the case for the importance of computer science might seem transitory, perhaps driven by plentiful high-paying jobs. Is there any computer science discovery comparable to those that unlocked the secrets of the atom and our understanding of the structure of the universe or the workings of the human brain? Are the mathematical problems posed by computer science any more compelling than the long-standing open problems in pure mathematics, such as the Riemann Hypothesis?3 We will examine this question further in the article.

The P = NP problem posed by Cook and Levin [30, 60, 47] stands out as central to computer

3We know that a proof of the Extended Riemann Hypothesis will shed light on one version of a central open problem of computing theory, does P = NP?

science, yet easy to grasp. A solution has been beyond reach for almost fifty years, although many of the best computer scientists have tried to solve it. This problem has become emblematic of computer science as a discipline. That is, the problem is mathematically clear and precise. A positive solution, that P is equal to NP, would have far-reaching practical consequences. It is almost certain that any solution will require transformative new ideas. But what if a machine solves the problem with an algorithm and a formal proof that we do not completely understand? What if that proof assistant has a flawless record and has solved less important open problems? What if the proof has also been checked by a second proof assistant that has never made a mistake and has solved less well-known open problems?

What other compelling problems and questions will continue to draw ambitious thinkers to com- puter science? We look at a few exciting possibilities in this short article and present a new result.

2.2 Numerical Foundations

Reasoning about numbers is foundational and computational. Leopold Kronecker captured that idea with his 1886 remark, still often repeated today. He said essentially "God made the integers, all else is people's work." It might sound better in his original German: "Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk." We learn early that we can add up a list of numbers in any order and the result will be the same, e.g. 3, 5, 7, 10, 15 sums to 40, as does 15, 3, 7, 10, 5. We know the distributive law, a × (b + c) = (a × b) + (a × c), and the commutative laws, (a + b) = (b + a), and so forth. It is rare that students know how to compute with irrational numbers, such as multiplying e by π.

(2.7182818284590452353602874713527... × 3.141592653589793238 ...) = 8.53973422267 ....
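These elementary laws are easy to check mechanically. The following sketch (plain Python, with variable names of our own choosing) verifies order-independence of addition and the distributive and commutative laws on the sample values above:

```python
# Addition is order-independent: both lists sum to 40.
xs = [3, 5, 7, 10, 15]
ys = [15, 3, 7, 10, 5]
assert sum(xs) == sum(ys) == 40

# The distributive and commutative laws, checked on sample values.
a, b, c = 2, 3, 4
assert a * (b + c) == (a * b) + (a * c)
assert (a + b) == (b + a)
```

Of course, checking sample values is not a proof of the laws; a proof assistant is what lets us establish them for all numbers.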

We will see later in the course why it might be more appropriate to say this in German: "Die Fähigkeit zu rechnen hat uns der liebe Gott gegeben, ..." Professor Weinberger told me that this translates more or less as "God gave us the ability to compute."

People are much less familiar with real-number arithmetic because it involves computing with unbounded sequences of digits. On the other hand, many students learn that "there are more real numbers than rational numbers" in the sense that we can enumerate the rational numbers, but we cannot enumerate the real numbers because there are "too many".

Less widely known is that we can explain logic and the foundations of mathematics computationally. As a very simple example, here are two ways of thinking about logical implication, symbolically (P ⇒ Q) for propositions P and Q. We read this as "P implies Q". Typically we

explain implication by looking at simple examples of propositions, such as (P & Q) ⇒ Q. In Boolean logic we explain implication by talking about the truth values of P and Q. An implication is "true" if P is false. Indeed, the logical proposition False implies anything. The implication (P ⇒ Q) is also true if Q is true, regardless of the truth value of P.
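The Boolean explanation amounts to a small truth-table computation. A sketch, using a helper function `implies` of our own naming:

```python
def implies(p: bool, q: bool) -> bool:
    """Boolean implication: false only when p is true and q is false."""
    return (not p) or q

# False implies anything, and anything implies True.
assert implies(False, False) and implies(False, True)
assert implies(True, True)
# The only falsifying case: true antecedent, false consequent.
assert not implies(True, False)
```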

Brouwer makes a case in his Cambridge Lectures for his intuitionistic views on the role of logic. He says that the first act of intuitionism is completely separating mathematics from language and hence from logic. He says that intuitionistic mathematics is an essentially language-less activity of the human mind having its origins in the perception of time. On the other hand, he played a significant role in the history of logic by defining intuitionistic logic. One of the major pillars of this logic is the rejection of the law of excluded middle, the claim that for any proposition P, we know P ∨ ¬P. He claims that we know ¬¬(P ∨ ¬P). It is quite easy to see this using intuitive logic. One can even say that the intuitionistic laws of logic, studied by Heyting and Kleene, are based on our understanding of evidence. When we understand the forms of evidence, we can then justify a rich logic by intuitive reasoning. That reasoning justifies a rich logical system studied in detail by Heyting [50] and later by Kleene and Vesley [57]. Kleene presented these ideas in his widely read logic textbook Introduction to Metamathematics [55, 56].

Brouwer also wrote about a group called the pre-intuitionists: Poincaré, Borel, and Lebesgue. He says that this group thought that it was acceptable to use classical logic to study the continuum because they had an impoverished view of the continuum. Brouwer was also critical of Hilbert's views; Hilbert believed that in meta-mathematics it was important to use constructive reasoning [52]. These views led Gödel to study the limits of Hilbert's approach. In doing this, Gödel was led to his famous incompleteness theorems [39].

Less commonly but more “computationally” we say that if we have evidence p for why we believe P and evidence q for why we know Q, then we can package evidence for P &Q as an ordered pair, < p, q >. We sometimes say that this method is based on evidence semantics [26]. To have evidence for an implication P ⇒ Q is to know an algorithmic method f to convert evidence p for P into evidence f(p) for Q.
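A minimal sketch of this evidence semantics in Python, using tuples for ordered pairs and ordinary functions for algorithmic methods; the helper names are our own, not Nuprl's:

```python
# Evidence for P & Q is packaged as an ordered pair <p, q>.
def and_intro(p, q):
    return (p, q)

# Evidence for an implication is a method converting evidence to evidence.
# Example: evidence for (P & Q) => Q is the second projection.
def and_elim_right(pq):
    p, q = pq
    return q

pq = and_intro("evidence-for-P", "evidence-for-Q")
assert and_elim_right(pq) == "evidence-for-Q"
```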

We know (P & Q) ⇒ (Q ∨ P) based on our intuitive understanding of disjunction ("or"). We use tagged evidence, inl(p) and inr(q), to define disjunction. The tag inr stands for "in right" and inl stands for "in left." Although these tags come from English, they are used essentially universally: around the world people write "inl" for "in left" and "inr" for "in right."
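The tagged evidence can be sketched as follows; encoding inl and inr as tagged tuples is an illustrative assumption of this sketch, not Nuprl's actual representation:

```python
# Tagged evidence for a disjunction.
def inl(x):  # inject evidence into the left disjunct
    return ("inl", x)

def inr(x):  # inject evidence into the right disjunct
    return ("inr", x)

# A realizer for (P & Q) => (Q v P): project q from the pair <p, q>
# and inject it on the left, since Q is the left disjunct of Q v P.
def and_implies_or(pq):
    p, q = pq
    return inl(q)

assert and_implies_or(("p", "q")) == ("inl", "q")
```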

Understanding logical negation is more subtle. To say that 5 ≤ 2 is false, we say that if we had evidence for it, there would be a mistake, and we could convert that evidence into evidence for anything. To make this explanation compact, we say that we know there is no evidence for False by the very meaning of a "false" assertion: there is no evidence for it, just as there is no element in the empty type. We give the empty type

a name, False. It is good to have a single character for this to keep expressions compact. So we also denote the empty type by ⊥. We call this symbol "bottom." For symmetry, we introduce a type for True. It is simply a type with one element; we use ? for that element.

With these definitions, we say that a proposition P is false if we have evidence f for (P ⇒ ⊥). The explanation is that if we had evidence p for P, then we could use f to compute evidence f(p) for ⊥, and there is no such evidence. So there can be no evidence for P.

We notice a subtlety in dealing with ⊥, the empty type. We accept the logical rule that if we have an assumption that we know cannot have evidence, e.g. say we assume ⊥, then we are allowed to conclude anything. The logical rule is ⊥ ⇒ A for any proposition A. In this simple context we can easily prove ¬¬⊥ ⇒ ⊥. But when we try to generalize this method to show ¬¬P ⇒ P, the proof fails, and we see that there can be no evidence for this simple conjecture. Can we explain intuitively why this simple proposition is not provable? This calls for the ability to show that there is no computation of a certain type. Next we discuss a logical way of doing that.
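The proof of ¬¬⊥ ⇒ ⊥ corresponds to the lambda term λf. f(λx. x): given f of type (⊥ ⇒ ⊥) ⇒ ⊥, apply it to the identity on ⊥. A Python transcription of this term (the types are purely notional, since ⊥ has no elements and the function can never receive genuine evidence):

```python
# Evidence for ~P is a function from evidence for P to evidence for the
# empty type; it can never actually be applied, since Bottom is empty.
# The realizer for ~~Bottom => Bottom is the term  \f. f(\x. x).
identity = lambda x: x
nn_bottom_implies_bottom = lambda f: f(identity)

# The term is a perfectly good program even though no genuine
# evidence exists to run it on.
assert callable(nn_bottom_implies_bottom)
```

No analogous term exists for ¬¬P ⇒ P in general, which is what the Beth and Kripke methods discussed below make precise.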

Disjunction: The next most complex logical construct is the disjunction, "P or Q," symbolically, P ∨ Q. Evidence for this kind of proposition must tell us two things: for which disjunct, P or Q, do we have evidence; and what is that evidence. To accomplish the first, we use a tag on the evidence. It could be a tag such as "left" for the P disjunct and "right" for Q. Instead we use notation that has become standard, inl and inr, standing for "inject into the left side" and "inject into the right side." The objects being injected are the evidence for the proposition selected, say inl(p) for P and inr(q) for Q. The type of this kind of evidence is called a disjoint union type, and it is denoted [P] + [Q].4

Implication: Implication is associated with our ability to compute, to take evidence in one form and convert it to evidence in another form. This ability is one of our mental tools for making constructions. From this ability we come to understand an important constructive type, the type of effective operations taking evidence of type S and converting it to evidence of type T. We denote this type of all the effective operations from type [S] to type [T ] as [S] → [T ]. We will use the lambda calculus notation for effective operations.5 But first we offer a simpler notation familiar from any mathematics course, using a function name such as f and writing an equation to define it, such as f(x) = x for the identity function, f(x) = 0 for a constant function, add(< x, y >) = x + y for addition, and so forth.
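The equational style and the lambda-calculus style define the same operations; a sketch with hypothetical names, putting the two notations side by side:

```python
# Named-equation style, as in an ordinary mathematics course ...
def f_id(x): return x            # f(x) = x, the identity function
def f_zero(x): return 0          # f(x) = 0, a constant function
def add(pair):                   # add(<x, y>) = x + y
    x, y = pair
    return x + y

# ... and the same operations written in lambda-calculus style.
id_l = lambda x: x
zero_l = lambda x: 0
add_l = lambda pair: pair[0] + pair[1]

assert f_id(7) == id_l(7) == 7
assert f_zero(9) == zero_l(9) == 0
assert add((3, 4)) == add_l((3, 4)) == 7
```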

Two researchers, Saul Kripke [59] and Evert Beth [7, 9], developed methods of showing that certain logical expressions have no evidence. Underwood [72] formally proved some of Kripke’s

4This notion is the dual of “twoity,” and the human mind grasps this duality as a general concept for how twoity can be used, either to combine or to separate. 5This notation for operations is due to Church [24] who simplified the type theory of Russell and of Russell and Whitehead in Principia Mathematica [67, 76].

results in Nuprl.

The logic we just discussed is often called intuitionistic Propositional Logic, abbreviated iPC. Its computational meaning was defined by Brouwer [15] and formalized by Kleene [55] much later. The ideas were discussed in detail by early intuitionists such as Heyting [50] and Wim Veldman [75], and later by Per Martin-Löf [62] and Peter Aczel [1].

2.3 Distributed Reasoning

The reasoning and problem solving process can involve several people collaborating. It is inter- esting to formalize this mode of reasoning as well. Our experience in reasoning about distributed systems provides a basis for exploring this topic. Here we discuss a formal framework for explor- ing this topic and prove a suggestive result.

2.4 The computational complexity of constructive proofs

Typically it is possible to prove a theorem in several different ways. It is interesting to measure this characteristic of a proposition, say its diversity index. Moreover, these proofs might differ in their computational complexity or in their proof complexity, say how many steps are required in the proof. These measures could be used to select a preferred proof. We might have a measure of conceptual complexity that would be very informative, and that would depend on the organization of a theory, choosing the order in which concepts are introduced.

Here is a simple example of this diversity. Consider this simple proposition:

Proposition 1: ((A ⇒ B) ∨ (A ⇒ C)) ⇒ (A ⇒ (B ∨ C)).

There are several ways to prove this. For example we could assume ((A ⇒ B) ∨ (A ⇒ C)) and continue the proof by cases. In both cases we prove (A ⇒ (B ∨ C)) by assuming A and using the appropriate disjunct. Alternatively we could first assume ((A ⇒ B) ∨ (A ⇒ C)) and A, leaving the goal (B ∨ C). We can reach it by cases on the disjunction: applying the chosen implication to A yields either B or C, from which the conclusion follows immediately.
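The two strategies yield two different programs (realizers). A Python sketch, encoding disjunction evidence with the inl and inr tags of Section 2.2; the names proof1 and proof2 are our own:

```python
def inl(x): return ("inl", x)
def inr(x): return ("inr", x)

# First strategy: case on the disjunction, then assume A.
def proof1(d):
    tag, fn = d
    if tag == "inl":                  # we hold evidence fn : A => B
        return lambda a: inl(fn(a))
    return lambda a: inr(fn(a))       # we hold evidence fn : A => C

# Second strategy: assume A first, then case on the disjunction.
def proof2(d):
    def from_a(a):
        tag, fn = d
        return inl(fn(a)) if tag == "inl" else inr(fn(a))
    return from_a

f = lambda a: a + 1                   # stand-in evidence for A => B
assert proof1(inl(f))(10) == ("inl", 11)
assert proof2(inl(f))(10) == ("inl", 11)
```

The two terms compute the same answers here, yet they are syntactically distinct proofs, which is exactly the kind of diversity a complexity measure on proofs would have to compare.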

An interesting question to consider is how many different proofs there are for this theorem. Another good question is to ask which of these two possible proofs is "most efficient." To consider the efficiency question we would need to have a measure of the computational complexity of the proofs we can build. This would require that we create a complexity measure for the functional

programs that arise from our proofs.

2.5 Motivations for Beth and Kripke semantics

Beth and Kripke semantics give us a way to rigorously establish that a proposition expressible in iPC is not provable. If the proposition cannot be proved, then we know there is no program for it that uses only the realizers for the propositional operators. This method provides a class of problems that are unsolvable, just as we know that it is not possible to solve the halting problem for Turing machines or any equivalently universal programming formalism. We will see in the next subsection that we cannot achieve this kind of result for programs arising from intuitionistic First-Order Logic (iFOL). Church’s theorem tells us that iFOL is not decidable [23].
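A concrete instance of the Kripke method, showing that ¬¬P ⇒ P is not provable in iPC, can be checked by a short program. The two-world model and the forcing clause for implication are standard; the encoding below is our own sketch:

```python
# Two worlds: world 0 "sees" world 1; the atom P holds only at world 1.
above = {0: [0, 1], 1: [1]}           # worlds reachable from each world
atoms = {0: set(), 1: {"P"}}          # a monotone valuation of atoms

def forces(w, phi):
    """Kripke forcing for atoms, Bottom, and implication."""
    if phi[0] == "atom":
        return phi[1] in atoms[w]
    if phi[0] == "bot":
        return False
    # w forces A => B iff every reachable world forcing A also forces B.
    _, a, b = phi
    return all(forces(v, b) for v in above[w] if forces(v, a))

P = ("atom", "P")
NOT_P = ("imp", P, ("bot",))
NOT_NOT_P = ("imp", NOT_P, ("bot",))

assert forces(0, NOT_NOT_P)           # world 0 forces ~~P ...
assert not forces(0, P)               # ... but it does not force P,
# so ~~P => P fails at world 0 and is not intuitionistically valid.
```

Since provable propositions are forced at every world of every Kripke model, a single countermodel like this one establishes unprovability.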

2.6 First-Order Logic

Since Frege [42], First-Order Logic, FOL, has been the formalism used to axiomatize fundamental mathematical theories. The key features of FOL are its two quantifiers. The universal quantifier asserts that a property P(x) holds for all elements x of a type or set T; we write this as ∀x : T.P(x). To assert that at least one element of the set or type T exists and that the property P is true of it, we use the existential quantifier and write ∃x : T.P(x). There is an intuitionistic version of FOL that we call iFOL. In iFOL, the logical connectives are defined intuitionistically. The existential quantifier of iFOL requires that the witness of this quantifier be an object we can actually construct. We use the same syntax as for FOL, but the meaning of the logical operators in iFOL is intuitionistic. So to know ∀x : T.P(x) we need to create or exhibit a function f that takes any element t of T and produces evidence f(t) for P(t).
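The evidence reading of the quantifiers can be sketched with a hypothetical example: ∀n : ℕ. ∃m : ℕ. m > n, realized by a function pairing each n with the witness n + 1. The Boolean in the second component stands in for genuine proof evidence:

```python
# Evidence for "forall x:N. exists y:N. y > x" is a function taking each
# n to a pair <witness, evidence>.
def forall_exists_greater(n: int):
    witness = n + 1
    return (witness, witness > n)     # the witness and checked evidence

w, e = forall_exists_greater(41)
assert w == 42 and e is True
```

This is precisely what distinguishes the iFOL existential from the classical one: the realizer must actually compute the witness.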

Classical first-order logic, FOL, is used to formalize such theories as the theory of arithmetic called Peano Arithmetic, PA [65]. There is an intuitionistic version based on iFOL called Heyting Arithmetic, HA [50]. For classical real analysis, Real Analysis, RA, the classical axiomatizations are more delicate and can go beyond FOL. For intuitionistic analysis, (INT), the axiomatizations are also more delicate. For classical set theory, FOL is the standard logic, and the standard theory is Zermelo-Fraenkel set theory with the Axiom of Choice, (ZFC) [6, 63]. There is an intuitionistic set theory, IZF, based on iFOL [25].

We can see from the above summary of classical theories and their intuitionistic or constructive counterparts that it is important to define an intuitionistic version of FOL, say iFOL, and define the corresponding constructive or fully intuitionistic theories.

3 From Proof Assistants to Thought Assistants

Computer science is making considerable progress in providing tools that assist mathematicians in solving open problems and discovering new ones. Proof assistants are a critical technology in this effort. That was well understood by the Fields Medalist Vladimir Voevodsky, who turned to computer science to help validate his ideas in homotopy theory. In particular he sought help in formally proving what he called his Univalence Axiom. That result was established at Cornell by Mark Bickford using the Nuprl proof assistant [10, 11], with help from Voevodsky, who visited Cornell to work with Dr. Bickford on this question. The PRL research group has been enriching and applying the Nuprl proof assistant since 1985 [29], for over thirty-four years, with at least 30 PhD students working on it along with a strong technical support staff who continue to enhance and upgrade Nuprl.

Proof assistants help researchers accomplish some of the most conceptually difficult tasks arising in mathematics and science, especially in computer science. Nuprl has been used to formally define these tasks and to help accomplish them. Researchers have used Nuprl to solve open problems in computing theory and mathematics and to create provably correct versions of some of the most conceptually complex algorithms needed in practice, e.g. the Paxos consensus protocol [64].

Some key references: [29, 32, 33, 31, 2, 51, 50, 73, 67, 66, 76, 9, 8, 7, 59, 15, 19, 18, 16, 17, 20, 14, 48, 49, 13, 21, 12, 36, 37, 38, 40, 41, 43, 44, 3, 22, 27, 28, 58].

References

[1] Peter Aczel. Notes on constructive type theory, May 1997. Unpublished manuscript.

[2] Aristotle. Organon. Approx. 350 BCE.

[3] William Y. Arms. Digital Libraries. MIT Press, Cambridge, Massachusetts, 2000.

[4] Michael J. Beeson. Foundations of Constructive Euclidean Geometry. 2009.

[5] Michael J. Beeson. A constructive version of Tarski’s geometry. Annals of Pure and Applied Logic, 2015.

[6] Paul Bernays and A. A. Fraenkel. Axiomatic Set Theory. North-Holland Publishing Company, Amsterdam, 1958.

[7] Evert W. Beth. Semantical considerations on intuitionistic mathematics. Indagationes Mathematicae, 9:572–577, 1947.

[8] Evert W. Beth. Semantic construction of intuitionistic logic. Koninklijke Nederlandse Akademie van Wetenschappen, Mededelingen, Nieuwe Reeks, 19(11):357–388, 1957.

[9] Evert W. Beth. The Foundations of Mathematics. North-Holland, Amsterdam, 1959.

[10] Mark Bickford. Formalizing category theory and presheaf models of type theory in Nuprl. arXiv e-prints, 2018.

[11] Mark Bickford, Fedor Bogomolov, and Yuri Tschinkel. Vladimir Voevodsky – Work and Destiny. EMS Newsletter December 2017, pages 30–31, 2017.

[12] E. Bishop. Foundations of Constructive Analysis. McGraw Hill, NY, 1967.

[13] E. Bishop and D. Bridges. Constructive Analysis. Springer, New York, 1985.

[14] L.E.J. Brouwer. Over de grondslagen der wiskunde (On the foundations of mathematics). PhD thesis, Amsterdam and Leipzig, 1907.

[15] L.E.J. Brouwer. Intuitionism and formalism. Bull Amer. Math. Soc., 20(2):81–96, 1913.

[16] L.E.J. Brouwer. Über Definitionsbereiche von Funktionen. Mathematische Annalen, 97:60–75, 1927.

[17] L.E.J. Brouwer. Contradictority of elementary geometry. In Arend Heyting, editor, L.E.J. Brouwer, Collected Works, pages 497–498. North-Holland, 1975.

[18] L.E.J. Brouwer. Brouwer’s Cambridge lectures on intuitionism. Cambridge University Press, 1981.

[19] L.E.J. Brouwer. Intuitionism and formalism. In P. Benacerraf and H. Putnam, editors, Philosophy of mathematics: selected writings. Cambridge University Press, 1983.

[20] L.E.J. Brouwer and A. Heyting. Collected Works: Philosophy and foundations of mathe- matics, edited by A. Heyting. Collected Works. North-Holland Pub. Co., 1976.

[21] W. Buchholtz et al. Iterated inductive definitions and subsystems of analysis. Recent Proof-Theoretical Studies, 897:16–77, 1981.

[22] James Caldwell. Logical aspects of digital mathematics libraries. Talk given at the First International Workshop on Mathematical Knowledge Management, Schloss Hagenberg, September 2001.

[23] Alonzo Church. A note on the Entscheidungsproblem. Journal of Symbolic Logic, 1:40–41, 1936.

[24] Alonzo Church. A formulation of the simple theory of types. The Journal of Symbolic Logic, 5:55–68, 1940.

[25] Robert Constable and Wojciech Moczydlowski. Extracting programs from constructive HOL proofs via IZF set-theoretic semantics. In Proceedings of the 3rd International Joint Conference on Automated Reasoning (IJCAR 2006), LNCS 4130, pages 162–176. Springer, New York, 2006.

[26] Robert L. Constable. The semantics of evidence (also appeared as Assigning Meaning to Proofs). Constructive Methods of Computing Science, F55:63–91, 1989.

[27] Robert L. Constable. Building interactive libraries of formal algorithmic knowledge. Invited talk given at the MURI Kickoff Meeting, Arlington, Virginia, July 2001.

[28] Robert L. Constable. Logical foundations for digital libraries of formal mathematics. Invited talk given to the Association for Symbolic Logic, Muenster, Germany, August 2002.

[29] Robert L. Constable, Stuart F. Allen, H. M. Bromley, W. R. Cleaveland, J. F. Cremer, R. W. Harper, Douglas J. Howe, T. B. Knoblock, N. P. Mendler, P. Panangaden, James T. Sasaki, and Scott F. Smith. Implementing Mathematics with the Nuprl Proof Development System. Prentice-Hall, NJ, 1986.

[30] S. A. Cook. The complexity of theorem proving procedures. In Proceedings of the 3rd ACM Symposium on Theory of Computing, pages 151–158. ACM, New York, 1971.

[31] Michael Dummett. The philosophical basis of intuitionistic logic. In H.E. Rose and J. Shepherdson, editors, Logic Colloquium '73, pages 5–40, Amsterdam, 1975. North-Holland.

[32] Michael Dummett. Frege: Philosophy of Mathematics. Harvard University Press, Cambridge, MA, 1991.

[33] Michael Dummett. Elements of intuitionism. Oxford logic guides 39. Clarendon Press, 2nd edition, 2000.

[34] George B. Dyson. Darwin Among the Machines: The Evolution of Global Intelligence. Perseus Books, 1997.

[35] Euclid. Elements. Dover, approx 300 BCE. Translated by Sir Thomas L. Heath.

[36] Solomon Feferman. Formal theories for transfinite iterations of generalized inductive defi- nitions and some subsystems of analysis. In Proceedings of the Conference on Intuitionism and Proof Theory, pages 303–326, North-Holland, Buffalo, NY, 1970.

[37] Solomon Feferman. A language and axioms for explicit mathematics. In J. N. Crossley, editor, Algebra and Logic, volume 480 of Lecture Notes in Mathematics, pages 87–139. Springer, Berlin, 1975.

[38] Solomon Feferman. Predicativity. In Stewart Shapiro, editor, The Oxford Handbook of Philosophy of Mathematics and Logic, pages 590–624. Oxford University Press, Oxford, 2005.

[39] Solomon Feferman et al., editors. Kurt Gödel: Collected Works, volume 1. Oxford University Press, Oxford, Clarendon Press, New York, 1986.

[40] M. Fitting. Intuitionistic Model Theory and Forcing. North-Holland, Amsterdam, 1969.

[41] R. W. Floyd. Assigning meanings to programs. In Proceedings AMS Symposium Appl. Math., pages 19–31, Providence, RI, 1967.

[42] Gottlob Frege. Begriffsschrift, a formula language, modeled upon that for arithmetic for pure thought. In van Heijenoort [74], pages 1–82.

[43] Harvey Friedman. Set theoretic foundations for constructive analysis. Annals of Math, 105:1–28, 1977.

[44] Harvey Friedman. Classically and intuitionistically provably recursive functions. In D. S. Scott and G. H. Muller, editors, Higher Set Theory, volume 699 of Lecture Notes in Mathe- matics, pages 21–28. Springer-Verlag, 1978.

[45] J. H. Gallier. Logic for Computer Science, Foundations of Automatic Theorem Proving. Harper and Row, NY, 1986.

[46] Scott Galloway. The Four. Corgi Books, London, 2017.

[47] Michael R. Garey and David S. Johnson. Computers and Intractability. W.H. Freeman and Company, San Francisco, 1979.

[48] R. L. Goodstein. Recursive Number Theory. North-Holland, Amsterdam, 1957.

[49] R. L. Goodstein. Recursive Analysis. North-Holland, Amsterdam, 1961.

[50] A. Heyting. Intuitionism, An Introduction. North-Holland, Amsterdam, 1966.

[51] Arend Heyting. Mathematische Grundlagenforschung, Intuitionismus, Beweistheorie. Springer, Berlin, 1934.

[52] David Hilbert. The foundations of mathematics. In van Heijenoort [74], pages 464–479. Reprint of 1928 paper.

[53] David Hilbert. Foundations of Geometry. Open Court Publishing, 1992.

[54] Ariel Kellison, Mark Bickford, and Robert Constable. Implementing Euclid’s straightedge and compass constructions in type theory. Annals of Mathematics and Artificial Intelligence, Sep 2018.

[55] S. C. Kleene. Introduction to Metamathematics. D. Van Nostrand, Princeton, 1952.

[56] S. C. Kleene. Introduction to Metamathematics. North-Holland, Amsterdam, 2005.

[57] S. C. Kleene and R. E. Vesley. Foundations of Intuitionistic Mathematics. North-Holland, Amsterdam, 1965.

[58] Morris Kline. Mathematical Thought from Ancient to Modern Times. Oxford University Press, 1972.

[59] S.A. Kripke. Semantical analysis of intuitionistic logic. In J.N. Crossley and M.A.E. Dummett, editors, Formal Systems and Recursive Functions, pages 92–130. North-Holland, Amsterdam, 1965.

[60] L. A. Levin. Universal search problems. Problemy Peredaci Informacii 9, pages 115–116, 1973. (in Russian). English Translation in Problems of Information Transmission 9, pages 265–266, 1973.

[61] Mark Bickford, Vincent Rahli, and Robert Constable. The Good, the Bad and the Ugly. In Thirty-Second Annual ACM/IEEE Symposium on Logic in Computer Science, pages 266–279, 2017.

[62] Per Martin-L¨of. Notes on Constructive Mathematics. Almqvist & Wiksell, Uppsala, 1970.

[63] J. Myhill. Some properties of intuitionistic Zermelo-Fraenkel set theory. In Cambridge Sum- mer School in Mathematical Logic, volume 337 of Springer Lecture Notes in Mathematics, pages 206–231. Springer-Verlag, 1973.

[64] Nicolas Schiper, Vincent Rahli, Robbert Van Renesse, Mark Bickford, and Robert L. Constable. ShadowDB: A replicated database on a synthesized consensus core. In Eighth Workshop on Hot Topics in System Dependability (HotDep), 2012.

[65] G. Peano. Selected Works. University of Toronto Press, Toronto, 1973. Edited by H.C. Kennedy.

[66] Bertrand Russell. Mathematical logic as based on a theory of types. Am. J. Math., 30:222–262, 1908.

[67] Bertrand Russell. The Principles of Mathematics. Cambridge University Press, Cambridge, 1903.

[68] W. Schwabhäuser, W. Szmielew, and A. Tarski. Metamathematische Methoden in der Geometrie. Springer-Verlag, Berlin, 1983.

[69] Bart Selman. Compute-intensive methods in artificial intelligence. Annals of Mathematics and Artificial Intelligence, 28:35–41, 2000.

[70] A. Tarski. A Decision Method for Elementary Algebra and Geometry. University of California Press, Berkeley, 1951.

[71] Alfred Tarski and Steven Givant. Tarski’s system of geometry. Bulletin of Symbolic Logic, 5(2):175–214, 1999.

[72] Judith L. Underwood. A constructive proof for the intuitionistic propositional calculus. Technical Report 90–1179, Cornell University, 1990.

[73] Dirk van Dalen. L.E.J. Brouwer. Springer, 2013.

[74] J. van Heijenoort, editor. From Frege to G¨odel: A Source Book in Mathematical Logic, 1879–1931. Harvard University Press, Cambridge, MA, 1967.

[75] W. Veldman. An intuitionistic completeness theorem for intuitionistic predicate calculus. Journal of Symbolic Logic, 41:159–166, 1976.

[76] A.N. Whitehead and B. Russell. Principia Mathematica, volumes 1–3. Cambridge University Press, 2nd edition, 1925–27.
