
The Fixpoint Combinator in Combinatory Logic – A Step towards Autonomous Real-time Testing of Software?

Combinatory Logic is an elegant and powerful logical theory that is used in computer science as a theoretical model of computation. Its algebraic structure supports self-application and is Turing-complete. However, contrary to the Lambda Calculus, it untangles the problem of substitution, because bound variables are eliminated by inserting specific terms called Combinators. It was introduced by Moses Schönfinkel (Schönfinkel, 1924) and Haskell Curry (Curry, 1930). Combinatory Logic uses just one algebraic operation, namely combining two terms, yielding another valid term of Combinatory Logic. Terms in models of Combinatory Logic look like some sort of assembly language for computation. A Neural Algebra, modeling the way we think, constitutes an interesting model of Combinatory Logic. There are other models based on the Graph Model (Engeler, 1981), for instance in software testing. This paper investigates what Combinatory Logic contributes to modern software testing.

Keywords: combinatory logic, combinatory algebra, autonomous real-time testing, software testing, artificial intelligence


The Organon

Aristotle’s legacy regarding formal logic has been transferred to us in a collection of his thoughts compiled into a set of six books called the Organon around 40 BCE by Andronicus of Rhodes or others among his followers (Aristoteles, 367-344 BCE). The Organon with its syllogisms was the dominant form of Western logic until 19th-century advances in mathematical logic.

Engeler recently noted the apparent lack of something that we today consider fundamental for axiomatic geometry: relations. The question is why. Aristotle had the means of developing this concept as well; however, he chose not to do so.

Aristotle had the means of combining predicates. It is therefore possible to construct an adequate model for Aristotle’s syllogisms based on the structures of Combinatory Logic. Relations then become part of the model. Engeler shows that Aristotle had no need for relations because the main model he used – Euclidean Geometry – does not require relations (Engeler, 2020).


Introduction

A model of Combinatory Logic is an algebraic structure implementing combinators in a non-trivial way. Such a model is called a Combinatory Algebra.


As a minimum, it contains implementations for the 퐈 combinator (identity), the 퐊 combinator for extracting parts of another term, and the 퐒 combinator that substitutes parts of a term by some other combinator. Another famous combinator is the Fixpoint Combinator 퐘, explaining recursion and possibly infinite iteration. These specific combinators are represented as Constants in the language of Combinatory Logic, whereas other terms may contain Variables. We refer to these general terms as Combinatory Terms (Bimbó, 2012, p. 2).

Given a problem as a term 푋 in some suitable model, what should be its solution? A problem is something that displays specific behavior, sometimes unpredictable, and produces specific effects, often unwanted. Also, a certain persistence is part of a problem; problems that disappear by themselves are not particularly exciting.

A fixpoint 퐘푋 with the property that 퐘푋 = 푋(퐘푋), for any term 푋 of Combinatory Logic, is thus something like a solution to the problem. You can apply that solution as many times as deemed necessary and the problem remains stable and confined.

When we encounter the problem of how to test a piece of software 푋, and we have a test suite 퐘푋 with the fixpoint property, it looks like a solution to our testing problem. Since we can measure tests by counting their test size, we can assess what minimal effort for a test means, and thus can find an optimum.

The key to Combinatory Logic is that “everything is a function” – and indeed, a unary function. Whenever anything can be understood as a function depending on two variables – 푓(푥, 푦) – it is an application of a unary function 푔 = 푓푥 to a variable 푦. Thus, 푓(푥, 푦) = 푔(푦) = (푓푥)푦 = 푓푥푦, always assuming association to the left. This is known as Currying, converting n-ary functions into a sequence of unary functions.


Combinatory Algebras

Combinatory Algebras are models of Combinatory Logic (Curry & Feys, 1958) and (Curry, et al., 1972). Such algebras are closed under a combination operation 푀푁 for all terms 푀, 푁 of the algebra, and two distinct Combinators 퐒 and 퐊 can be defined with the following properties:

퐊푃푄 = 푃 (1)

and

퐒푃푄푅 = 푃푅(푄푅) (2)

where 푃, 푄, 푅 are elements in the combinatory algebra¹.

¹ The use of variables named 푃, 푄, 푅 is borrowed from Engeler (Engeler, 2020).
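To make the behavior of 퐊 and 퐒 tangible, here is a minimal sketch in Python, treating combinators as curried functions; the helper names and sample values are ours and serve only as illustration, not as part of the formal theory.

```python
# Curried combinators as Python closures: every function takes one argument
# and returns another function, mirroring "everything is a unary function".

K = lambda p: lambda q: p                        # K P Q = P, equation (1)
S = lambda p: lambda q: lambda r: p(r)(q(r))     # S P Q R = P R (Q R), equation (2)

# Sample "terms": numbers and curried functions on numbers.
add = lambda x: lambda y: x + y
times10 = lambda x: x * 10

assert K(3)(99) == 3                             # projection onto the first argument
assert S(add)(times10)(2) == add(2)(times10(2))  # substitution: 2 + 20 = 22

# Preview of the next section: I := S K K behaves as the identity.
I = S(K)(K)
assert I(42) == 42
```

Running the snippet simply passes all assertions; it illustrates the reduction behavior of the two combinators, nothing more.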


Thus, the combinator 퐊 acts as projection, and 퐒 is a substitution operator for terms in the combinatory algebra. Like an assembly language, the 퐒-퐊 terms become quite lengthy and are barely readable by humans, but they work fine as a foundation for computer science.

The power of these two operators is best understood when we use them to define further, more manageable and more reasonable combinators. Church (Church, 1941) presented a list of functions that can be implemented as combinators, and Zachos investigated them in the setting of Combinatory Logic (Zachos, 1978). We present here only a few of them.

Identity

The identity combinator is defined as

퐈 ≔ 퐒퐊퐊 (3)

Indeed, 퐈푀 = 퐒퐊퐊푀 = 퐊푀(퐊푀) = 푀. Association is to the left.

Functionality by the Lambda Combinator

Church’s Lambda Calculus is a formal language that can be understood as a prototype programming language, see (Church, 1941), (Barendregt, 1977).

Lambda calculus can be expressed by 퐒-퐊 terms. We define recursively the Lambda Combinator 퐋퐱 for a variable 푥 as follows:

퐋퐱푥 = 퐈 (4)
퐋퐱푀 = 퐊푀 if 푀 is different from 푥 (5)
퐋퐱(푀푁) = 퐒(퐋퐱푀)(퐋퐱푁) (6)

The definition (5) holds for any variable term 퐱 in the combinatory algebra. We can extend the definition of the Lambda combinator by getting rid of the specific variable 푥. For any combinatory term 푀, the Abstraction Operator 휆푥. is defined on 푀 recursively by applying 퐋퐱 to all sub-terms of 푀. Applying 휆푥. 푀 to any other combinatory term 푁 replaces all occurrences of the variable 푥 in the term 푀 by 푁 and is written as (휆푥. 푀)푁.

The abstraction operator binds more weakly than the combination operator. Thus, 휆푥. binds all variables 푥 in 푀푁, such that we can omit parentheses as in 휆푥. 푀푁 = 휆푥. (푀푁). Lambda abstraction provides a much more readable and intuitively understandable notation for terms of Combinatory Logic.

The Fixpoint Combinator

Given any combinatory term 푍, the Fixpoint Combinator 퐘 generates a combinatory term 퐘푍, called the Fixpoint of 푍, that fulfills 퐘푍 = 푍(퐘푍). This means that 푍 can be applied as many times as wanted to its fixpoint and still yields back the same combinatory term.
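The recursion (4)–(6) is easy to mechanize. The following sketch uses a toy term representation of our own choosing (variables and constants as strings, application as a pair) and performs the abstraction exactly in the order the rules are stated; it is an illustration, not the authors’ implementation.

```python
# Toy representation of combinatory terms:
# a variable or constant is a string; an application M N is the tuple (M, N).

def lam(x, term):
    """Bracket abstraction L_x following rules (4), (5) and (6)."""
    if term == x:                        # rule (4): L_x x = I
        return 'I'
    if isinstance(term, tuple):          # rule (6): L_x (M N) = S (L_x M) (L_x N)
        m, n = term
        return (('S', lam(x, m)), lam(x, n))
    return ('K', term)                   # rule (5): L_x M = K M, M different from x

# Abstracting x from the self-application x x yields S I I, as used below for Y:
print(lam('x', ('x', 'x')))              # (('S', 'I'), 'I'), i.e. S I I
# Abstracting x from f (x x) yields S (K f) (S I I):
print(lam('x', ('f', ('x', 'x'))))       # (('S', ('K', 'f')), (('S', 'I'), 'I'))
```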


In linear algebra, such fixpoint combinators yield an eigenvector solution to some problem 푍; for instance, when solving a linear matrix equation. It is therefore tempting to say that 퐘푍 is a solution for the problem 푍. For more details, consult Fehlmann (Fehlmann, 2016).

Using Lambda Calculus notation, the fixpoint combinator can be written as

퐘 ≔ 휆푓. (휆푥. 푓(푥푥))(휆푥. 푓(푥푥)) (7)

For a reference see Barendregt (Barendregt, 1984).

Translating (7) into an 퐒-퐊 term proves possible but becomes a bit lengthy.

By applying (6) and (5): 휆푥. 푓(푥푥) = 퐒(휆푥. 푓)(휆푥. 푥푥), with 휆푥. 푓 = 퐊푓.
Then applying (6) and (4): 휆푥. 푥푥 = 퐒(휆푥. 푥)(휆푥. 푥) = 퐒퐈퐈.
This yields 휆푥. 푓(푥푥) = 퐒(퐊푓)(퐒퐈퐈), and therefore

퐘 = 휆푓. (휆푥. 푓(푥푥))(휆푥. 푓(푥푥))
  = 휆푓. (퐒(퐊푓)(퐒퐈퐈))(퐒(퐊푓)(퐒퐈퐈))
  = 퐒(휆푓. 퐒(퐊푓)(퐒퐈퐈))(휆푓. 퐒(퐊푓)(퐒퐈퐈))
  = 퐒(퐒(휆푓. 퐒(퐊푓))(휆푓. 퐒퐈퐈))(퐒(휆푓. 퐒(퐊푓))(휆푓. 퐒퐈퐈))

Now solving the remaining 휆-terms:

휆푓. 퐒(퐊푓) = 퐒(휆푓. 퐒)(휆푓. 퐊푓) = 퐒(퐊퐒)(퐒(휆푓. 퐊)(휆푓. 푓)) = 퐒(퐊퐒)(퐒(퐊퐊)퐈)

considering 휆푓. 퐒 = 퐊퐒 and 휆푓. 퐊푓 = 퐒(휆푓. 퐊)(휆푓. 푓) = 퐒(퐊퐊)퐈. The latter holds by first applying (6), then (5) and (4). Moreover,

휆푓. 퐒퐈퐈 = 퐒(휆푓. 퐒퐈)(휆푓. 퐈) = 퐒(퐒(휆푓. 퐒)(휆푓. 퐈))(퐊퐈) = 퐒(퐒(퐊퐒)(퐊퐈))(퐊퐈)

applying again (6) and (5). Putting things together:

퐘 = 휆푓. (휆푥. 푓(푥푥))(휆푥. 푓(푥푥))
  = 퐒(퐒(퐒(퐊퐒)(퐒(퐊퐊)퐈))(퐒(퐒(퐊퐒)(퐊퐈))(퐊퐈)))(퐒(퐒(퐊퐒)(퐒(퐊퐊)퐈))(퐒(퐒(퐊퐒)(퐊퐈))(퐊퐈)))
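Since Python evaluates arguments eagerly, the literal term (7) would recurse forever; the usual remedy is the eta-expanded variant of 퐘 (often called the Z combinator). The sketch below uses that variant only to demonstrate the fixpoint property; the factorial step function is an invented example, not taken from the paper.

```python
# Strict-evaluation fixpoint combinator (eta-expanded Y, a.k.a. Z):
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A "problem" Z: one step of the factorial recursion, expecting its own fixpoint.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

fact = Y(fact_step)                  # the fixpoint: fact behaves like fact_step(fact)
assert fact(5) == 120

# The defining equation Y Z = Z (Y Z) holds observationally:
assert Y(fact_step)(6) == fact_step(Y(fact_step))(6)
```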


 (퐒(퐒(퐊퐒)(퐒(퐊퐊)퐈)) (퐒(퐒(퐊퐒)(퐊퐈))(퐊퐈))) 1 2 Applying 퐘 to any combinatory term 푍 now explicitly transports 푍 on 3 the top of the formula and keeps the rest of the structure of 퐘 such that 퐘 can 4 be applied repeatedly. 5 This exercise should give the reader an impression how Combinatory 6 Logic works. 7 Applying the fixpoint combinator 퐘 to some combinator Z in the 8 Lambda-style is much simpler: 9 휆푓. (휆푥. 푓(푥푥))(휆푥. 푓(푥푥))푍 = (휆푥. 푍(푥푥))(휆푥. 푍(푥푥)) = 푍(휆푥. 푍(푥푥))(휆푥. 푍(푥푥)) 10 11 by applying the Lambda combinator twice, replacing the two 푥푥 twice by 12 휆푥. 푍(푥푥). Thus, this explains reasoning as a repeated substitution pro- 13 cess. 14 There are simpler forms of the Fixpoint Combinator, e.g., 15 퐘′ = 퐒(퐊(퐒퐈퐈)) (퐒(퐒(퐊퐒)퐊)(퐊(퐒퐈퐈))) (8) 16 Proof is left to the reader. 17 18 When applying 퐘, or 퐘′, or any other equivalent fixpoint combinator to 19 a combinatory term Z, reducing the term by repeatedly using rule (1) or (2) 20 does not always terminate. An infinite loop can occur, and must sometimes 21 occur, otherwise we would always find a solution to any problem that can 22 be stated within a programming language. Thus, Turing would be wrong 23 and all finite state machines would reach a finishing state (Turing, 1937). 24 Thus, the fixpoint combinator is not the solution of all our practical 25 problems. But Engeler teaches us in his Neural Algebra fixpoints can be 26 approximated using a Construction Operator (Engeler, 2019), see below. 27 For more details about the foundations of Mathematical Logic, see for 28 instance Potter (Potter, 2004). For more combinators in Combinatory Logic, 29 see e.g., Zachos (Zachos, 1978) and Bimbó (Bimbó, 2012). 30 31 32 Arrow Terms 33 34 The Graph Model of Combinatory Logic (Engeler, 1995) is a model of 35 Combinatory Logic with explains how to combine topics in areas of 36 knowledge. Combination is not only on the basic level possible; you can 37 also explain how to combine topics on the second level; sometimes called 38 meta-level. Intuitively, we would expect such a meta-level describing 39 knowledge about how to deal with different knowledge areas.


Whenever two terms 푀 and 푁 are embodied in a combinatory algebra, the application of 푀 to 푁 is also a term of this combinatory algebra, denoted as 푀 • 푁.

Let ℒ be the set of all assertions over a given domain. Examples include statements about customer needs, solution characteristics, methods used, program states, test conditions, etc. These statements are assertions about the domain we are dealing with. This could be a business domain, or the state of some software, i.e., the description of the values for all controls and data.

An Arrow Term is recursively defined as follows:

o Every element of ℒ is an arrow term.
o Let 푎1, … , 푎푚, 푏 be arrow terms. Then

  {푎1, … , 푎푚} → 푏 (9)

  is also an arrow term.

Thus, arrow terms are relations between finite sets of arrow terms and another arrow term, emphasized as the successor. For instance, in software testing, we use arrow terms to represent test cases. On the base level, the left-hand sides 푎1, … , 푎푚 represent test data, and the term 푏 is the known expected response of the test case (9). Higher levels of arrow terms represent test strategies and tests of tests.

The left-hand side of an arrow term is a finite set of arrow terms and the right-hand side is a single arrow term. This definition is recursive. The arrows are a formal graph notation; they might suggest cause-effect, but not logical implication.

The Graph Model as an Algebra of Arrow Terms

We can extend the definition of arrow terms to become a combinatory algebra, allowing for the combination of arrow terms. Denote by 풢(ℒ) the power set containing all Arrow Terms of the form (9). The formal, recursive definition of the Graph Model as a power set, in set-theoretical language, is given in equation (10):

풢0(ℒ) = ℒ
풢푛+1(ℒ) = 풢푛(ℒ) ∪ {{푎1, … , 푎푚} → 푏 | 푎1, … , 푎푚, 푏 ∈ 풢푛(ℒ), 푚 ∈ ℕ} (10)

for 푛 = 0, 1, 2, …; 풢(ℒ) is the set of all (finite and infinite) subsets of the union of all 풢푛(ℒ):

풢(ℒ) = {푀 | 푀 ⊆ ⋃푛∈ℕ 풢푛(ℒ)} (11)

The elements of 풢푛(ℒ) are arrow terms of level 푛.
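As an illustration of definitions (9) and (10), arrow terms can be represented quite directly in Python: assertions as strings, and an arrow term of a higher level as a pair of a frozen set of arrow terms and a single arrow term. The representation and helper names below are our own sketch, not part of the formal model.

```python
# Arrow terms over a set of assertions L:
# level 0: an assertion (here simply a string);
# level n+1: (frozenset_of_arrow_terms, arrow_term), read as {a1, ..., am} -> b.

def arrow(lhs, rhs):
    """Build the arrow term {a1, ..., am} -> b of definition (9)."""
    return (frozenset(lhs), rhs)

def level(term):
    """Level of an arrow term according to the stratification (10)."""
    if isinstance(term, tuple) and isinstance(term[0], frozenset):
        lhs, rhs = term
        return 1 + max([level(rhs)] + [level(a) for a in lhs])
    return 0

rule = arrow({"x = 1", "y = 2"}, "x + y = 3")            # a level-1 rule
meta = arrow({rule}, arrow(set(), "all rules checked"))  # a level-2 term about rules
print(level(rule), level(meta))                          # 1 2
```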


Terms of level 0 are Assertions, terms of level 1 are Rules. A set of rules is called a Rule Set (Fehlmann, 2016). In general, a rule set is a finite set of arrow terms; we call infinite rule sets a Knowledge Base. Hence, knowledge is a potentially unlimited collection of rule sets containing rules about assertions regarding our domain.

Combining Knowledge Bases

We can combine knowledge bases as follows:

푀 • 푁 = {푐 | ∃{푏1, 푏2, … , 푏푚} → 푐 ∈ 푀; 푏푖 ∈ 푁} (12)
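On finite rule sets, the application rule (12) can be sketched directly in Python, reusing the arrow-term representation from the sketch above; again, function names and example data are ours, chosen only for illustration.

```python
def apply_rules(M, N):
    """M . N following (12): collect c for every rule {b1, ..., bm} -> c in M
    whose premises are all contained in N."""
    result = set()
    for term in M:
        if isinstance(term, tuple) and isinstance(term[0], frozenset):
            premises, c = term
            if premises <= set(N):          # every b_i lies in N
                result.add(c)
    return result

M = {arrow({"switch on"}, "lamp lit"),
     arrow({"switch on", "fuse blown"}, "lamp dark")}
print(apply_rules(M, {"switch on"}))        # {'lamp lit'}
```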

Arrow Term Notation

To avoid the many set-theoretical parentheses, the following notations, called Arrow Schemes, are applied:

o 푎푖 for a finite set of arrow terms, 푖 denoting some finite indexing function for arrow terms.
o 푎1 for a singleton set of arrow terms: 푎1 = {푎} for an arrow term 푎.
o ∅ for the empty set, such as in the arrow term ∅ → 푎.
o 푎푖 ∪ 푏푗 for the union of two sets 푎푖 and 푏푗 of arrow terms.
o (푎) for a potentially infinite set of arrow terms, where 푎 is an arrow term.

Note that arrow schemes denote sets when put into outermost parentheses. Without an index, the set might be infinite; an index makes the set finite. The indexing function cascades; thus, 푎푖,푗 denotes the union of a finite number of sets of arrow schemes:

푎푖,푗 = 푎푖,1 ∪ 푎푖,2 ∪ … ∪ 푎푖,푗 ∪ … ∪ 푎푖,푚 = ⋃푘=1…푚 푎푖,푘 (13)

In terms of these conventions, (푥푖 → 푦)푗 denotes a rule set; i.e., a non-empty finite set of arrow terms, each having at least one arrow. Thus, such a set has level 1 or higher. Moreover, it has two selection functions, 푖 and 푗, selecting a finite number of arrow terms for 푥 and 푥푖 → 푦.

With this notation, the application rule for 푀 and 푁 now reads:

푀 • 푁 = ((푏푖 → 푎) • (푏푖)) = {푎 | ∃ 푏푖 → 푎 ∈ 푀; 푏푖 ⊂ 푁} (14)

Arrow Terms – A Model of Combinatory Logic

The algebra of arrow terms is a combinatory algebra and thus a model of Combinatory Logic. It is called the Graph Model. The following definitions demonstrate how arrow terms implement the combinators 퐒 and 퐊, fulfilling equations (1) and (2).

o 퐈 = (푎1 → 푎) is the Identity; i.e., (푎1 → 푎) • (푏) = (푏)


o 퐊 = (푎1 → ∅ → 푎) selects the 1st argument:
  퐊 • (푏) • (푐) = ((푏1 → ∅ → 푏) • (푏)) • (푐) = (∅ → 푏) • (푐) = (푏)
o 퐊퐈 = (∅ → 푎1 → 푎) selects the 2nd argument:
  퐊퐈 • (푏) • (푐) = ((∅ → 푐1 → 푐) • (푏)) • (푐) = (푐1 → 푐) • (푐) = (푐)

o 퐒 = ((푎푖 → (푏푗 → 푐)) → ((푑푘 → 푏)푖 → (푎푖 ∪ 푏푗,푖 → 푐)))

Therefore, the algebra of arrow terms is a model of Combinatory Logic. The proof that the arrow terms’ definition of 퐒 fulfils equation (2) is somewhat more complex. Readers interested in that proof are referred to Engeler (Engeler, 1981, p. 389). With 퐒 and 퐊, an abstraction operator can be constructed that builds new knowledge bases. This is the Lambda Theorem; it is proved along the same lines as Barendregt’s Lambda combinator (Barendregt, 1977). See also Fehlmann (Fehlmann, 1981, p. 37).

The Role of the Indexing Function in Arrow Terms

The arrow in the terms of the Graph Model is somewhat confusing. It is easily mistaken as representing Predicate Logic; however, this must be viewed with care. Interpreting the arrow as an implication in predicate logic is not per se dangerous. In some sense, logical implication is a transition from preconditions to a conclusion, and arrow terms are fine for representing that. The problem is that if the left-hand side of an arrow term, which is an otherwise unstructured set, is interpreted as a conjunction of predicates – a sequence of logical AND-clauses – you run into a conflict with the undecidability of first-order logic. Arrow terms would then reduce to either the form 푎1 → 푏 or ∅ → 푏. This reduces the model to the trivial one.

As an example, see Bimbó (Bimbó, 2012, p. 237ff). There she explains how typed Combinatory Logic gets around the triviality problem. Instead of the indexing functions, selecting finite sets of arrow terms on the left-hand side, she postulates proofs for the predicates.

Thus, the indexing function for selecting elements of a finite set of arrow terms is a key element of the Graph Model. Interested readers will find related considerations in last year’s paper of the authors (Fehlmann & Kranich, May 2020). For the application of the Graph Model to testing, the indexing function means selection of test cases and test data, and this is always a collection of program state predicates that typically do not leave the program under test in a consistent state.
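Returning to the combinator definitions above: 퐊 and its relatives are infinite sets of arrow terms, but their behavior can still be checked on finite slices with the apply_rules sketch from earlier. The slice construction below, over a two-element universe of assertions, is only an illustration.

```python
def K_slice(universe):
    """Finite slice of K = (a1 -> (empty -> a)), restricted to a finite universe."""
    return {arrow({a}, arrow(set(), a)) for a in universe}

b, c = "b holds", "c holds"
K_fin = K_slice({b, c})

step1 = apply_rules(K_fin, {b})     # K . (b) = (empty -> b) on this slice
print(apply_rules(step1, {c}))      # (empty -> b) . (c) = (b): prints {'b holds'}
```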


Neural Algebra

Engeler uses the Graph Model as a model of how the brain thinks (Engeler, 2019). A directed graph, together with a firing law at all its nodes, constitutes the connective basis of the brain model 풜. The model itself is built on this basis by identifying brain functions with parts of the firing history. Its elements may be visualized as a directed graph whose nodes indicate the firing of a neuron. As before, we consider 풢(풜), constructed as in (11). The elements of 풢(풜) are called Cascades. Cascades describe firing between nodes (neurons) when represented by finite sets of arrow terms 푎푖 → 푏, where the 푎푖 are sub-cascades, while the right sub-cascade 푏 describes the characteristic leaf of its firing history graph. The Neural Algebra is defined as a collection of cascades representing brain functions in the brain model, closed under application and union. With the application rule (14), we have an algebraic structure; the applications represent brain functions, interpreted as thoughts.

The Fixpoint Combinator in the Neural Algebra

The fixpoint combinator 퐘 can be written as an arrow scheme; however, this calculation is better left to some suitable rewriting tool, as otherwise this article would exceed all reasonable length. Applying 퐘 to an arbitrary arrow scheme might result in an infinite loop of arrow schemes, representing a never-ending computation. Combinatory Logic, like any kind of programming, may result in an infinite loop in its model, and it is not decidable when this happens.

If infinite loops occur, or infinite sequences of digits as for real numbers that are not rationals, we need the notion of controlling operators that approximate the possibly infinite solution, and metrics for measuring how near the approximations are to the solutions, getting even nearer when required.

Reasoning, Problem Solving and Controlling

Within this setting, it is possible to define models for reasoning and problem solving – not only flat reasoning, but also solving problems, even if their fixpoint is infinite. For a controlled object 푋, the Controlling Operator 퐂 solves the control problem 퐂 • 푋 = 푋. The brain function 퐂 gathers all faculties that may help in the solution. The control problem is a repeated process of substitution, like finding the fixpoint of a combinator. However, since cascades are always finite – all brain activity remains finite, unfortunately – the control problem is solved by a series of finite Attractors, a control sequence 푋0 ⊆ 푋1 ⊆ 푋2 ⊆ ⋯ determined by

푋푖+1 = 퐂 • 푋푖, 푖 ∈ ℕ (15)

starting with an initial 푋0. This process is called Focusing. The details can be found in Engeler (Engeler, 2019, p. 301). We will rely on the observation that attractors represent reasoning in a neural algebra.
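On finite approximations, the focusing process (15) is just an iteration of the application rule until nothing new appears. The loop below is a schematic sketch in the spirit of (15), not Engeler’s construction; it reuses the apply_rules helper from earlier and enforces the inclusion ordering by accumulating results. The controlling operator C and the starting set are assumed to be given as finite rule sets.

```python
def focus(C, X0, max_steps=100):
    """Iterate X_{i+1} = C . X_i as in (15), keeping X_i included in X_{i+1},
    and stop once the attractor sequence stabilizes (a finite fixpoint)."""
    X = set(X0)
    for _ in range(max_steps):
        X_next = X | apply_rules(C, X)      # accumulate: X_i is a subset of X_{i+1}
        if X_next == X:                     # stabilized: nothing new can be produced
            return X
        X = X_next
    return X                                # best finite attractor found so far

# Tiny invented example of a controlling operator and its attractor:
C = {arrow({"observe"}, "hypothesis"),
     arrow({"observe", "hypothesis"}, "conclusion")}
print(focus(C, {"observe"}))   # {'observe', 'hypothesis', 'conclusion'} (order may vary)
```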


Attractors are ordered by inclusion (15), meaning that the solution space becomes smaller and smaller until a smallest possible solution is found that cannot be further reduced by the controlling operator. Eventually, this ultimate solution is empty.

The controlling operator is closely linked to the fixpoint operator 퐘: if 푋 has a solution 퐂 • 푋, then 푋 is of the form 푋 = 퐘 • 푍 for some suitable cascade 푍. Thus, not all combinators 푋 have a solution; the related control sequence may end with the empty cascade, obviously. These considerations share a stunning resemblance with transfer functions, whose solution profiles are also approximations rather than precise solutions (Fehlmann, 2016, p. 14).

The controlling operator is not like one of the basic or fixpoint combinators but is more of a prescription for how to find suitable attractors. Engeler (Engeler, 2019, p. 300ff) presents in an elegant way representations of basic thought processes, e.g., reflection, discrimination, simultaneous and joint control, but also learning, teaching, focusing with the eyes, and comprehension.

Since the number of cascades that a brain can produce is finite and limited – by the lifespan of the brain – solutions to the fixpoint control problems turn out to be finite attractor sequences, characterizing thought processes.


Transfer Functions

For managing complex systems, transfer functions are used to analyze controls and approximate the expected result (Fehlmann, 2016).

An obvious interpretation of arrow terms is by transfer functions. In Quality Function Deployment (QFD), the building blocks – and the origin – are cause-effect relations as used in Ishikawa Diagrams (Ishikawa, 1990). These diagrams describe the cause-effect relations between topics and are considered the initial form of QFD matrices. Converting a series of Ishikawa diagrams into a QFD matrix is straightforward, see (Fehlmann, 2016, p. 321). Thus, transfer functions can be described by finite sets of arrow terms.

Deming Chains

Composition of transfer functions is called a Deming Chain (Fehlmann, 2016, p. 100) because Deming identified the value chains in manufacturing processes (Deming, 1986). Akao called it Comprehensive QFD, also known as QFD in the Large. He drafted extensive Deming chains in his famous book on QFD (Akao, 1990).

For transfer functions, the Graph Model provides similar services as for tests. The model proves that transfer functions have universal applicability and power for explaining cause-effect, and they provide a framework for automation, also for Deming chains (Fehlmann, 2001).


The Algebra of Tests

Very interesting instantiations of the Graph Model can be found in Software Testing, especially when seen from an economical viewpoint.


In fact, test cases are best described as arrow terms, with the left-hand sides describing program states before executing the test, and the right-hand side describing the response of the test case. Software testing is the key to digitalization and to software-intensive products that perform safety-critical tasks.

Test cases are a mapping of arrow terms onto Data Movement Maps. Data movement maps model the software under test by identifying the data groups moved by the software, based on the ISO standard 19761 COSMIC (ISO/IEC 19761, 2011). This has been explained in more detail in last year’s paper (Fehlmann & Kranich, May 2020). The data movements induce a sizing valuation on this algebra by counting the number of data movements executed once per test case.

When we speak of test cases, we always intend a suitable data movement map with it; thus, the same arrow term can be mapped to several data movement maps, counting as separate test cases.

State Assertions

For our Test Algebra, we now assume ℒ to be the set of all state assertions for a given program. We use the term “program” but mean a system that might consist of coded software, services, or anything yielding results electronically. Elements of ℒ are descriptions of the system status at a certain moment. In the sequel, the arrow term 푎푖 → 푏, together with its associated data movement map, represents a test case that, given test data 푎푖, yields 푏 as the expected, correct result.

Engines providing machine learning and artificial intelligence are included in such a “program” (Fehlmann, Jan. 2020, p. 85ff). If 푎푖 → 푏 is a test case, 푎푖 ⊂ ℒ specifies a set of test data that holds before executing the test, and 푏 ∈ ℒ the state of the program after execution. The finite set 푎푖 represents the states, before execution, of possibly unrelated threads of the program, or of services involved.

Combining Tests

The definition (14) looks somewhat counter-intuitive for combining tests. To apply one test case to another, it is required that the result of the application contains all the full test cases providing the response sought.

A more intuitive approach would only require test cases providing such a response meeting the required controls. The existential quantifier would then guarantee that there is a test case yielding such a response. When accepting the Axiom of Choice in its traditional form, that does not look like a problem. However, this seemingly more intuitive approach would immediately lead to a contradiction with Turing’s halting problem (Turing, 1937). Consult last year’s paper (Fehlmann & Kranich, May 2020) for more information on this observation.

The intuitionistic, or constructive, variant of the Axiom of Choice requires not only the existence of tests providing valid test data as response, but construction instructions for the existence of such tests, respectively the related test cases. This means that it is not enough to know the existence of tests; you need to know how to construct them.


This is possibly the reason why test automation has proven to be so hard.

And for those who consider such reasoning too theoretical, let us provide a rather practical argument: programmers who want to set up a test concatenation 푀푁 for automatic testing need access to the test cases in 푁 that provide the responses needed for 푀, in order to combine 푀 with 푁. Otherwise, combining tests is unsafe or cannot be automated. Thus, with the combinatory algebra of arrow terms, mathematical logic meets intuitionism and programming.

Combination Limitations

Combining tests in a Combinatory Algebra is unlimited indeed, because there is no typing involved that governs applicability. By (14), you can combine test cases across test stories as deemed appropriate; all that counts are the data movements. The only restriction is that not all combinations of test cases yield an executable new test case. It must be possible to combine the data movement maps associated with the test cases as well; thus, if these movements do not overlap between two test cases, they do not yield a valid and executable test case.

We therefore silently agree that combinations of tests are only considered if their data movement maps overlap somehow.

The Size of Tests

For a testing framework, we need to be able to measure the size of tests. The standard ISO/IEC 19761 COSMIC for measuring functional size serves as the measuring method. The functional size of the associated data movement map is the size of a test case, denoted by 퐶푓푝(푎푖⁰ → 푏⁰), where 푎푖⁰ ⊂ 풢0(ℒ) and 푏⁰ ∈ 풢0(ℒ) are arrow terms of level 0; i.e., assertions about the state of the program. 퐶푓푝(푎푖⁰ → 푏⁰) is the number of unique data movements touched when executing the test case 푎푖⁰ → 푏⁰. This is the recursion base. Then the following equations (16) recursively define the size of tests:

⌊푎⌋ = 0 for 푎 ∈ 풢0(ℒ)
⌊푎푖 → 푏⌋ = 퐶푓푝(푎푖 → 푏) for 푎푖 ⊂ 풢0(ℒ) and 푏 ∈ 풢0(ℒ) (16)
⌊푐푖 → 푑⌋ = ∑푖 ⌊푐푖⌋ + ⌊푑⌋ for all test cases 푐푖 and 푑

The definition holds for all arrow terms in the algebra of tests. The addition does not take into consideration whether data movements are unique; thus, the size of two test cases is always the sum of their sizes. When speaking about tests, we do not use the term knowledge base for sets of arrow terms, but rather Test Story for a set of test cases. Test stories typically share a common intent, or business value.
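A hedged sketch of the size recursion (16): the COSMIC size of a level-1 test case is looked up in a plain dictionary standing in for the associated data movement map, and higher-level sizes are summed recursively. Function names, the example test case and the size value 7 are invented for illustration, and the level-1 test is a simplification of the formal definition.

```python
def is_arrow(term):
    """True for arrow terms, False for plain assertions (strings)."""
    return isinstance(term, tuple) and isinstance(term[0], frozenset)

def test_size(term, cfp):
    """Size of a test per (16); `cfp` maps level-1 test cases to their
    COSMIC functional size (number of data movements)."""
    if not is_arrow(term):
        return 0                                          # assertions have size 0
    lhs, rhs = term
    if not any(is_arrow(a) for a in lhs) and not is_arrow(rhs):
        return cfp[term]                                  # level-1 test case
    return sum(test_size(a, cfp) for a in lhs) + test_size(rhs, cfp)

case = arrow({"speed = 30", "obstacle ahead"}, "brake applied")
sizes = {case: 7}                        # say, 7 data movements in its movement map
suite = arrow({case}, case)              # a level-2 term built from the test case
print(test_size(case, sizes), test_size(suite, sizes))    # 7 14
```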


The Functional Size of Combinators

Applying the definition (16) to the combinators 퐒, 퐊, 퐈, and 퐘 yields an infinite size for each of them, because their arrow term sets are infinite. This conforms to the observation that, when expressed as terms of the Lambda calculus, these combinators are closed insofar as they contain neither free variables nor constants.

Autonomous Real-time Testing

In our recent book (Fehlmann, Jan. 2020), we coined the term Autonomous Real-time Testing (ART) to describe software tests that are

o Executed automatically in a system during operations, or when pausing operations;
o Started from a base test using recombination and other operations of combinatory algebra by adding autonomously generated test cases;
o Controlled by transfer functions assuring relevance for users’ values.

In previous papers and the book mentioned above, we have explained how to keep the growth of test cases under control, using the Convergence Gap as a hash. The convergence gap in transfer functions measures the gap between the needs – of the customer, the user, a certification authority, or others – and the achieved test coverage. For an example, consult last year’s paper (Fehlmann & Kranich, May 2020).

Attractors

While the fixpoint combinator 퐘 works as above on sets of test cases, in most cases it returns infinite tests as “solutions” – something not too practical. However, we can construct attractors quite like for the neural algebra, approximating the infinite testing set as closely as we wish. This creates a new problem for us, namely to assess: when is testing good enough?

While good practices can provide answers – e.g., by looking at the remaining defect rate (Fehlmann & Kranich, 2014) – a more theoretical answer should include at least the requirement that attractors cover functionality. This leads to the notion of the Convergence Gap, explained in (Fehlmann, Jan. 2020, p. 10).

Let 푈푙 denote a finite set of user stories, and 푇푘 another set of test stories, usually somewhat larger than the set of user stories. The matrix 푈푙 ⊗ 푇푘 maps test stories to user stories and becomes a transfer function if each cell contains the size ⌊(푎푖 → 푏)푗⌋ of all test cases (푎푖 → 푏)푗 belonging to some test story 푇푘 and referring to some user story, or FUR (Functional User Requirement), 푈푙. This yields a matrix:

푨 = (⌊(푎푖 → 푏)푗⌋)푙,푘 (17)


The indices of the matrix run over integers 푙, 푘 ∈ ℕ.

The transfer function 푨 maps test stories to user stories, and we call it the Test Coverage Matrix because it allows assessing how well test stories cover user stories.

Let user stories be prioritized, say by some Goal Profile 풚. The goal profile characterizes priorities by a unit vector in the space of the alternatives under consideration. Then the transfer function 푨 can be applied to a Solution Profile 풙, describing the importance of the test stories, and 푨풙 is the result of applying 푨 to this solution profile. Obviously 푨풙 ≠ 풚; however, the difference ‖푨풙 − 풚‖ is interesting. If this difference is small, then the solution profile 풙 represents an optimum selection of test stories, meaning that tests cover what is relevant to the user’s goal profile.

Optimum solution profiles can be calculated using the eigenvector method (Fehlmann, 2016, p. 34). Let 풚푨 be the Principal Eigenvector of 푨푨⊺, solving the eigenvalue problem (18) for some 휆 ∈ ℝ:

푨푨⊺풚푨 = 휆풚푨 (18)

The principal eigenvector 풚푨 is called the Achieved Profile of the transfer function 푨. Both 풚 and 풚푨 are Profiles. This means their vector lengths ‖풚‖ = 1 and ‖풚푨‖ = 1 are both one, where ‖…‖ denotes the Euclidean Norm for vectors. The difference between a goal profile and an achieved profile is called the Convergence Gap:

퐶표푛푣푒푟푔푒푛푐푒 퐺푎푝 = ‖풚 − 풚푨‖ (19)

The convergence gap is a metric that measures how well a transfer function explains the observed profile with suitable controls. The controls are the test stories; the observed profile compares with the goal profile of the user stories’ relevance for the user of the software or the system. Note that computing the achieved profile is very often not as straightforward as it is in our case, where we can make use of simple linear algebra.

We can now construct attractors as a series 푨0, 푨1, 푨2, … of test coverage matrices that approximate the test suite that we need to cover our functional requirements. However, the attractors must all keep the convergence gap under control, meaning that for a certain 휀 > 0 and all attractors 푨푖 the following holds:

퐶표푛푣푒푟푔푒푛푐푒 퐺푎푝(푨푖) = ‖풚푖 − 풚푨푖‖ < 휀 (20)

Thus, our constructor 퐂 must construct an ascending series 푨0, 푨1, 푨2, … such that both (20) and (21) hold:

푨푖+1 = 퐂 • 푨푖, 푖 ∈ ℕ
푨푖 ⊑ 푨푖+1, 푖 ∈ ℕ (21)

The constructor 퐂 therefore is an intelligent search in a wide range of potential attractors, keeping the convergence gap small enough.
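Equations (18) to (20) amount to a few lines of linear algebra. The sketch below computes the achieved profile as the principal eigenvector of 푨푨⊺ with numpy, normalizes it, and checks the convergence gap against a threshold; the 2×3 coverage matrix, the goal profile and the bound ε = 0.2 are invented numbers used only for illustration.

```python
import numpy as np

def achieved_profile(A):
    """Principal eigenvector of A A^T (equation (18)), normalized to unit length."""
    w, v = np.linalg.eigh(A @ A.T)       # eigh: symmetric matrix, ascending eigenvalues
    y = v[:, -1]                         # eigenvector for the largest eigenvalue
    y = np.abs(y)                        # profiles are non-negative; fix the sign
    return y / np.linalg.norm(y)

def convergence_gap(A, goal):
    """Equation (19): Euclidean distance between goal and achieved profile."""
    return np.linalg.norm(goal / np.linalg.norm(goal) - achieved_profile(A))

# Invented example: 2 user stories x 3 test stories, cells holding test sizes.
A = np.array([[6.0, 2.0, 0.0],
              [1.0, 4.0, 5.0]])
goal = np.array([0.7, 0.714])            # goal profile of the user stories

gap = convergence_gap(A, goal)
print(round(gap, 3))                     # small gap: test stories cover the goals
epsilon = 0.2
print(gap < epsilon)                     # the check required by (20)
```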


We call such a series of attractors Bound, by the convergence gap. In fact, bound attractors constitute a formal way to solve all kinds of issues normally tackled by artificial intelligence. The hash functions used for measuring the convergence gap might be considerably more complicated than in the case of test size.

Optimum Test Size

For a test coverage matrix 푨 = (⌊(푎푖 → 푏)푗⌋)푙,푘, the total test size of 푨 is

⌊푨⌋ = ∑푙,푘 ⌊(푎푖 → 푏)푗⌋푙,푘, 푙, 푘 ∈ ℕ (22)

If 푨0 ⊑ 푨1 ⊑ 푨2 ⊑ ⋯, then ⌊푨0⌋ ≤ ⌊푨1⌋ ≤ ⌊푨2⌋ ≤ ⋯ also holds.

We can use combination of tests, as well as applying any special combinator such as projection or substitution, to generate new test cases. Bound attractors build up a test suite by adding more tests to the test coverage matrix 푨. The convergence gap does not necessarily decrease. On the contrary, adding more tests can spoil the convergence gap, for instance if some test story gains too much weight and inflates the respective user stories’ achieved profile.

Therefore, constructing a suitable constructor 퐂 is anything but simple or straightforward, because adding more tests does not by itself solve the problem. In view of executing such tests on a machine, the number of tests must not only remain finite but also be limited to some manageable number.

For practical applications, combining unit tests from related domains, such as the steering control of an autonomous vehicle with a weather forecast service, is always feasible for constructing attractors for a system test of this autonomous vehicle. However, the convergence gap enforces that such test combinations cover the full range of test cases relating to both steering control and processing weather forecast services; otherwise, some test stories would grow beyond limits.

There is an optimum number of attractors, delivering enough tests to fit the test intensity required by the user of the system, and a test size that still can be executed within a limited time frame. Computing that optimum is an important task for product management and, depending upon safety and privacy criticality, must be carefully chosen to make such a product acceptable for society.


Conclusions

Aristotle’s Mental Completion reflects his understanding of recursion as a mentally completed inductive definition of a concept (Engeler, 2020). Following him, we identified constructors as a general prescription for constructing attractors that serve as approximations to solutions for problems. We have shown how such constructions look in the case of the Algebra of Tests, although not in this paper but in its predecessors (Fehlmann & Kranich, May 2020), and in our recent book (Fehlmann, Jan. 2020).


We linked attractors to fixpoint operators, and all this in a very practical setting, with potentially high economic impact.

Why are we doing theoretical stuff like logic and other basic sciences? Maybe the answer is that this is the way to new business models and more efficient progress in applied sciences. Probably the only sure way, because otherwise you get lost in the jungle, without hope of finding an exit.

So, why don’t we educate our young engineers in basic sciences? Once they have mastered that, they can apply the basic findings to any applied technical or scientific area they care for.


Open Questions

Obviously, there are more open questions than we can mention here. Maybe this is a step toward the New Kind of Science that Stephen Wolfram promised us in the early years of this century (Wolfram, 2002)? Is the approach presented in this paper potentially fruitful not only for Artificial Intelligence, Neuroscience, and system testing? What else could we describe by a constructor and by attractors, thus better understanding what we are doing, and why?

Less philosophical is the open and quite practical question of how actual constructors look for software and system testing, whether there are some general rules to follow besides Combinatory Logic, or whether every testing domain requires its own constructors and attractor series.

There is also a mathematical question to solve, namely how we can speed up the selection of the right cells in the test coverage matrix that need enhancement by new test cases. This is related to sensitivity analysis (Fehlmann & Kranich, 2020) for QFD matrices. If a practical solution exists, this could help us construct straight attractors instead of searching around. However, such a solution is not yet known to the authors.


Bibliography

Akao, Y., 1990. Quality Function Deployment – Integrating Customer Requirements into Product Design. Portland, OR: Productivity Press.
Aristoteles, 367-344 BCE. Organon. Übersetzt von Julius von Kirschmann, Hofenberg ed. Berlin: Andronikos von Rhodos.
Barendregt, H. P., 1977. The Type-Free Lambda-Calculus. In: J. Barwise, ed. Handbook of Math. Logic. Amsterdam: North Holland, pp. 1091-1132.
Barendregt, H. P., 1984. The Lambda Calculus – Its Syntax and Semantics. Studies in Logic and the Foundations of Mathematics ed. Amsterdam: North-Holland.
Bimbó, K., 2012. Combinatory Logic – Pure, Applied and Typed. Boca Raton, FL: CRC Press.


Church, A., 1941. The Calculi of Lambda Conversion. Annals of Mathematics Studies 6.
Curry, H., 1930. Grundlagen der kombinatorischen Logik. American Journal of Mathematics, Volume 52, pp. 509-536.
Curry, H. & Feys, R., 1958. Combinatory Logic, Vol. I. Amsterdam: North-Holland.
Curry, H., Hindley, J. & Seldin, J., 1972. Combinatory Logic, Vol. II. Amsterdam: North-Holland.
Deming, W., 1986. Out of the Crisis. Center for Advanced Engineering Study ed. Boston, MA: Massachusetts Institute of Technology.
Engeler, E., 1981. Algebras and Combinators. Algebra Universalis, pp. 389-392.
Engeler, E., 1995. The Combinatory Programme. Basel, Switzerland: Birkhäuser.
Engeler, E., 2019. Neural algebra on “how does the brain think?”. Theoretical Computer Science, Volume 777, pp. 296-307.
Engeler, E., 2020. Aristotle’s Relations: An Interpretation in Combinatory Logic. arXiv: History and Overview.
Engeler, E., 2020. Aristotle’s Relations: An Interpretation in Combinatory Logic, Zurich: unpublished.
Fehlmann, T. M., 1981. Theorie und Anwendung des Graphmodells der Kombinatorischen Logik, Zürich, CH: ETH Dissertation 3140-01.
Fehlmann, T. M., 2001. QFD as Algebra of Combinators. Tokyo, Japan, 8th International QFD Symposium, ISQFD 2001.
Fehlmann, T. M., 2016. Managing Complexity – Uncover the Mysteries with Six Sigma Transfer Functions. Berlin, Germany: Logos Press.
Fehlmann, T. M., Jan. 2020. Autonomous Real-time Testing – Testing Artificial Intelligence and Other Complex Systems. Berlin, Germany: Logos Press.
Fehlmann, T. M. & Kranich, E., 2014. Exponentially Weighted Moving Average (EWMA) Prediction in the Software Development Process. Rotterdam, NL, IWSM Mensura.
Fehlmann, T. M. & Kranich, E., 2020. A Sensitivity Analysis Procedure for QFD, Duisburg: to appear.
Fehlmann, T. M. & Kranich, E., May 2020. Intuitionism and Computer Science – Why Computer Scientists do not Like the Axiom of Choice. Athens Journal of Sciences, 7(3), pp. 143-158.
ISO/IEC 19761, 2011. Software engineering – COSMIC: a functional size measurement method, Geneva, Switzerland: ISO/IEC JTC 1/SC 7.
Potter, M. D., 2004. Set Theory and Its Philosophy. Oxford, UK: Oxford University Press.
Schönfinkel, M., 1924. Über die Bausteine der mathematischen Logik. Mathematische Annalen, 92(3-4), pp. 305-316.
Turing, A., 1937. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(Series 2), pp. 230-265.
Wolfram, S., 2002. A New Kind of Science. First Edition ed. Champaign, Illinois: Wolfram Media.
Zachos, E., 1978. Kombinatorische Logik und S-Terme, Zurich: ETH Dissertation 6214.
