Research Statement
Prahladh Harsha

1 Introduction

My research interests are in the area of theoretical computer science, with special emphasis on computational complexity. The primary focus of my research has been probabilistically checkable proofs and related areas such as property testing and information theory.

The fundamental question in computational complexity is “what are the limits of feasible computation?” One articulation of this question is the famous “P vs. NP” question, where P refers to the class of problems that can be solved in polynomial time and NP to the class of problems whose solutions can be verified in polynomial time. To understand the limitations of efficient computation, we first need to understand what we mean by “efficient computation.” This natural connection between feasibility and hardness has repeatedly led to surprising consequences in complexity theory. A prime example is that of probabilistically checkable proofs. The original emphasis in the study of probabilistically checkable proofs was on program checking. Surprisingly, it was soon realized that the existence of efficient probabilistic program checkers implies that the approximation versions of several NP-complete optimization problems are as intractable as the original optimization problems.

Probabilistically checkable proofs provide an extremely efficient means of proof verification. The classical complexity class NP refers to the class of languages whose membership can be verified with the aid of a polynomial-sized proof. Probabilistically checkable proofs (PCPs) are a means of encoding these proofs (and more generally any mathematical proof) into a format such that the encoded proof can be checked very efficiently, although in a probabilistic manner, by probing it at only a constant number of locations (in fact, 3 bits suffice!). The main question addressed by my research in this area is the following: “how much does this encoding blow up the original proof while retaining the constant number of queries into the proof, and how efficiently (with respect to running time) can the checking be performed?”

An important contribution of my work is the notion of a proof of proximity (also called a PCP of proximity). A PCP of proximity strengthens a PCP in the sense that it helps to decide whether a statement is true, with the help of an additional proof in the form of a PCP, by merely probing the statement at a few locations. In other words, a PCP of proximity makes a constant number of probes not only to the proof but also to the statement whose truth it is checking. With such a stringent requirement, a PCP of proximity cannot distinguish true statements from false ones; however, it can distinguish true statements from ones that are far from being true (in the sense that the statement is far, in Hamming distance, from any true statement). Thus, a PCP of proximity checks whether a given statement is close to being true without even reading the statement in its entirety; hence the name, proof of proximity.

PCPs of proximity play a vital role in the construction of short PCPs, both in my work and in subsequent developments in the area of probabilistically checkable proofs. PCPs of proximity are also used in coding theory: all known constructions of locally testable codes are via PCPs of proximity. PCPs of proximity have also come in very handy in simplifying the original proof of the PCP Theorem, which is one of the most involved proofs in complexity theory. In fact, the recent fully combinatorial proof of the PCP Theorem (due to Dinur [Din07]) crucially relies on PCPs of proximity.

As mentioned above, the main focus of my research has been in the area of probabilistically checkable proofs. I have also worked in other areas such as property testing, information theory, proof complexity, and network routing. Below I elaborate on my work in three of these areas: probabilistically checkable proofs, property testing, and information theory.

2 Probabilistically checkable proofs

The PCP Theorem: The PCP Theorem [AS98, ALM+98] is one of the crowning achievements of complexity theory in the last decade. Probabilistically checkable proofs [BFLS91, FGL+96, AS98], as mentioned earlier, are proofs that allow efficient probabilistic verification based on probing just a few bits of the proof. Informally speaking, the PCP Theorem states that any mathematical proof can be rewritten into a polynomially longer probabilistically checkable proof (PCP) such that its veracity can be checked very efficiently, although in a probabilistic manner, by looking at the rewritten proof at only a constant number of locations (in fact, 3 bits suffice), and furthermore proofs of false assertions are rejected with probability at least 1/2. The PCP Theorem has, since its discovery, attracted a lot of attention, motivated by its connection to the inapproximability of optimization problems [FGL+96, AS98]. This connection led to a long line of fruitful research yielding inapproximability results (many of them optimal) for several optimization problems (e.g., Set-Cover [Fei98], Max-Clique [Hås99], MAX-3SAT [Hås01]). However, the significance of PCPs extends far beyond their applicability to deriving inapproximability results. The mere fact that proofs can be transformed into a format that supports super-fast probabilistic verification is remarkable. One would have naturally expected PCPs, as the name suggests, to lead to vast improvements in automated proof-checkers, theorem-provers, etc. Unfortunately, this has not been the case. The chief reason why PCPs are not used in practice today for automated proof-checking is that the blowup in proof size involved in all present constructions of PCPs makes it infeasible to do so. To put things in perspective, the original proof of the PCP Theorem [ALM+98] constructed PCPs of nearly cubic length with a query complexity roughly of the order of a million (in order to reject proofs of false assertions with probability at least 1/2). On the other hand, the 3-query optimal PCPs of [Hås01, GLST98] have length nearly n^{10^6}, which is still a polynomial! Even with respect to inapproximability results, though the PCP Theorem has been extremely successful in proving tight hardness results, the quantitative nature of these results has been rather unsatisfactory, once again due to the blowup involved in PCP constructions. To understand this, it is instructive to compare the inapproximability results obtained from the PCP Theorem with the hardness results obtained from the usual textbook NP-completeness reductions. For example, the NP-completeness reduction from Satisfiability (SAT) to Clique transforms a Boolean formula on n variables into a graph with at most 10n vertices. On the other hand, the PCP reductions that show optimal inapproximability of Max-Clique transform a Boolean formula of size n into a graph of size at least n^{10^6}. What these results imply in quantitative terms is that if one assumes solving satisfiability on formulae with 1,000 variables is intractable, then NP-completeness reductions imply that solving Clique is intractable on graphs with 10,000 vertices, while the PCP reductions would imply that the optimal inapproximability hardness for Max-Clique sets in only on graphs of size at least 1000^{10^6}.

2.1 My research

Short PCPs: Most of my work in the area of PCPs has focused on constructing short PCPs. In work done as part of my master’s thesis [HS00], I examine the size and query complexity of PCPs jointly and obtain a construction with reasonable performance in both parameters (more precisely, proofs of size n^3 with a query complexity of 16 bits). In more recent work with Ben-Sasson, Goldreich, Sudan and Vadhan [BGH+06], I take a closer look at the PCP Theorem, simplify several parameters and obtain shorter PCPs. In quantitative terms, we obtain PCPs of size at most n · exp(log^ε n), where n is the size of the original proof, for any ε > 0.
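To get a rough quantitative feel for these size bounds, the short Python sketch below tabulates the three regimes mentioned in this statement (n^3, n · exp(log^ε n), and n · polylog(n)) for a few values of n. The hidden constants are arbitrary placeholders chosen purely for illustration; they are not the constants from [HS00], [BGH+06], [BS05] or [Din07].

```python
import math

# Illustrative growth rates of PCP length as a function of the original proof
# length n.  All constants are placeholders chosen for readability.

def cubic(n):                    # n^3-sized PCPs (the regime of [HS00])
    return n ** 3

def quasilinear(n, eps=0.5):     # n * exp(log^eps n)-sized PCPs ([BGH+06])
    return n * math.exp(math.log(n) ** eps)

def n_polylog(n, c=3):           # n * (log n)^c-sized PCPs ([BS05], [Din07])
    return n * math.log(n) ** c

for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>10}:  n^3 = {cubic(n):.1e}   "
          f"n*exp(log^0.5 n) = {quasilinear(n):.1e}   "
          f"n*log^3 n = {n_polylog(n):.1e}")
```

Even this crude comparison makes the gap visible: the quasi-linear bounds stay within a few orders of magnitude of n, whereas the cubic bound is already astronomical for moderately large proofs.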

PCPs of proximity and composition: Besides constructing short PCPs, the chief contribution of our work [BGH+06] is the simplification of PCP constructions. Previous constructions of PCPs were extremely involved and elaborate. One of the main reasons for this is that “proof composition,” a key ingredient in all known PCP constructions, is a very involved process. We introduce “PCPs of proximity” (a variant of PCPs mentioned earlier in Section 1), which facilitate very smooth composition: in fact, composition becomes almost definitional and syntactic given this variant. This new variant of PCPs and the corresponding composition have played a critical role in subsequent improvements in short PCP constructions (due to Ben-Sasson and Sudan [BS05] and Dinur [Din07]). Furthermore, these simplifications of the original proof of the PCP Theorem, in the guise of PCPs of proximity and the new composition, led to an alternate purely combinatorial proof of the PCP Theorem, due to Dinur [Din07]. This work [BGH+06] was invited to the special issue of SIAM Journal on Computing on Randomness and Computation as well as the special issue of SIAM Journal on Computing for STOC 2004.

Efficient PCPs: In the context of efficient proof verifiers, the running time of the verification process is as important a parameter as the length of the PCP. In fact, the emphasis of the initial work of Babai et al. [BFLS91] in the area of PCPs was on the time taken by the verifier and the length of the proof in the new format. In contrast, most succeeding works on PCPs have focused on the query complexity of the verifier and derived many strong inapproximability results for a wide variety of optimization problems; however, no later work seems to have returned to the question of the extreme efficiency of the verifier. This is unfortunate because the efficiency parameters are significant in the context of proof verification. Furthermore, all short PCP constructions after the work of Babai et al. [BFLS91] achieved their improvements in PCP size by sacrificing the efficiency of the verifier. In a subsequent work with Ben-Sasson, Goldreich, Sudan and Vadhan [BGH+05], I show that this need not be the case and that all existing short PCP constructions can be accompanied by an efficient verifier (where the verifier’s efficiency is with respect to running time). More formally, we show that every language in NP has a probabilistically checkable proof where the verifier’s running time is polylogarithmic in the input size and the length of the probabilistically checkable proof is only polylogarithmically larger than the length of the classical proof.

3 Property Testing and Locally Testable Codes

Property Testing: Today’s world abounds with massive data-sets, and one needs to perform computations on these data-sets where even reading the entire data-set can be prohibitively expensive. This has led to the study of sublinear algorithms, where one performs computations in time sublinear in the size of the object. An important sub-class of sublinear algorithms is that of property testing. In property testing, one is given the ability to probe a particular object, and the task is to determine whether the object satisfies a given pre-determined property by making only a few probes to the object. Since one is not allowed to read the object in its entirety but only to make a few probes to it, it is not possible to determine exactly whether the object satisfies the property. However, one can decide whether the object satisfies the property or is far from having the property (in the sense that the object has to be changed at a considerable number of locations in order to make it satisfy the property). Property testing has proven useful in several areas of computer science in recent years, especially coding theory.
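As a toy illustration of this “few probes” paradigm, consider testing whether a long binary string is all zeros or ε-far from all zeros (i.e., at least an ε fraction of its bits must be changed to make it all zeros). The following sketch is a standard textbook-style example rather than a result from any of the papers discussed here; the function and parameter names are my own.

```python
import math
import random

def test_all_zeros(get_bit, n, eps=0.1, trials=None):
    """Probabilistic tester: always accepts the all-zeros string of length n;
    rejects strings that are eps-far from all zeros with probability >= 2/3.

    Only the probed positions are read, via the oracle get_bit(i)."""
    if trials is None:
        trials = math.ceil(2 / eps)        # O(1/eps) probes suffice
    for _ in range(trials):
        i = random.randrange(n)
        if get_bit(i) == 1:                # a witness that the string is not all zeros
            return False                   # reject
    return True                            # accept

# Example usage on made-up data: the all-zeros string is always accepted,
# while a string with a 10% fraction of ones is rejected with high probability.
n = 10_000
noisy = [1 if random.random() < 0.1 else 0 for _ in range(n)]
print(test_all_zeros(lambda i: 0, n))         # True
print(test_all_zeros(noisy.__getitem__, n))   # False (with probability >= 2/3)
```

The point of the example is only the access model: the tester never reads the whole object, yet it reliably distinguishes objects having the property from objects far from it.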

Coding Theory: An error-correcting code is a set of strings, called codewords, such that any two of them disagree in many positions. The minimum number of positions in which two distinct codewords differ is called the distance of the error-correcting code. The ratio of the logarithm of the number of codewords to the length of the codewords (the dimension of the ambient space) is called the rate of the code. Error-correcting codes are considered good if they have linear distance and constant rate. Good error-correcting codes have innumerable applications, e.g., communicating over a noisy channel. Locally testable codes arise naturally from the interaction of these two areas: property testing and coding theory.
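To make the definitions of distance and rate concrete, here is a small sketch that computes both for a classical toy example, the [7,4] Hamming code, by brute force. The generator matrix below is the standard one; the example only illustrates the definitions and is unrelated to the codes discussed in this statement.

```python
from itertools import product
from math import log2

# Generator matrix of the [7,4] Hamming code in standard form.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Encode a 4-bit message as a 7-bit codeword (arithmetic over GF(2))."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(msg) for msg in product([0, 1], repeat=4)]

# Distance: minimum Hamming distance between distinct codewords.  For a linear
# code this equals the minimum Hamming weight of a nonzero codeword.
distance = min(sum(c) for c in codewords if any(c))

# Rate: log2(number of codewords) divided by the codeword length.
rate = log2(len(codewords)) / len(codewords[0])

print(f"distance = {distance}, rate = {rate:.3f}")   # prints: distance = 3, rate = 0.571
```

This fixed code does not, of course, say anything about asymptotic goodness; it merely shows how the two parameters are measured.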

Locally testable codes: A code is said to be locally testable if a constant number of queries into a string suffice to determine whether the string is a codeword or is far from all codewords. In other words, one can determine if a given string is a codeword or is far from the code by merely probing the string at a few locations. Locally testable codes have numerous applications and are an essential part of PCP constructions. One of the big open questions in property testing is the construction of good locally testable codes.
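A classical example of a locally testable code is the Hadamard code, in which the codeword encoding a string a ∈ {0,1}^k is the table of all parities <a, x> mod 2 over x ∈ {0,1}^k. The Blum-Luby-Rubinfeld linearity test yields a 3-query local test for it; the sketch below illustrates this well-known test (the function names, trial count, and example data are my own choices).

```python
import random

def hadamard_local_test(table, k, trials=10):
    """3-query local test for the Hadamard code.

    `table` is a purported codeword: entry x (an integer in [0, 2^k)) holds
    one bit.  Each trial probes only three positions: table[x], table[y] and
    table[x ^ y].  Genuine codewords (linear functions) always pass, while
    words far from every codeword are rejected with high probability
    (the Blum-Luby-Rubinfeld linearity test)."""
    for _ in range(trials):
        x = random.randrange(2 ** k)
        y = random.randrange(2 ** k)
        if table[x] ^ table[y] != table[x ^ y]:
            return False                     # caught a violation of linearity
    return True

# Example on made-up data: the Hadamard encoding of a = 0b101 for k = 3.
k, a = 3, 0b101
codeword = [bin(a & x).count("1") % 2 for x in range(2 ** k)]   # bit <a, x>
print(hadamard_local_test(codeword, k))      # True: a genuine codeword

corrupted = [1 - b if random.random() < 0.3 else b for b in codeword]
print(hadamard_local_test(corrupted, k))     # likely False: far from the code
```

The Hadamard code is locally testable but has exponentially poor rate (k bits are encoded into 2^k bits), which is one way to see why constructing locally testable codes with good rate, as discussed next, is a genuinely different challenge.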

My research: In joint work with Ben-Sasson, Goldreich, Sudan and Vadhan [BGH+06], I show how to transform any PCP of proximity into a locally testable code with a similar blowup. Thus, the short PCP constructions of [BGH+06, BS05, Din07] in turn give rise to locally testable codes of comparable blowup. However, it is to be noted that since all these constructions involve a super-linear blowup in the PCP size, we do not get good locally testable codes: the rate of the corresponding error-correcting code is at best inverse polylogarithmic (if not inverse polynomial), rather than constant. On the negative side, in work with Ben-Sasson and Raskhodnikova [BHR05], I throw light on why constructing good locally testable codes might be difficult. Random low-density parity-check (LDPC) codes are a family of codes with extremely good error-correcting properties. We show that these codes are not locally testable in a very strong sense. More precisely, we show that most LDPC codes of constant rate are not testable even with a linear number of queries. As an immediate corollary of this result, we obtain properties that are easy to decide but hard to test.

4 Information Theory

One of the most fundamental quantities in information theory is the notion of the Shannon entropy of a random variable. Informally, the Shannon entropy (or just entropy) of a random variable X is a measure of the amount of randomness in X. More formally, given a random variable X taking values in a finite set S, the entropy of X, denoted by H[X], is defined to be the quantity H[X] = Σ_{x∈S} Pr[X = x] · log(1/Pr[X = x]). This quantity can be shown to be exactly the minimum expected number of bits required to encode a sample from X (up to an additive +1 term). This interpretation of H[X] in terms of the encoding length of X is a very useful one. Another important quantity in information theory is mutual information. Given two random variables X and Y, the mutual information measures the amount of information one random variable has about the other. Formally, the mutual information, denoted by I[X : Y], is defined as I[X : Y] = H[X] + H[Y] − H[XY], where H[XY] denotes the entropy of the joint random variable (X, Y).
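For concreteness, the short sketch below computes these quantities directly from the definitions for a small made-up joint distribution (the distribution is arbitrary and used only to illustrate the formulas).

```python
from math import log2

# An arbitrary joint distribution Pr[X = x, Y = y] over two binary variables.
joint = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}

def entropy(dist):
    """H[Z] = sum_z Pr[Z = z] * log2(1 / Pr[Z = z])."""
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

def marginal(joint, coord):
    """Marginal distribution of the coord-th coordinate of the joint pair."""
    m = {}
    for outcome, p in joint.items():
        m[outcome[coord]] = m.get(outcome[coord], 0.0) + p
    return m

H_X, H_Y, H_XY = entropy(marginal(joint, 0)), entropy(marginal(joint, 1)), entropy(joint)
I_XY = H_X + H_Y - H_XY          # mutual information I[X : Y]
print(f"H[X] = {H_X:.3f}, H[Y] = {H_Y:.3f}, H[XY] = {H_XY:.3f}, I[X:Y] = {I_XY:.3f}")
```

For this particular distribution the output is roughly H[X] = H[Y] = 1 and I[X:Y] ≈ 0.278, reflecting the fact that the two bits agree far more often than not.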

My research: In joint work with Jain, McAllester and Radhakrishnan [HJMR07], I give a characterization of mutual information similar to that of entropy. Our characterization is best understood in terms of the following two-player game. Let (X, Y) be a pair of random variables. Suppose Alice is given a sample x distributed according to X and needs to send a message z to Bob so that he can generate a correlated sample y distributed according to the conditional distribution Y|X=x. We show that the minimum expected number of bits that Alice needs to transmit to Bob to achieve the above (in the presence of shared randomness) is the mutual information I[X : Y] (up to lower-order logarithmic terms). As an immediate benefit of this interpretation of mutual information, we obtain a direct sum result in communication complexity, substantially improving on previously known direct sum results [CSWY01, JRS03, JRS05]. Furthermore, this simple interpretation of mutual information lends itself to simpler proofs of several known theorems, e.g., the reverse Shannon theorem [BSST02].
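To convey the flavor of the shared-randomness setting, here is a naive rejection-sampling sketch of correlated sampling: both players interpret the shared randomness as an i.i.d. sequence of samples from the marginal of Y, Alice picks an index whose sample passes a rejection test for the conditional distribution Y|X=x, and sends only that index. The sketch is my own illustration with made-up distributions; it produces a correctly distributed sample, but its communication is in general larger than I[X : Y], and the actual protocol of [HJMR07] is considerably more refined in how it selects and encodes the index.

```python
import math
import random

def correlated_sample(cond, marg, shared_seed, max_tries=100_000):
    """Naive rejection sampling with shared randomness.

    `marg[y]`  = Pr[Y = y]           (known to both players)
    `cond[y]`  = Pr[Y = y | X = x]   (known only to Alice, who holds x)
    Both players use `shared_seed` to generate the same i.i.d. samples
    y_1, y_2, ... from `marg`.  Alice returns the first accepted index;
    sending that index lets Bob output the same y, which is distributed
    exactly according to `cond`."""
    rng = random.Random(shared_seed)                 # the shared randomness
    values, weights = list(marg), list(marg.values())
    M = max(cond.get(v, 0.0) / marg[v] for v in values)      # envelope constant
    for index in range(1, max_tries + 1):
        y = rng.choices(values, weights)[0]                  # shared sample y_index
        if random.random() < cond.get(y, 0.0) / (M * marg[y]):   # Alice's private coin
            return index, y
    raise RuntimeError("failed to accept within max_tries")

# Made-up example: Y is uniform over {0, 1, 2} a priori, but biased given Alice's x.
marg = {0: 1/3, 1: 1/3, 2: 1/3}
cond_given_x = {0: 0.7, 1: 0.2, 2: 0.1}
index, y = correlated_sample(cond_given_x, marg, shared_seed=42)
bits = max(1, math.ceil(math.log2(index + 1)))
print(f"Alice sends index {index} (about {bits} bits); Bob outputs y = {y}")
```

In this naive scheme the expected index is governed by the worst-case likelihood ratio M rather than by the mutual information, which is roughly the gap that the more careful sampling procedure of [HJMR07] closes.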

5 Other Research

Besides the above, I have worked in a variety of other areas in theoretical computer science: proof complexity [BH03], network routing [HHN+08], resource tradeoffs [HIK+07], and automata theory [KBH99].

Proof Complexity: Proof complexity investigates the question “Do tautologies have short proofs?” The seminal work of Cook and Reckhow [CR79] shows that this question is equivalent to the NP vs. co-NP question. In fact, a proof of NP ≠ co-NP immediately proves that P ≠ NP. A natural initial approach to attack this problem is to study whether there are tautologies that are intractable (i.e., have long proofs) under certain specific proof systems instead of all proof systems. Various proof systems, such as resolution, polynomial calculus, and Frege proofs, have been extensively studied in this approach. The Frege proof system is one of the strongest proof systems and to this day has defeated all attempts to show the existence of a true statement that is intractable under it. Ajtai proved that the pigeonhole principle is intractable under a restricted version of this proof system, namely the bounded-depth Frege proof system [Ajt94]. This result was further strengthened by Beame et al. [BIK+92]. These proofs are some of the most involved proofs in proof complexity. In joint work [BH03] with Ben-Sasson, I provide an alternative proof of the intractability of the pigeonhole principle in the bounded-depth Frege proof system using the interpretation of proofs as 2-player games suggested by Pudlák and Buss [PB94].

Network Routing: In joint work with Hayes, Narayanan, Räcke and Radhakrishnan [HHN+08], I consider the problem of minimizing the average latency cost while obliviously routing traffic in a network with linear latency functions. We show that for the case when all routing requests are directed to a single target, there is a routing scheme with competitive ratio O(log n), where n denotes the number of nodes in the network. As a lower bound, we show that no oblivious scheme can obtain a competitive ratio better than Ω(√log n).

Resource Tradeoffs: In joint work with Ishai, Kilian, Nissim and Venkatesh [HIK+07], I investigate the following question: Is there a computational task that exhibits a strong tradeoff between the amount of communication and the amount of time needed for local computation? Under standard cryptographic assumptions, we show that there is a function f such that f(x, y) is easy to compute (knowing x and y) and has low communication complexity (when one player knows x and the other knows y); however, every low-communication protocol for computing f(x, y) requires an infeasible amount of local computation.

6 Thoughts on Future Work

As my previous work indicates, I love to work on fundamental problems in complexity theory, and I will continue to do so in the future. An overarching goal of my research efforts is to gain a better understanding of what is tractable and what is not. Like other major open questions in complexity theory, most of the problems I have been working on remain open despite the considerable progress of the last few years. I see myself working towards resolving these questions in the next few years. Three concrete problems that I have been working on and would like to spend time on in the near future are listed below.

Short PCPs and efficient proof-verification: The big question of whether there exist PCPs of linear size (or even size n · poly log n) with query complexity 3 that reject proofs of false assertions with probability at least 1/2 remains open. A similar question for locally testable codes (LTCs) is also open. The work of Ben-Sasson and Sudan [BS05] and Dinur [Din07] only demonstrates the existence of PCPs (and LTCs) of size n · poly log n with a large but constant query complexity. In recent unpublished work (with Ben-Sasson, Lachish and Matsliah) [BHLM07], I show that any construction of PCPs that also yields PCPs of proximity with similar properties cannot simultaneously be short and have query complexity 3 (while rejecting proofs of false assertions with probability 1/2). Since all known techniques for PCP constructions also yield PCPs of proximity, this work reveals that completely new ideas are required for short PCP constructions.

Quantum analogue of the mutual information characterization: The mutual information characterization in our paper [HJMR07], as mentioned before, gives easy proofs of several known results in information theory. The analogue of such a characterization in the quantum information world is still open. Such a quantum characterization would immediately imply the quantum reverse Shannon theorem, which is currently known only in certain special cases.

Lower bounds in Frege proof systems: As mentioned in the earlier section, currently no lower bounds are known for any tautologies in the Frege proof system. I strongly believe that the proof techniques in my work with Ben-Sasson [BH03] can be generalized, and I hope to prove the intractability of other statements, such as random 3CNFs, in the Frege proof system or some restricted form of it.

Besides the above-mentioned topics, I am also interested in other areas of theoretical computer science (e.g., derandomization, linear programming, and semidefinite-programming-based algorithms). In the coming years, I look forward to doing stimulating research: broadening my horizons, collaborating with people from different academic backgrounds, learning new techniques, and solving interesting problems.

References

[Ajt94] Miklós Ajtai. The complexity of the pigeonhole principle. Combinatorica, 14(4):417–433, 1994. (Preliminary version in 20th STOC, 1988). doi:10.1007/BF01302964.
[ALM+98] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and the hardness of approximation problems. Journal of the ACM, 45(3):501–555, May 1998. (Preliminary version in 33rd FOCS, 1992). doi:10.1145/278298.278306.
[AS98] Sanjeev Arora and Shmuel Safra. Probabilistic checking of proofs: A new characterization of NP. Journal of the ACM, 45(1):70–122, January 1998. (Preliminary version in 33rd FOCS, 1992). doi:10.1145/273865.273901.
[BFLS91] László Babai, Lance Fortnow, Leonid A. Levin, and Mario Szegedy. Checking computations in polylogarithmic time. In Proceedings of the 23rd ACM Symposium on Theory of Computing (STOC), pages 21–31. New Orleans, Louisiana, 6–8 May 1991. doi:10.1145/103418.103428.
[BGH+05] Eli Ben-Sasson, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. Short PCPs verifiable in polylogarithmic time. In Proceedings of the 20th IEEE Conference on Computational Complexity, pages 120–134. San Jose, California, 12–15 June 2005. doi:10.1109/CCC.2005.27.
[BGH+06] ———. Robust PCPs of proximity, shorter PCPs and applications to coding. SIAM Journal on Computing, 36(4):889–974, 2006. (Special issue on Randomness and Computation; preliminary version in 36th STOC, 2004). doi:10.1137/S0097539705446810.
[BH03] Eli Ben-Sasson and Prahladh Harsha. Lower bounds for bounded depth Frege proofs via Buss-Pudlák games. Technical Report TR03-004, Electronic Colloquium on Computational Complexity, 2003. Available from: http://www.eccc.uni-trier.de/eccc-reports/2003/TR03-004/.
[BHLM07] Eli Ben-Sasson, Prahladh Harsha, Oded Lachish, and Arie Matsliah. Sound 3-query PCPPs are long. Technical Report TR07-127, Electronic Colloquium on Computational Complexity, 2007. Available from: http://eccc.hpi-web.de/eccc-reports/2007/TR07-127/index.html.
[BHR05] Eli Ben-Sasson, Prahladh Harsha, and Sofya Raskhodnikova. Some 3CNF properties are hard to test. SIAM Journal on Computing, 35(1):1–21, 2005. (Preliminary version in 35th STOC, 2003). doi:10.1137/S0097539704445445.
[BIK+92] Paul Beame, Russell Impagliazzo, Jan Krajíček, Toniann Pitassi, Pavel Pudlák, and Alan Woods. Exponential lower bounds for the pigeonhole principle. In Proceedings of the 24th ACM Symposium on Theory of Computing (STOC), pages 200–220. Victoria, British Columbia, Canada, 4–6 May 1992. doi:10.1145/129712.129733.
[BS05] Eli Ben-Sasson and Madhu Sudan. Simple PCPs with poly-log rate and query complexity. In Proceedings of the 37th ACM Symposium on Theory of Computing (STOC), pages 266–275. Baltimore, Maryland, 21–24 May 2005. doi:10.1145/1060590.1060631.
[BSST02] Charles H. Bennett, Peter W. Shor, John A. Smolin, and Ashish V. Thapliyal. Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem. IEEE Transactions on Information Theory, 48(10):2637–2655, October 2002. (Preliminary version in Proc. Quantum Information: Theory, Experiment and Perspectives, Gdansk, Poland, 10–18 July 2001). doi:10.1109/TIT.2002.802612.
[CR79] Stephen A. Cook and Robert A. Reckhow. The relative efficiency of propositional proof systems. Journal of Symbolic Logic, 44(1):36–50, March 1979. (Preliminary version in 6th STOC, 1974). doi:10.2307/2273702.
[CSWY01] Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, and Andrew Chi-Chih Yao. Informational complexity and the direct sum problem for simultaneous message complexity. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 270–278. Las Vegas, Nevada, 14–17 October 2001. doi:10.1109/SFCS.2001.959901.
[Din07] Irit Dinur. The PCP theorem by gap amplification. Journal of the ACM, 54(3):12, 2007. (Preliminary version in 38th STOC, 2006). doi:10.1145/1236457.1236459.
[Fei98] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, July 1998. (Preliminary version in 28th STOC, 1996). doi:10.1145/285055.285059.
[FGL+96] Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy. Interactive proofs and the hardness of approximating cliques. Journal of the ACM, 43(2):268–292, March 1996. (Preliminary version in 32nd FOCS, 1991). doi:10.1145/226643.226652.
[GLST98] Venkatesan Guruswami, Daniel Lewin, Madhu Sudan, and Luca Trevisan. A tight characterization of NP with 3-query PCPs. In Proceedings of the 39th IEEE Symposium on Foundations of Computer Science (FOCS), pages 18–27. Palo Alto, California, 8–11 November 1998. doi:10.1109/SFCS.1998.743424.
[Hås99] Johan Håstad. Clique is hard to approximate within n^{1−ε}. Acta Mathematica, 182(1):105–142, 1999. (Preliminary version in 28th STOC, 1996 and 37th FOCS, 1997). doi:10.1007/BF02392825.
[Hås01] ———. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, July 2001. (Preliminary version in 29th STOC, 1997). doi:10.1145/502090.502098.
[HHN+08] Prahladh Harsha, Thomas Hayes, Hariharan Narayanan, Harald Räcke, and Jaikumar Radhakrishnan. Minimizing average latency in oblivious routing. In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 200–207. San Francisco, California, 20–22 January 2008.
[HIK+07] Prahladh Harsha, Yuval Ishai, Joe Kilian, Kobbi Nissim, and Srinivas Venkatesh. Communication vs. computation. Computational Complexity, 16(1):1–33, 2007. (Preliminary version in 31st ICALP, 2004). doi:10.1007/s00037-007-0224-y.
[HJMR07] Prahladh Harsha, Rahul Jain, David McAllester, and Jaikumar Radhakrishnan. The communication complexity of correlation. In Proceedings of the 22nd IEEE Conference on Computational Complexity, pages 10–23. San Diego, California, 13–16 June 2007. doi:10.1109/CCC.2007.32.
[HS00] Prahladh Harsha and Madhu Sudan. Small PCPs with low query complexity. Computational Complexity, 9(3–4):157–201, December 2000. (Preliminary version in 18th STACS, 2001). doi:10.1007/PL00001606.
[JRS03] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. A direct sum theorem in communication complexity via message compression. In Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger, eds., Proceedings of the 30th International Colloquium on Automata, Languages and Programming (ICALP), volume 2719 of Lecture Notes in Computer Science, pages 300–315. Springer-Verlag, Eindhoven, Netherlands, 30 June–4 July 2003. Available from: http://link.springer.de/link/service/series/0558/bibs/2719/27190300.htm.
[JRS05] ———. Prior entanglement, message compression and privacy in quantum communication. In Proceedings of the 20th IEEE Conference on Computational Complexity, pages 285–296. San Jose, California, 12–15 June 2005. doi:10.1109/CCC.2005.24.
[KBH99] Kamala Krithivasan, Sakthi Balan, and Prahladh Harsha. Distributed processing in automata. International Journal of Foundations of Computer Science, 10(4):443–463, December 1999. doi:10.1142/S0129054199000319.
[PB94] Pavel Pudlák and Samuel R. Buss. How to lie without being (easily) convicted and the length of proofs in propositional calculus. In Leszek Pacholski and Jerzy Tiuryn, eds., Proceedings of the 8th International Workshop, Conference for Computer Science Logic (CSL), volume 933 of Lecture Notes in Computer Science, pages 151–162. Springer, Kazimierz, Poland, 25–30 September 1994. doi:10.1007/BFb0022253.