Lecture 24: Computational Complexity Theory

Abhishek Shetty, Raghav Malhotra
Undergraduate Department, Indian Institute of Science

Instructor: Chandan Saha
Computer Science and Automation, Indian Institute of Science

October 29, 2015

1 #P-Completeness

To define the concept of #P-complete languages, we look at the problem of whether #P = FP. Let $f \in \#\mathsf{P}$ be a function. We define the language $T_f = \{\langle x, i \rangle : f(x)_i = 1\}$. A machine $M$ is said to have oracle access to the function $f$, denoted $M^f$, if it has access to an oracle for the language $T_f$. With this generalization, we have the following definitions.

Definition 1.1. Let $g : \{0,1\}^* \to \{0,1\}^*$. We say that $g \in \mathsf{FP}^f$ if there exists a polynomial-time Turing machine $M$ such that $M^f$ computes $g$.

Definition 1.2. Let $g : \{0,1\}^* \to \{0,1\}^*$. Then $g \in \#\mathsf{P}$-Hard if for every $h \in \#\mathsf{P}$ we have $h \in \mathsf{FP}^g$.

Definition 1.3. $\#\mathsf{P}$-Complete $= \#\mathsf{P} \cap \#\mathsf{P}$-Hard.

The following theorem is immediate from the definitions.

Theorem 1.1. If there exists $f : \{0,1\}^* \to \{0,1\}^*$ such that $f \in \mathsf{FP} \cap \#\mathsf{P}$-Hard, then $\#\mathsf{P} = \mathsf{FP}$.

Definition 1.4. A language $A \in \mathsf{NP}$ Levin-reduces to a language $B \in \mathsf{NP}$ if there are polynomial-time computable functions $f$ and $h$ such that
• $x \in A \iff f(x) \in B$, and
• $u$ is a certificate for $x \iff h(u)$ is a certificate for $f(x)$.

Definition 1.5. A Levin reduction is said to be parsimonious if the function $h$ is a bijection.

From the definitions it is clear that if a language $B$ has a parsimonious reduction from every language $A \in \mathsf{NP}$, then $\#B$ is #P-complete (membership in #P follows since $B \in \mathsf{NP}$).

Theorem 1.2. #SAT is #P-Complete.

Proof. The reduction given by the Cook-Levin Theorem from any language in NP to SAT can be made parsimonious. Thus #SAT is #P-Complete.

The question of whether every NP-complete language corresponds to a #P-complete counting problem remains open. It is known that the counting versions of NP-complete problems such as HAMCYCLE and CLIQUE are #P-complete. Interestingly, the counting versions of problems such as 2SAT and CYCLE are #P-complete even though their decision versions are solvable in polynomial time. We take a quick look at another such problem.

Definition 1.6. Let $A \in M_{n \times n}(\mathbb{C})$. The permanent of $A$ is defined as
$$\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i\sigma(i)}.$$

Theorem 1.3. Let $G$ be a bipartite graph and let $A$ be its biadjacency matrix. Then the number of perfect matchings in $G$ equals the permanent of $A$.

Proof. This is clear, since each perfect matching corresponds uniquely to a permutation $\sigma$ with $A_{i\sigma(i)} = 1$ for every $i$.

The existence of a perfect matching in a graph can be decided in polynomial time [Edm65], but the following theorem tells us that the counting version of the same problem is #P-complete.

Theorem 1.4 ([Val79], [AB09]). The permanent of 0-1 matrices is #P-Complete.

2 Approximation of #P

Definition 2.1. An algorithm $A$ is said to be a $(1+\epsilon)$-approximation for $f \in \#\mathsf{P}$ if for all $x$,
$$|A(x) - f(x)| \le \epsilon f(x).$$

Definition 2.2. An algorithm $A$ is said to be a Fully Polynomial-Time Approximation Scheme (FPAS) for $f \in \#\mathsf{P}$ if for every $\epsilon > 0$, $A$ is a $(1+\epsilon)$-approximation algorithm running in time $\operatorname{poly}(|x|, \epsilon^{-1})$.

Definition 2.3. An algorithm $A$ is said to be a Fully Polynomial-Time Randomized Approximation Scheme (FPRAS) for $f \in \#\mathsf{P}$ if for every $\epsilon > 0$, $A$ runs in time $\operatorname{poly}(|x|, \epsilon^{-1})$ and
$$\Pr\big[|A(x) - f(x)| \le \epsilon f(x)\big] \ge \frac{2}{3}.$$

We note that the error probability can be reduced to $2^{-|x|}$ by repeating the algorithm independently polynomially many times and outputting the median of the values; the analysis is identical to the boosting argument for BPP.
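As a concrete illustration of this median trick, here is a minimal sketch (not part of the original notes; the helper names `amplify` and `toy_approx` are hypothetical). If a single run of a randomized approximator lands in the interval $[(1-\epsilon)f(x), (1+\epsilon)f(x)]$ with probability at least $2/3$, then the median of $k$ independent runs leaves that interval only if at least half the runs do, which by a Chernoff bound happens with probability $2^{-\Omega(k)}$.

```python
import random
from statistics import median

def amplify(approx, x, repetitions):
    """Median-of-repetitions amplification for a randomized approximator.

    If each call approx(x) falls in [(1-eps)f(x), (1+eps)f(x)] with
    probability >= 2/3, the median of `repetitions` independent calls
    falls in the same interval except with probability exp(-Omega(k)),
    since the median can leave an interval only if at least half the
    samples do.
    """
    return median(approx(x) for _ in range(repetitions))

# Toy stand-in for an FPRAS: returns f(x) = x up to 10% relative error
# with probability 3/4, and a wildly wrong value otherwise.
def toy_approx(x):
    if random.random() < 0.75:
        return x * random.uniform(0.9, 1.1)
    return x * random.uniform(0.0, 10.0)

if __name__ == "__main__":
    # With 101 repetitions the median is within 10% of 1000
    # with overwhelming probability.
    print(amplify(toy_approx, 1000, repetitions=101))
```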
Various #P-complete problems behave differently under approximation. The permanent admits an FPRAS [JVV86], while for problems like #CYCLE the existence of such a scheme would imply P = NP. Approximating #P problems is believed to be easier than computing them exactly, as can be seen from the following two theorems, which will be proven in subsequent lectures.

Theorem 2.1. Every $f \in \#\mathsf{P}$ can be approximated by a BPP machine with access to a SAT oracle.

Theorem 2.2 ([Tod91], [AB09]). $\mathsf{PH} \subseteq \mathsf{P}^{\#\mathsf{P}}$.

Since the Sipser-Gács-Lautemann Theorem gives $\mathsf{BPP} \subseteq \Sigma_2^p$, we have $\mathsf{BPP}^{\mathsf{SAT}} \subseteq \Sigma_3^p$, whereas Toda's Theorem places all of $\mathsf{PH}$ inside $\mathsf{P}^{\#\mathsf{P}}$. Thus, unless the polynomial hierarchy collapses, exact computation of #P functions is strictly harder than approximation. We also note that the SAT oracle in Theorem 2.1 is necessary because approximating #SAT is NP-hard: if the approximation could be achieved in BPP alone, we would have $\mathsf{NP} \subseteq \mathsf{BPP}$, which implies a collapse of the polynomial hierarchy.

Working towards the proofs of the above theorems, we revisit the definition of pairwise independent hash function families, which we saw in the proof of the Goldwasser-Sipser Theorem.

Definition 2.4. A family of functions $\mathcal{H}_{n,k} = \{h : \{0,1\}^n \to \{0,1\}^k\}$ is said to be a pairwise independent hash function family if for all $x, x' \in \{0,1\}^n$ with $x \ne x'$ and all $y, y' \in \{0,1\}^k$,
$$\Pr_{h \in_R \mathcal{H}_{n,k}}\big[h(x) = y \wedge h(x') = y'\big] = 2^{-2k}.$$

The following claim about such families is clear from the definition.

Lemma 2.3. For all $x \in \{0,1\}^n$ and all $y \in \{0,1\}^k$,
$$\Pr_{h \in_R \mathcal{H}_{n,k}}\big[h(x) = y\big] = 2^{-k}.$$

Theorem 2.4. $\mathcal{H}_{n,n} = \{h_{a,b} : \mathrm{GF}(2^n) \to \mathrm{GF}(2^n) \mid a, b \in \mathrm{GF}(2^n)\}$, where $h_{a,b}(x) = ax + b$, is a pairwise independent hash function family.

Proof. Consider the system of equations
$$ax + b = y, \qquad ax' + b = y'.$$
Since $x - x' \ne 0$, the system has a unique solution in $(a, b)$, and thus the probability that $h_{a,b}(x) = y$ and $h_{a,b}(x') = y'$ is exactly $2^{-2n}$.

Remark. $\mathrm{GF}(2^n)$ denotes the Galois field of size $2^n$, the unique finite field (up to isomorphism) of order $2^n$; we have $\mathrm{GF}(2^n) \cong \mathbb{F}_2[x]/\langle p \rangle$ for an irreducible polynomial $p$ of degree $n$ [Lan02]. Other examples of pairwise independent hash families were considered in Lecture 17 and are not revisited here.

Towards the proof of Theorem 2.1, we note that since #SAT is #P-complete, it suffices to show that #SAT can be approximated in $\mathsf{BPP}^{\mathsf{SAT}}$. Moreover, a 2-approximation (i.e., an approximation within a multiplicative factor of 2) suffices: applying it to the conjunction of $k$ copies of the formula on disjoint sets of variables estimates $(\#\varphi)^k$ within a factor of 2, and taking $k$-th roots gives a $2^{1/k}$-factor approximation of $\#\varphi$, which is a $(1+\epsilon)$-approximation for $k = O(\epsilon^{-1})$.

References

[AB09] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, 2009.

[Edm65] Jack Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17:449-467, 1965.

[JVV86] Mark R. Jerrum, Leslie G. Valiant, and Vijay V. Vazirani. Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43:169-188, 1986.

[Lan02] Serge Lang. Algebra, revised third edition. Graduate Texts in Mathematics 211. Springer, 2002.

[Tod91] Seinosuke Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20(5):865-877, 1991.

[Val79] Leslie G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8(2):189-201, 1979.
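As a small numerical sanity check of the hash family $\mathcal{H}_{n,n}$ from Theorem 2.4, the sketch below (an illustration, not part of the original notes) implements $h_{a,b}(x) = ax + b$ over $\mathrm{GF}(2^4)$, representing field elements as 4-bit integers, reducing products modulo the irreducible polynomial $x^4 + x + 1$, and using XOR for addition in characteristic 2. The estimated probability $\Pr[h(x) = y \wedge h(x') = y']$ should come out close to $2^{-2n} = 1/256$.

```python
import random

N = 4                 # work over GF(2^4); field elements are 4-bit integers
IRRED = 0b10011       # x^4 + x + 1, irreducible over F_2

def gf_mul(a: int, b: int) -> int:
    """Multiply a and b in GF(2^N): carry-less product reduced mod IRRED."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):   # degree reached N: subtract (XOR) the modulus
            a ^= IRRED
    return result

def sample_hash():
    """Draw h_{a,b}(x) = a*x + b with a, b uniform in GF(2^N)."""
    a = random.randrange(1 << N)
    b = random.randrange(1 << N)
    return lambda x: gf_mul(a, x) ^ b   # addition in GF(2^N) is XOR

if __name__ == "__main__":
    x, xp, y, yp = 3, 5, 7, 12          # any fixed x != x' and targets y, y'
    trials = 200_000
    hits = 0
    for _ in range(trials):
        h = sample_hash()
        if h(x) == y and h(xp) == yp:
            hits += 1
    # Both printed values should be close to 1/256 ~ 0.0039.
    print(hits / trials, "vs", 1 / (1 << (2 * N)))
```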