The Complexity of Public-Key Cryptography
16 pages · PDF · 1020 KB
Recommended publications
ETSI GR QSC 001 V1.1.1 (2016-07)
ETSI GR QSC 001 V1.1.1 (2016-07). Group Report: Quantum-Safe Cryptography (QSC); Quantum-safe algorithmic framework.
Reference: DGR/QSC-001. Keywords: algorithm, authentication, confidentiality, security.
ETSI, 650 Route des Lucioles, F-06921 Sophia Antipolis Cedex, France. The document can be downloaded from http://www.etsi.org/standards-search.
The Strongish Planted Clique Hypothesis and Its Consequences
The Strongish Planted Clique Hypothesis and Its Consequences
Pasin Manurangsi (Google Research, Mountain View, CA, USA), Aviad Rubinstein (Stanford University, CA, USA), Tselil Schramm (Stanford University, CA, USA)

Abstract: We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^{Ω(log n)} (so that the state-of-the-art running time of n^{O(log n)} is optimal up to a constant in the exponent). We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial-time algorithm achieves an o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^{o(1)} factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k. Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.

2012 ACM Subject Classification: Theory of computation → Problems, reductions and completeness; Theory of computation → Fixed parameter tractability
Keywords and phrases: Planted Clique, Densest k-Subgraph, Hardness of Approximation
Digital Object Identifier: 10.4230/LIPIcs.ITCS.2021.10. A full version of the paper is available at https://arxiv.org/abs/2011.05555.
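A minimal sketch may make the quasi-polynomial baseline concrete: brute-force enumeration of logarithmic-size subsets already detects a planted clique in n^{O(log n)} time, and SPCH asserts this is essentially optimal. The subset-size constant 3 and the toy parameters below are illustrative assumptions, not values from the paper.

```python
# Sketch of the n^{O(log n)} baseline: a random G(n, 1/2) has max clique
# ~ 2*log2(n) with high probability, so brute-forcing all subsets of size
# ~ 3*log2(n) distinguishes it from a graph with a planted clique of at
# least that size. (Illustrative assumptions throughout.)
import math
import random
from itertools import combinations

def detect_planted_clique(n, edges):
    """edges: set of frozenset({u, v}). Outer loop gives n^{O(log n)} time."""
    s = math.ceil(3 * math.log2(n))
    for subset in combinations(range(n), s):
        if all(frozenset((u, v)) in edges for u, v in combinations(subset, 2)):
            return True  # a clique this large is overwhelmingly unlikely by chance
    return False

n = 16
g = {frozenset((u, v)) for u, v in combinations(range(n), 2) if random.random() < 0.5}
print(detect_planted_clique(n, g))                               # almost surely False
g |= {frozenset((u, v)) for u, v in combinations(range(12), 2)}  # plant a 12-clique
print(detect_planted_clique(n, g))                               # True
```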
Scanning Probe Diamondoid Mechanosynthesis
IMM White Paper: Scanning Probe Diamondoid Mechanosynthesis. Prepared for the International Technology Roadmap for Productive Nanosystems, 1 August 2007. D.R. Forrest, R. A. Freitas, N. Jacobstein

One proposed pathway to atomically precise manufacturing is scanning probe diamondoid mechanosynthesis (DMS): employing scanning probe technology for positional control in combination with novel reactive tips to fabricate atomically precise diamondoid components under positional control. This pathway has its roots in the 1986 book Engines of Creation, in which the manufacture of diamondoid parts was proposed as a long-term objective by Drexler [1], and in the 1989 demonstration by Donald Eigler at IBM that individual atoms could be manipulated by a scanning tunnelling microscope [2]. The proposed DMS-based pathway would skip the intermediate enabling technologies proposed by Drexler [1a, 1b, 1c] (these begin with polymeric structures and solution-phase synthesis) and would instead move toward advanced DMS in a more direct way. Although DMS has not yet been realized experimentally, there is a strong base of experimental results and theory indicating that it can be achieved in the near term.
• Scanning probe positional assembly with single atoms has been successfully demonstrated by different research groups for Fe and CO on Ag, Si on Si, and H on Si and CNHCH3.
• Theoretical treatments of tip reactions show that carbon dimers can be transferred to diamond surfaces with high fidelity.
• A study on tip design showed that many variations on a design turn out to be suitable for accurate carbon dimer placement. Therefore, efforts can be focused on the variations of tooltips of many kinds that are easier to synthesize.
Algebraic Cryptanalysis of Hidden Field Equations Family
MASARYK UNIVERSITY, FACULTY OF INFORMATICS
Algebraic cryptanalysis of the Hidden Field Equations family. Bachelor's thesis, Adam Janovský, Brno, Spring 2016. Advisor: prof. RNDr. Jan Slovák, DrSc.

Abstract: The goal of this thesis is to present the Gröbner basis as a tool for algebraic cryptanalysis. First, the thesis briefly introduces asymmetric cryptography. Next, the Gröbner basis is presented, and it is explained how a Gröbner basis can be used for algorithmic equation solving. Further, a fast algorithm for Gröbner basis computation, F4, is presented. The thesis then introduces the basic variant of the Hidden Field Equations cryptosystem and discusses its vulnerability to algebraic attacks from the Gröbner basis perspective. Finally, the thesis gives an overview of recent results in the algebraic cryptanalysis of Hidden Field Equations and suggests parameters that could make the algebraic attacks intractable.

Keywords: Gröbner basis, cryptanalysis, Hidden Field Equations, F4

Contents: 1 Introduction (1.1 Notation); 2 Preliminaries (2.1 Asymmetric cryptography: overview, RSA; 2.2 Gröbner basis; 2.3 Equation solving); 3 F4 algorithm for computing a Gröbner basis; 4 Hidden Field Equations (4.1 Overview of HFE and its parameters; 4.2 Encryption and decryption; 4.3 Public key derivation); 5 Algebraic attacks on HFE (5.1 Algebraic attack for q = 2; 5.2 Algebraic attack for odd q); 6 Conclusions; Appendices: A SAGE code; B Gröbner basis for q = 11

1 Introduction: The thesis studies algebraic attacks against the basic variant of the Hidden Field Equations (HFE) cipher.
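As a toy illustration of how a Gröbner basis enables algorithmic equation solving, the following sketch uses SymPy (an assumed stand-in here; the thesis's own code in Appendix A is SAGE) to put a small polynomial system into triangular form:

```python
# A minimal sketch of Groebner-basis equation solving with SymPy.
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# Toy system: the unit circle intersected with the line x = y.
system = [x**2 + y**2 - 1, x - y]

# A lexicographic Groebner basis eliminates variables, yielding a
# "triangular" system, analogous to row echelon form for linear systems.
G = groebner(system, x, y, order='lex')
print(list(G))                 # [x - y, 2*y**2 - 1]

# The last polynomial involves y alone; solve it and back-substitute.
print(solve(list(G), [x, y]))  # [(-sqrt(2)/2, -sqrt(2)/2), (sqrt(2)/2, sqrt(2)/2)]
```

The same elimination idea, scaled up via F4, is what turns the HFE public key's quadratic system into an object an attacker can process.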
Computational Lower Bounds for Community Detection on Random Graphs
JMLR: Workshop and Conference Proceedings vol 40:1–30, 2015
Computational Lower Bounds for Community Detection on Random Graphs
Bruce Hajek (Department of ECE, University of Illinois at Urbana-Champaign, Urbana, IL), Yihong Wu (Department of ECE, University of Illinois at Urbana-Champaign, Urbana, IL), Jiaming Xu (Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA)

Abstract: This paper studies the problem of detecting the presence of a small dense community planted in a large Erdős–Rényi random graph G(N, q), where the edge probability within the community exceeds q by a constant factor. Assuming the hardness of the planted clique detection problem, we show that the computational complexity of detecting the community exhibits the following phase transition phenomenon: as the graph size N grows and the graph becomes sparser according to q = N^{-α}, there exists a critical value of α = 2/3, below which there exists a computationally intensive procedure that can detect far smaller communities than any computationally efficient procedure, and above which a linear-time procedure is statistically optimal. The results also lead to average-case hardness results for recovering the dense community and approximating the densest K-subgraph.

1. Introduction: Networks often exhibit community structure, with many edges joining the vertices of the same community and relatively few edges joining vertices of different communities. Detecting communities in networks has received a large amount of attention and has found numerous applications in social and biological sciences, etc. (see, e.g., the exposition Fortunato (2010) and the references therein).
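In symbols, the phase transition claimed in the abstract can be restated as follows (a compact paraphrase, not a formula quoted from the paper; the precise community-size scaling is spelled out there):

```latex
\[
q = N^{-\alpha}, \qquad
\begin{cases}
\alpha < \tfrac{2}{3}: & \text{a computationally intensive procedure detects far smaller} \\
                       & \text{communities than any computationally efficient one,} \\[2pt]
\alpha > \tfrac{2}{3}: & \text{a linear-time procedure is statistically optimal.}
\end{cases}
\]
```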
Building Secure Public Key Encryption Scheme from Hidden Field Equations
Hindawi Security and Communication Networks, Volume 2017, Article ID 9289410, 6 pages. https://doi.org/10.1155/2017/9289410
Research Article: Building Secure Public Key Encryption Scheme from Hidden Field Equations
Yuan Ping (1,2), Baocang Wang (1,3), Yuehua Yang (1), and Shengli Tian (1)
1 School of Information Engineering, Xuchang University, Xuchang 461000, China; 2 Guizhou Provincial Key Laboratory of Public Big Data, Guiyang 550025, China; 3 State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an 710071, China
Correspondence should be addressed to Baocang Wang. Received 4 April 2017; Accepted 5 June 2017; Published 10 July 2017. Academic Editor: Dengpan Ye
Copyright © 2017 Yuan Ping et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Multivariate public key cryptography is a set of cryptographic schemes built from the NP-hardness of solving quadratic equations over finite fields, amongst which the hidden field equations (HFE) family of schemes remains the most famous. However, the original HFE scheme was insecure, and the follow-up modifications were shown to be still vulnerable to attacks. In this paper, we propose a new variant of the HFE scheme by considering the special equation x^2 = x defined over the finite field F_3, which holds when x = 0, 1. We observe that the equation can be used to further destroy the special structure of the underlying central map of the HFE scheme. It is shown that the proposed public key encryption scheme is secure against known attacks including the MinRank attack, the algebraic attacks, and the linearization equations attacks.
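The role of the special equation is easy to check directly. The equation x^2 = x is the reconstruction used above (recovered from the surrounding context), and the loop below simply verifies that over F_3 it holds exactly for x = 0 and x = 1:

```python
# Sanity check of the reconstructed "special equation" x^2 = x over F_3:
# it holds precisely for x = 0 and x = 1, and fails for x = 2.
for x in range(3):               # the three elements of F_3
    print(x, (x * x) % 3 == x)   # 0 True, 1 True, 2 False
```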
Practical Parallel Hypergraph Algorithms
Practical Parallel Hypergraph Algorithms
Julian Shun ([email protected]), MIT CSAIL

Abstract: While there has been significant work on parallel graph processing, there has been very surprisingly little work on high-performance hypergraph processing. This paper presents a collection of efficient parallel algorithms for hypergraph processing, including algorithms for computing hypertrees, hyperpaths, betweenness centrality, maximal independent sets, k-core decomposition, connected components, PageRank, and single-source shortest paths. For these problems, we either provide new parallel algorithms or more efficient implementations than prior work. Furthermore, our algorithms are theoretically efficient in terms of work and depth. To implement our algorithms, we extend the Ligra graph processing framework to support hypergraphs, and our implementations benefit from graph optimizations including switching between sparse and dense traversals based on the frontier size, edge-aware parallelization, using buckets to prioritize processing of vertices, and compression. Our experiments on a 72-core machine show that our algorithms obtain excellent parallel speedups, and are significantly faster than algorithms in existing hypergraph processing frameworks.

[Figure 1: An example hypergraph representing the groups {v0, v1, v2}, {v1, v2, v3}, and {v0, v3}, and its bipartite representation.]

…improved compared to using a graph representation. Unfortunately, there has been little research on parallel hypergraph processing. The main contribution of this paper is a suite of efficient parallel hypergraph algorithms, including algorithms for hypertrees, hyperpaths, betweenness centrality, maximal independent sets…
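As a concrete illustration of the bipartite representation in Figure 1, here is a plain-Python sketch (not the paper's Ligra-based implementation): hypergraph vertices form one side, hyperedges the other, with a link whenever a vertex belongs to a hyperedge.

```python
# The example hypergraph from Figure 1: three hyperedges over four vertices.
hyperedges = {
    "e0": {"v0", "v1", "v2"},
    "e1": {"v1", "v2", "v3"},
    "e2": {"v0", "v3"},
}

# Build the bipartite adjacency: each vertex maps to the hyperedges
# containing it (the reverse direction is just `hyperedges` itself).
bipartite = {v: set() for members in hyperedges.values() for v in members}
for e, members in hyperedges.items():
    for v in members:
        bipartite[v].add(e)

print(sorted(bipartite["v1"]))  # ['e0', 'e1']: v1 lies in hyperedges e0 and e1
```

Traversals on the hypergraph then alternate between the two vertex classes, which is what lets a framework like Ligra reuse its graph machinery.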
Resource-Competitive Algorithms
Resource-Competitive Algorithms
Michael A. Bender (Department of Computer Science, Stony Brook University, Stony Brook, NY, USA), Varsha Dani (Department of Computer Science, University of New Mexico, Albuquerque, NM, USA), Jeremy T. Fineman (Department of Computer Science, Georgetown University, Washington, DC, USA), Seth Gilbert (Department of Computer Science, National University of Singapore, Singapore), Mahnush Movahedi (Department of Computer Science, University of New Mexico, Albuquerque, NM, USA), Seth Pettie (Electrical Eng. and Computer Science Dept., University of Michigan, Ann Arbor, MI, USA), Jared Saia (Department of Computer Science, University of New Mexico, Albuquerque, NM, USA), Maxwell Young (Computer Science and Engineering Dept., Mississippi State University, Starkville, MS, USA)

Abstract: The point of adversarial analysis is to model the worst-case performance of an algorithm. Unfortunately, this analysis may not always reflect performance in practice because the adversarial assumption can be overly pessimistic. In such cases, several techniques have been developed to provide a more refined understanding of how an algorithm performs, e.g., competitive analysis, parameterized analysis, and the theory of approximation algorithms. Here, we describe an analogous technique called resource competitiveness, tailored for distributed systems. Often there is an operational cost for adversarial behavior arising from bandwidth usage, computational power, energy limitations, etc. Modeling this cost provides some notion of how much disruption the adversary can inflict on the system. In parameterizing by this cost, we can design an algorithm with the following guarantee: if the adversary pays T, then the additional cost of the algorithm is some function of T.
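Schematically, and with notation assumed here rather than quoted from the survey, a resource-competitive guarantee has the following shape:

```latex
% Schematic resource-competitive guarantee (assumed notation):
% T is the adversary's total expenditure, f some slowly growing function.
\[
  \text{cost}_{\text{adversary}} = T
  \;\Longrightarrow\;
  \text{cost}_{\text{algorithm}} \;\le\; \text{(baseline cost)} + f(T).
\]
```

The point of the parameterization is that disruption is no longer free: the smaller f grows relative to T, the more the adversary must outspend the algorithm to keep disrupting it.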
Nanorobot Construction Crews
Nanorobot Construction Crews
Jaeseung Jeong, Ph.D., Department of Bio and Brain Engineering, KAIST

Nanorobotics
• Nanorobotics is the technology of creating machines or robots at or close to the microscopic scale of a nanometre (10^-9 metres). More specifically, nanorobotics refers to the still largely hypothetical nanotechnology engineering discipline of designing and building nanorobots.
• Nanorobots (nanobots, nanoids, nanites, or nanonites) would be typically devices ranging in size from 0.1-10 micrometers and constructed of nanoscale or molecular components. As no artificial non-biological nanorobots have yet been created, they remain a hypothetical concept.
• Another definition sometimes used is a robot which allows precision interactions with nanoscale objects, or can manipulate with nanoscale resolution.
• Following this definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation.
• Also, macroscale robots or microrobots which can move with nanoscale precision can also be considered nanorobots.

The T-1000 in Terminator 2: Judgment Day
• Since nanorobots would be microscopic in size, it would probably be necessary for very large numbers of them to work together to perform microscopic and macroscopic tasks.
• These nanorobot swarms are found in many science fiction stories, such as the T-1000 in Terminator 2: Judgment Day and the nanomachines in Metal Gear Solid.
• The word "nanobot" (also "nanite", "nanogene", or "nanoant") is often used to indicate this fictional context and is an informal or even pejorative term to refer to the engineering concept of nanorobots.
A Polynomial-Time Approximation Algorithm for All-Terminal Network Reliability
A Polynomial-Time Approximation Algorithm for All-Terminal Network Reliability
Heng Guo (School of Informatics, University of Edinburgh, Informatics Forum, Edinburgh, EH8 9AB, United Kingdom; https://orcid.org/0000-0001-8199-5596), Mark Jerrum (School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London, E1 4NS, United Kingdom; https://orcid.org/0000-0003-0863-7279)

Abstract: We give a fully polynomial-time randomized approximation scheme (FPRAS) for the all-terminal network reliability problem, which is to determine the probability that, in an undirected graph, assuming each edge fails independently, the remaining graph is still connected. Our main contribution is to confirm a conjecture by Gorodezky and Pak (Random Struct. Algorithms, 2014), that the expected running time of the "cluster-popping" algorithm in bi-directed graphs is bounded by a polynomial in the size of the input.

2012 ACM Subject Classification: Theory of computation → Generating random combinatorial structures
Keywords and phrases: Approximate counting, Network Reliability, Sampling, Markov chains
Digital Object Identifier: 10.4230/LIPIcs.ICALP.2018.68. Also available at https://arxiv.org/abs/1709.08561.
Acknowledgements: We thank Mark Huber for bringing reference [8] to our attention, Mark Walters for the coupling idea leading to Lemma 12, and Igor Pak for comments on an earlier version. We also thank the organizers of the "LMS – EPSRC Durham Symposium on Markov Processes, Mixing Times and Cutoff", where part of the work was carried out.

1. Introduction: Network reliability problems are extensively studied #P-hard problems [5] (see also [3, 22, 18, 2]).
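To make the quantity being approximated concrete, here is a naive Monte Carlo sketch of all-terminal reliability. It is emphatically not the paper's FPRAS or the cluster-popping algorithm (plain sampling gives no relative-error guarantee when the reliability is very close to 0 or 1); it only illustrates the problem statement.

```python
# Naive Monte Carlo estimate of all-terminal reliability: sample edge
# failures independently and count how often the surviving graph is connected.
import random

def connected(n, edges):
    """Union-find (with path halving) connectivity check over surviving edges."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

def reliability_estimate(n, edges, fail_prob, samples=50_000):
    hits = 0
    for _ in range(samples):
        surviving = [e for e in edges if random.random() > fail_prob]
        hits += connected(n, surviving)
    return hits / samples

# A 4-cycle with each edge failing independently with probability 0.1.
print(reliability_estimate(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0.1))
```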
Uniform Sampling Through the Lovász Local Lemma
Uniform Sampling Through the Lovász Local Lemma
Heng Guo, Mark Jerrum, and Jingcheng Liu (arXiv:1611.01647v4 [cs.DS], 15 Jan 2019)

Abstract: We propose a new algorithmic framework, called "partial rejection sampling", to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the "cycle-popping" algorithm for rooted spanning trees. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences.

1. Introduction: The Lovász Local Lemma [9] is a classical gem in combinatorics that guarantees the existence of a perfect object that avoids all events deemed to be "bad". The original proof is non-constructive, but there has been great progress in the algorithmic aspects of the local lemma. After a long line of research [3, 2, 30, 8, 34, 37], the celebrated result by Moser and Tardos [31] gives efficient algorithms to find such a perfect object under conditions that match the Lovász Local Lemma in the so-called variable framework. However, it is natural to ask whether, under the same condition, we can also sample a perfect object uniformly at random instead of merely finding one. Roughly speaking, the resampling algorithm by Moser and Tardos [31] works as follows. We initialize all variables randomly. If bad events occur, then we arbitrarily choose a bad event and resample all the involved variables. Unfortunately, it is not hard to see that this algorithm can produce biased samples.
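For concreteness, the following is a minimal sketch of the Moser-Tardos resampling loop just described, specialized to CNF formulas (each violated clause is a bad event). The helper names are ours, and, as the introduction notes, the assignment it returns is in general a biased rather than uniform sample; closing that gap is what partial rejection sampling is about.

```python
# Moser-Tardos resampling for CNF: variables start random; while some
# clause is violated, pick a violated clause and resample its variables.
import random

def moser_tardos(num_vars, clauses):
    """clauses: lists of signed literals, e.g. [1, -2] means (x1 or not x2)."""
    assignment = [random.random() < 0.5 for _ in range(num_vars)]
    def violated(clause):
        # A clause is a bad event when every one of its literals is false.
        return all(assignment[abs(l) - 1] != (l > 0) for l in clause)
    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assignment
        for lit in random.choice(bad):  # resample the chosen bad event's variables
            assignment[abs(lit) - 1] = random.random() < 0.5

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(moser_tardos(3, [[1, 2], [-1, 3], [-2, -3]]))
```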
Four Results of Jon Kleinberg: A Talk for the St. Petersburg Mathematical Society
Four Results of Jon Kleinberg: A Talk for the St. Petersburg Mathematical Society
Yury Lifshits, Steklov Institute of Mathematics at St. Petersburg, May 2007

Outline:
1 Nevanlinna Prize for Jon Kleinberg: history of the Nevanlinna Prize; who is Jon Kleinberg
2 Hubs and Authorities
3 Nearest Neighbors: Faster Than Brute Force
4 Navigation in a Small World
5 Bursty Structure in Streams

Part I: History of the Nevanlinna Prize; Career of Jon Kleinberg

Nevanlinna Prize: The Rolf Nevanlinna Prize is awarded once every 4 years at the International Congress of Mathematicians, for outstanding contributions in mathematical aspects of information sciences, including: all mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.