On Promise Problems
Recommended publications
- On Evaluating Human Problem Solving of Computationally Hard Problems
Sarah Carruthers and Ulrike Stege. Abstract: This article is concerned with how computer science, and more exactly computational complexity theory, can inform cognitive science. In particular, we suggest factors to be taken into account when investigating how people deal with computational hardness. This discussion will address the two upper levels of Marr's Level Theory: the computational level and the algorithmic level. Our reasons for believing that humans indeed deal with hard cognitive functions are threefold: (1) Several computationally hard functions are suggested in the literature, e.g., in the areas of visual search, visual perception and analogical reasoning, linguistic processing, and decision making. (2) People appear to be attracted to computationally hard recreational puzzles and games. Examples of hard puzzles include Sudoku, Minesweeper, and the 15-Puzzle. (3) A number of research articles in the area of human problem solving suggest that humans are capable of solving hard computational problems, like the Euclidean Traveling Salesperson Problem, quickly and near-optimally. This article gives a brief introduction to some theories and foundations of complexity theory and motivates the use of computationally hard problems in human problem solving with a short survey of known results of human performance, a review of some computationally hard games and puzzles, and the connection between complexity theory and models of cognitive functions. We aim to illuminate the role that computer science, in particular complexity theory, can play in the study of human problem solving. Theoretical computer science can provide a wealth of interesting problems for human study, but it can also help to provide deep insight into these problems.
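The "quickly and near-optimally" claim for the Euclidean TSP is often judged against simple construction heuristics. Here is a minimal sketch of one such baseline, the nearest-neighbor heuristic, assuming cities are given as 2D coordinate tuples; the representation and function names are our illustration, not anything from the article itself:

```python
import math

def nearest_neighbor_tour(points):
    """Greedy Euclidean TSP baseline: from the current city,
    always move to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start arbitrarily at the first point
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returns to the start)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]
tour = nearest_neighbor_tour(cities)
print(tour, round(tour_length(cities, tour), 2))
```

Human-performance studies of the kind the abstract surveys typically compare subjects' tours against baselines like this one and against optimal tours on small instances.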
- In Search of the Densest Subgraph
András Faragó and Zohre R. Mojaveri (Department of Computer Science, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas). Abstract: In this survey paper, we review various concepts of graph density, as well as associated theorems and algorithms. Our goal is motivated by the fact that, in many applications, it is a key algorithmic task to extract a densest subgraph from an input graph, according to some appropriate definition of graph density. While this problem has been the subject of active research for over half a century, with many proposed variants and solutions, new results still continuously emerge in the literature. This shows both the importance and the richness of the subject. We also identify some interesting open problems in the field. Keywords: dense subgraph; algorithms; graph density; clusters in graphs; big data. 1. Introduction and Motivation: In the era of big data, graph-based representations have become highly popular for modeling various real-world systems, as well as diverse types of knowledge and data, due to the simplicity and visually appealing nature of graph models. The specific task of extracting a dense subgraph from a large graph has received a lot of attention, since it has direct applications in many fields. While the era of big data is still relatively young, the study and application of dense subgraphs started much earlier.
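For the classical average-degree notion of density (|E|/|V| of the chosen subgraph), one well-known algorithm is Charikar's greedy peeling, which repeatedly deletes a minimum-degree vertex and remembers the best intermediate vertex set; it gives a 2-approximation for this objective. A minimal sketch, assuming the graph is an adjacency dict of sets (a simple linear-scan version rather than the heap-based implementation):

```python
def densest_subgraph_peel(adj):
    """Greedy peeling: repeatedly remove a minimum-degree vertex,
    tracking the vertex set with the best average degree |E|/|V|.
    2-approximation for average-degree density (Charikar, 2000)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    best_density, best_set = 0.0, set(adj)
    while adj:
        density = edges / len(adj)
        if density > best_density:
            best_density, best_set = density, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))  # min-degree vertex
        edges -= len(adj[v])                     # its edges disappear
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best_set, best_density

# Example: a 4-clique (density 6/4 = 1.5) with a pendant vertex 5
# attached (whole graph: 7/5 = 1.4); peeling recovers the clique.
g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
print(densest_subgraph_peel(g))  # ({1, 2, 3, 4}, 1.5)
```

Other density notions the survey discusses (e.g., subject to size constraints) change the complexity picture substantially, so this sketch covers only the unconstrained average-degree case.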
- Computational Complexity Theory
Lecture 1: Intro; Turing machines; Class P. Department of Computer Science, Indian Institute of Science. About the course: Computational complexity attempts to classify computational problems based on the amount of resources required by algorithms to solve them. Computational problems come in various flavors (a decision-problem sketch follows this list):
a. Decision problem. Example: Is vertex t reachable from vertex s in graph G? (output is YES/NO)
b. Search problem. Example: Find a satisfying assignment of a Boolean formula, if it exists.
c. Counting problem. Example: Find the number of cycles in a graph.
d. Optimization problem. Example: Find a minimum size vertex cover in a graph.
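The s-t reachability question used above as the decision-problem example is solvable in linear time with breadth-first search. A minimal sketch (the adjacency-list encoding is our choice, not the lecture's):

```python
from collections import deque

def reachable(graph, s, t):
    """Decision problem: is vertex t reachable from vertex s in G?
    Answers YES/NO via breadth-first search in O(|V| + |E|) time."""
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True   # YES
        for u in graph.get(v, []):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return False          # NO

g = {'s': ['a', 'b'], 'a': ['t'], 'b': [], 't': []}
print(reachable(g, 's', 't'))  # True
```

Reachability is thus in P; the other flavors (search, counting, optimization) in general require different machinery and lead to different complexity classes.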
- The Equivalence of Sampling and Searching
Scott Aaronson. Abstract: In a sampling problem, we are given an input x ∈ {0,1}^n, and asked to sample approximately from a probability distribution D_x over poly(n)-bit strings. In a search problem, we are given an input x ∈ {0,1}^n, and asked to find a member of a nonempty set A_x with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity to show that sampling and search problems are "essentially equivalent." More precisely, for any sampling problem S, there exists a search problem R_S such that, if C is any "reasonable" complexity class, then R_S is in the search version of C if and only if S is in the sampling version. What makes this nontrivial is that the same R_S works for every C. As an application, we prove the surprising result that SampP = SampBQP if and only if FBPP = FBQP. In other words, classical computers can efficiently sample the output distribution of every quantum circuit, if and only if they can efficiently solve every search problem that quantum computers can solve. 1. Introduction: The Extended Church-Turing Thesis (ECT) says that all computational problems that are feasibly solvable in the physical world are feasibly solvable by a probabilistic Turing machine. By now, there have been almost two decades of discussion about this thesis, and the challenge that quantum computing poses to it. This paper is about a related question that has attracted surprisingly little interest: namely, what exactly should we understand the ECT to state? When we say "all computational problems," do we mean decision problems? promise problems? search problems? sampling problems? possibly other types of problems? Could the ECT hold for some of these types of problems but fail for others? Our main result is an equivalence between sampling and search problems: the ECT holds for one type of problem if and only if it holds for the other.
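The two problem types the abstract contrasts can be made concrete with a toy pair. A minimal sketch of the two interfaces, where the particular distribution D_x and set A_x are our own illustration and have nothing to do with Aaronson's Kolmogorov-complexity construction of R_S:

```python
import random

def sampling_problem(x: str) -> str:
    """Sampling problem: on input x in {0,1}^n, output a string drawn
    from a distribution D_x. Toy D_x: uniform over n-bit strings that
    agree with x in the first bit."""
    n = len(x)
    return x[0] + ''.join(random.choice('01') for _ in range(n - 1))

def search_problem(x: str) -> str:
    """Search problem: on input x in {0,1}^n, return some member of a
    nonempty set A_x. Toy A_x: n-bit strings whose parity equals the
    parity of x; we return one explicit witness."""
    parity = x.count('1') % 2
    return '0' * (len(x) - 1) + str(parity)

x = '1011'
print(sampling_problem(x), search_problem(x))
```

A sampler must get an entire distribution approximately right, while a searcher only has to hit one valid output with high probability; the paper's result is that, at the level of complexity classes, these two requirements have the same power.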