Results 1–10 of 34
Average Case Completeness
JOURNAL OF COMPUTER AND SYSTEM SCIENCES, 1991
Cited by 71 (2 self)
Abstract:
We explain and advance Levin's theory of average case completeness. In particular, we exhibit examples of problems complete in the average case and prove a limitation on the power of deterministic reductions.
Average-case computational complexity theory
Complexity Theory Retrospective II, 1997
Cited by 31 (2 self)
Abstract:
Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are "easy on average", but some may not be. How is one to know whether an NP-complete problem is "difficult on average"? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new subarea of complexity theory.
One-Way Functions, Hard on Average Problems, and Statistical Zero-Knowledge Proofs (Extended Abstract)
IN PROCEEDINGS OF THE 6TH ANNUAL STRUCTURE IN COMPLEXITY THEORY CONFERENCE, 1991
Cited by 27 (7 self)
Abstract:
In this paper, we study connections among one-way functions, hard-on-average problems, and statistical zero-knowledge proofs. In particular, we show how these three notions are related and how the third notion can be better characterized, assuming the first one.
Hiding Cliques for Cryptographic Security
Des. Codes Cryptogr., 1998
Cited by 26 (0 self)
Abstract:
We demonstrate how a well-studied combinatorial optimization problem may be introduced as a new cryptographic function. The problem in question is that of finding a "large" clique in a random graph. While the largest clique in a random graph is very likely to be of size about 2 log₂ n, it is widely conjectured that no polynomial-time algorithm exists which finds a clique of size (1 + ε) log₂ n with significant probability, for any constant ε > 0. We present a very simple method of exploiting this conjecture by "hiding" large cliques in random graphs. In particular, we show that if the conjecture is true, then when a large clique (of size, say, (1 + 2ε) log₂ n) is randomly inserted ("hidden") in a random graph, finding a clique of size (1 + ε) log₂ n remains hard. Our result suggests several cryptographic applications, such as a simple one-way function.

1 Introduction. Many hard graph problems involve finding a subgraph of an input graph G = (V, E) with a certain propert...
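The planting construction described in the abstract is easy to state in code. The sketch below (function and parameter names are our own, not the paper's) samples an Erdős–Rényi G(n, 1/2) graph and hides a k-clique in it:

```python
import random

def hide_clique(n, k, seed=None):
    """Sample a G(n, 1/2) random graph, then plant ("hide") a k-clique
    on k random vertices.  Returns the edge set (as sorted pairs) and
    the secret clique.  Illustrative sketch only."""
    rng = random.Random(seed)
    edges = set()
    for u in range(n):                      # G(n, 1/2): each pair is an
        for v in range(u + 1, n):           # edge with probability 1/2
            if rng.random() < 0.5:
                edges.add((u, v))
    clique = sorted(rng.sample(range(n), k))
    for i in range(k):                      # connect all clique pairs
        for j in range(i + 1, k):
            edges.add((clique[i], clique[j]))
    return edges, clique
```

The one-wayness conjecture is that recovering any clique of size (1 + ε) log₂ n from the output graph is hard, even though planting one is trivial.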
Simple Strategies for Large Zero-Sum Games with Applications to Complexity Theory
STOC 94, 1994
Cited by 23 (2 self)
Abstract:
Von Neumann's Min-Max Theorem guarantees that each player of a zero-sum matrix game has an optimal mixed strategy. We show that each player has a near-optimal mixed strategy that chooses uniformly from a multiset of pure strategies of size logarithmic in the number of pure strategies available to the opponent. Thus, for exponentially large games, for which even representing an optimal mixed strategy can require exponential space, there are near-optimal, linear-size strategies. These strategies are easy to play and serve as small witnesses to the approximate value of the game. Because of the fundamental role of games, we expect this theorem to have many applications in complexity theory and cryptography. We use it to strengthen the connection established by Yao between randomized and distributional complexity and to obtain the following results: (1) Every language has anti-checkers: small hard multisets of inputs certifying that small circuits can't decide the language. (2) Circuits of a given size can generate random instances that are hard for all circuits of linearly smaller size. (3) Given an oracle M for any exponentially large game, the approximate value of the game and near-optimal strategies for it can be computed in Σ₂^P(M). (4) For any NP-complete language L, the problems of (a) computing a hard distribution of instances of L and (b) estimating the circuit complexity of L are both in Σ₂^P.
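The existence result above matches a simple randomized construction: sample a logarithmic number of pure strategies i.i.d. from a mixed strategy and play uniformly on the resulting multiset. A minimal sketch with names of our own choosing (the sampling-plus-Chernoff-bound route is the standard argument; the paper itself proves existence):

```python
import random

def sparse_strategy(p, k, seed=None):
    """Sample a multiset of k pure strategies i.i.d. from the mixed
    strategy p.  Playing uniformly on this multiset approximates p:
    by a Chernoff/union bound, k on the order of
    log(#opponent strategies)/eps^2 suffices for eps-optimality."""
    rng = random.Random(seed)
    return rng.choices(range(len(p)), weights=p, k=k)

def multiset_payoffs(multiset, A):
    """Row player's expected payoff against each opponent column when
    mixing uniformly over the multiset of row indices."""
    k = len(multiset)
    return [sum(A[i][j] for i in multiset) / k for j in range(len(A[0]))]

# Matching pennies: the optimal mixed strategy is (1/2, 1/2), value 0.
A = [[1, -1], [-1, 1]]
multiset = sparse_strategy([0.5, 0.5], k=2000, seed=0)
values = multiset_payoffs(multiset, A)  # both entries close to 0
```

The multiset itself is the small witness: anyone can verify its payoff against every opponent column without seeing the full optimal strategy.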
The tale of oneway functions
Problems of Information Transmission, 2003
Cited by 20 (0 self)
Abstract:
All the king's horses, and all the king's men, Couldn't put Humpty together again.

The existence of one-way functions (owf) is arguably the most important problem in computer theory. The article discusses and refines a number of concepts relevant to this problem. For instance, it gives the first combinatorial complete owf, i.e., a function which is one-way if any function is. There are surprisingly many subtleties in basic definitions. Some of these subtleties are discussed or hinted at in the literature and some are overlooked. Here, a unified approach is attempted.
Matrix Transformation is Complete for the Average Case
SIAM Journal on Computing, 1995
Cited by 20 (1 self)
Abstract:
In the theory of worst-case complexity, NP-completeness is used to establish that, for all practical purposes, the given NP problem is not decidable in polynomial time. In the theory of average-case complexity, average-case completeness is supposed to play the role of NP-completeness. However, the average-case reduction theory is still at an early stage, and only a few average-case complete problems are known. We present the first algebraic problem complete for the average case under a natural probability distribution. The problem is this: given a unimodular matrix X of integers, a set S of linear transformations of such unimodular matrices, and a natural number n, decide whether there is a product of n (not necessarily different) members of S that takes X to the identity matrix.

1 Introduction. The theory of NP-completeness is very useful. It allows one to establish that certain NP problems are NP-complete and therefore, for all practical purposes, not decidable in polynomial time (PTime)....
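The decision problem can be pinned down with a brute-force checker. The sketch below is our own illustrative code (exponential in n, so in no way an efficient algorithm): it tries every length-n product over S.

```python
from itertools import product

def identity(d):
    return tuple(tuple(1 if i == j else 0 for j in range(d)) for i in range(d))

def matmul(A, B):
    d = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(d))
                       for j in range(d)) for i in range(d))

def reaches_identity(X, S, n):
    """Decide, by exhaustive search over |S|^n products, whether some
    product of n (not necessarily distinct) matrices from S takes X
    to the identity matrix."""
    I = identity(len(X))
    for choice in product(S, repeat=n):
        M = X
        for T in choice:
            M = matmul(T, M)   # apply each transformation in turn
        if M == I:
            return True
    return False
```

For example, with R the 90-degree rotation ((0, -1), (1, 0)), applying R three more times to X = R yields R⁴ = I, so reaches_identity(R, [R], 3) holds while reaches_identity(R, [R], 2) does not.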
Notes on Levin's Theory of Average-Case Complexity
Electronic Colloquium on Computational Complexity, 1997
Cited by 18 (2 self)
Abstract:
In 1984, Leonid Levin initiated a theory of average-case complexity. We provide an exposition of the basic definitions suggested by Levin, and discuss some of the considerations underlying these definitions. Keywords: average-case complexity, reductions.

This survey is rooted in the author's (exposition and exploration) work [4], which was partially reproduced in [1]. An early version of this survey appeared as TR97-058 of ECCC. Some of the perspective and conclusions were revised in light of a relatively recent work of Livne [21], but an attempt was made to preserve the spirit of the original survey. The author's current perspective is better reflected in [7, Sec. 10.2] and [8], which advocate somewhat different definitional choices (e.g., focusing on typical rather than average performance of algorithms).
On the time complexity of 2-tag systems and small universal Turing machines
In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006
Cited by 16 (7 self)
Abstract:
We show that 2-tag systems efficiently simulate Turing machines. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improves a forty-year-old result in the area of small universal Turing machines.
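For concreteness, a 2-tag system repeatedly reads the first symbol of the word, appends that symbol's production to the end, and deletes the first two symbols. A minimal simulator sketch (function names and the halting convention of stopping on a missing production are our own):

```python
from collections import deque

def run_2tag(word, rules, max_steps=100_000):
    """Simulate a 2-tag system: while the word has length >= 2 and its
    leading symbol has a production, append that production and delete
    the first two symbols.  Returns the final word."""
    w = deque(word)
    for _ in range(max_steps):
        if len(w) < 2 or w[0] not in rules:
            break
        w.extend(rules[w[0]])   # append the production of the first symbol
        w.popleft()             # delete the first
        w.popleft()             # ... and second symbol
    return "".join(w)

# De Mol's Collatz-like 2-tag system: a^n encodes n, and the system
# carries out Collatz iterations, e.g. "aa" (n = 2) halts at "a" (n = 1).
collatz = {"a": "bc", "b": "a", "c": "aaa"}
```

The efficiency result in the abstract concerns exactly such systems: the simulation overhead of encoding a Turing machine this way is polynomial, not exponential.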
If NP languages are hard on the worst-case then it is easy to find their hard instances
PROCEEDINGS OF THE 20TH ANNUAL CONFERENCE ON COMPUTATIONAL COMPLEXITY (CCC), 2005
Cited by 16 (5 self)
Abstract:
We prove that if NP ⊄ BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for OWFs and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma, which may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulas (of increasing ...