Results 1–10 of 18
Average-case computational complexity theory
 Complexity Theory Retrospective II, 1997
Abstract

Cited by 31 (2 self)
Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are "easy on average", but some may not be. How is one to know whether an NP-complete problem is "difficult on average"? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new subarea of complexity theory.
Complete distributional problems, hard languages, and resource-bounded measure
 Theoretical Computer Science, 2000
Abstract

Cited by 6 (2 self)
We say that a distribution µ is reasonable if there exists a constant s ≥ 0 such that µ({x : |x| ≥ n}) = Ω(1/n^s). We prove the following result, which suggests that all DistNP-complete problems have reasonable distributions: if NP contains a DTIME(2^n)-bi-immune set, then every DistNP-complete set has a reasonable distribution. It follows from work of Mayordomo [May94] that the consequent holds if the p-measure of NP is not zero. Cai and Selman [CS96] defined a modification and extension of Levin's notion of average polynomial time to arbitrary time bounds and proved that if L is P-bi-immune, then L is distributionally hard, meaning that for every polynomial-time computable distribution µ, the distributional problem (L, µ) is not polynomial on the µ-average. We prove the following results, which suggest that distributional hardness is closely related to more traditional notions of hardness. 1. If NP contains a distributionally hard set, then NP contains a P-immune set. 2. There exists a language L that is distributionally hard but not P-bi-immune if and only if P contains a set that is immune to all P-printable sets. The following corollaries follow readily: 1. If the p-measure of NP is not zero, then there exists a language L that is distributionally hard but not P-bi-immune. 2. If the p2-measure of NP is not zero, then there exists a language L in NP that is distributionally hard but not P-bi-immune.
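The tail condition in this definition can be checked concretely. As a small sketch (the distribution below is a common normalization of Levin's uniform distribution, chosen here as an assumption rather than taken from the paper), assign each length m a total mass of 1/(m(m+1)); the tail mass then telescopes to essentially 1/n, witnessing reasonableness with s = 1:

```python
from fractions import Fraction

def tail_mass(n: int, max_len: int = 1000) -> Fraction:
    """Approximate mu({x : |x| >= n}) when length m carries total mass
    1/(m(m+1)), split uniformly over the 2^m strings of that length."""
    return sum((Fraction(1, m * (m + 1)) for m in range(n, max_len + 1)),
               Fraction(0))

# The sum telescopes: sum_{m=n}^{M} 1/(m(m+1)) = 1/n - 1/(M+1),
# so the tail mass approaches 1/n, i.e. Omega(1/n^s) with s = 1.
print(tail_mass(5))  # 1/5 - 1/1001 = 996/5005
```

The telescoping makes the bound exact rather than asymptotic for this particular distribution; for other polynomial-time computable distributions the tail would have to be estimated case by case.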
Public-key cryptography and invariant theory, arXiv:cs.CR/0207080
Abstract

Cited by 5 (5 self)
Public-key cryptosystems are suggested based on invariants of groups. We also give an overview of known cryptosystems which involve groups.
Reductions Do Not Preserve Fast Convergence Rates in Average Time
 Algorithmica, 1996
Abstract

Cited by 3 (1 self)
Cai and Selman [CS96] proposed a general definition of average computation time that, when applied to polynomials, results in a modification of Levin's [Lev86] notion of average-polynomial-time. The effect of the modification is to control the rate of convergence of the expressions that define average computation time. With this modification, they proved a hierarchy theorem for average-time complexity that is as tight as the Hartmanis-Stearns [HS65] hierarchy theorem for worst-case deterministic time. They also proved that under a fairly reasonable condition on distributions, called condition W, a distributional problem is solvable in average-polynomial-time under the modification exactly when it is solvable in average-polynomial-time under Levin's definition. Various notions of reductions, as defined by Levin [Lev86] and others, play a central role in the study of average-case complexity. However, the class of distributional problems that are solvable in average-polynomial-time under the modification is not closed under the standard reductions. In particular, we prove that there is a distributional problem that is not solvable in average-polynomial-time under the modification but is reducible, by the identity function, t...
New Combinatorial Complete One-Way Functions
 in Proc. 25th Sympos. on Theoretical Aspects of Computer Science, 2008
Abstract

Cited by 3 (3 self)
In 2003, Leonid A. Levin presented the idea of a combinatorial complete one-way function and a sketch of a proof that Tiling represents such a function. In this paper, we present two new one-way functions based on semi-Thue string rewriting systems and a version of the Post Correspondence Problem, and prove their completeness. In addition, we present an alternative proof of Levin's result. We also discuss the properties a combinatorial problem should have in order to give rise to a complete one-way function.
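To make the underlying combinatorial object concrete: a semi-Thue system rewrites a string by replacing one occurrence of a rule's left-hand side with its right-hand side. The sketch below is an illustration only; the rule set and example strings are hypothetical and are not taken from the paper.

```python
def rewrite_once(s: str, rules: list[tuple[str, str]]) -> list[str]:
    """Return every string reachable from s by one application of one rule:
    replace a single occurrence of a left-hand side lhs with its rhs."""
    successors = []
    for lhs, rhs in rules:
        start = 0
        # Scan for every occurrence of lhs, including overlapping ones.
        while (i := s.find(lhs, start)) != -1:
            successors.append(s[:i] + rhs + s[i + len(lhs):])
            start = i + 1
    return successors

# Example: the single rule ab -> ba moves a's to the right, one swap per step.
rules = [("ab", "ba")]
print(rewrite_once("aab", rules))   # ['aba']
print(rewrite_once("abab", rules))  # ['baab', 'abba']
```

The hardness relevant to one-way functions comes from the reachability question (is string t derivable from string s?), which is undecidable in general for semi-Thue systems; a single rewriting step, as shown here, is trivial to compute.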
Asymptotic density and computably enumerable sets (tentative title), in preparation
Abstract

Cited by 3 (2 self)
We study connections between classical asymptotic density, computability, and computable enumerability. In an earlier paper, the second two authors proved that there is a computably enumerable set A of density 1 with no computable subset of density 1. In the current paper, we extend this result in three different ways: (i) the degrees of such sets A are precisely the nonlow c.e. degrees; (ii) there is a c.e. set A of density 1 with no computable subset of nonzero density; (iii) there is a c.e. set A of density 1 such that every subset of A of density 1 is of high degree. We also study the extent to which c.e. sets A can be approximated by their computable subsets B in the sense that A \ B has small density. There is a very close connection between the computational complexity of a set and the arithmetical complexity of its density, and we characterize the lower densities, upper densities, and densities of both computable and computably enumerable sets. We also study the notion of "computable at density r", where r is a real in the unit interval. Finally, we study connections between density and classical smallness notions such as immunity, hyperimmunity, and cohesiveness.
Average-Case Complexity Theory and Polynomial-Time Reductions
, 2001
Abstract

Cited by 2 (0 self)
This thesis studies average-case complexity theory and polynomial-time reducibilities. The issues in average-case complexity arise primarily from Cai and Selman's extension of Levin's definition of average polynomial time. We study polynomial-time reductions between distributional problems. Under strong but reasonable hypotheses, we separate ordinary NP-completeness notions.
GENERIC COMPUTABILITY, TURING DEGREES, AND ASYMPTOTIC DENSITY
Abstract

Cited by 2 (1 self)
Generic decidability has been extensively studied in group theory, and we now study it in the context of classical computability theory. A set A of natural numbers is called generically computable if there is a partial computable function which agrees with the characteristic function of A on its domain D, and furthermore D has density 1, i.e. lim_{n→∞} |{k < n : k ∈ D}|/n = 1. A set A is called coarsely computable if there is a computable set R such that the symmetric difference of A and R has density 0. We prove that there is a c.e. set which is generically computable but not coarsely computable, and vice versa. We show that every nonzero Turing degree contains a set which is not generically computable and also a set which is not coarsely computable. We prove that there is a c.e. set of density 1 which has no computable subset of density 1. Finally, we define and study generic reducibility.
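The limit in this definition is just the fraction of numbers below n that land in the domain D. A minimal sketch of that quantity (the example set below is a hypothetical illustration, not one from the paper):

```python
def density_below(D, n: int) -> float:
    """Compute |{k < n : k in D}| / n for a membership predicate D."""
    return sum(1 for k in range(n) if D(k)) / n

# Powers of two have density 0 (only log2(n) of them below n), so their
# complement is a natural example of a set of density 1.
not_power_of_two = lambda k: k <= 0 or (k & (k - 1)) != 0
print(density_below(not_power_of_two, 1024))  # 1014/1024 ~ 0.99
```

Of course, no finite prefix certifies that the limit is 1; the computation above only illustrates the quantity whose limit the definition takes.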
The Complexity of Generating Test Instances
 in Proc. STACS'97, Lecture Notes in Computer Science, 1997
Abstract

Cited by 1 (0 self)
Recently, Watanabe proposed a new framework for testing the correctness and average-case behavior of algorithms that purport to solve a given NP search problem efficiently on average. The idea is to randomly generate certified instances in a way that resembles the underlying distribution µ. We discuss this approach and show that test instances can be generated for every NP search problem with nonadaptive queries to an NP oracle. Further, we introduce Las Vegas as well as Monte Carlo types of test instance generators and show that these generators can be used to find out whether an algorithm is correct and efficient on average under µ. In fact, it is not hard to construct Monte Carlo generators for all RP search problems as well as Las Vegas generators for all ZPP search problems. On the other hand, we prove that Monte Carlo generators can only exist for problems in NP ∩ coAM.
On the Complexity of Deadlock Detection in Families of Planar Nets
, 1995
Abstract

Cited by 1 (0 self)
We are interested in some properties of massively parallel computers that we model by finite automata connected together as a 2-dimensional grid. We ask whether it is possible to anticipate the appearance of a deadlock in such nets. Thus, we look for efficient algorithms to predict whether deadlocks can appear in grids of bounded size. From the point of view of worst-case complexity, we prove that this problem is NP-complete, whereas it is quadratic for linear structures. The method we use is a reduction from a tiling problem. We also prove that this problem, associated with a natural probability distribution on its instances, is RNP-complete (Random NP-complete) in the theory proposed by Levin and Gurevich. Very few randomized problems are known to be RNP-complete. Under classical complexity hypotheses, this result proves that there does not exist any algorithm that solves this problem efficiently in the average case. We present other extensions of our results for d...