A New Kind of Science
, 2002
Cited by 715 (0 self)
“Somebody says, ‘You know, you people always say that space is continuous. How do you know when you get to a small enough dimension that there really are enough points in between, that it isn’t just a lot of dots separated by little distances? ’ Or they say, ‘You know those quantum mechanical amplitudes you told me about, they’re so complicated and absurd, what makes you think those are right? Maybe they aren’t right. ’ Such remarks are obvious and are perfectly clear to anybody who is working on this problem. It does not do any good to point this out.” —Richard Feynman [1, p.161]
Finding Hard Instances of the Satisfiability Problem: A Survey
, 1997
Cited by 128 (1 self)
Finding sets of hard instances of propositional satisfiability is of interest for understanding the complexity of SAT, and for experimentally evaluating SAT algorithms. In discussing this we consider the performance of the most popular SAT algorithms on random problems, the theory of average case complexity, the threshold phenomenon, known lower bounds for certain classes of algorithms, and the problem of generating hard instances with solutions.
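As a concrete illustration of the fixed-clause-length random model such surveys discuss, here is a minimal sketch of sampling a random 3-SAT instance at a clauses-to-variables ratio near the empirically observed satisfiability threshold (about 4.27 for 3-SAT, where hard instances cluster); the function name and parameters are illustrative, not taken from the paper:

```python
import random

def random_ksat(n_vars, n_clauses, k=3, seed=None):
    """Sample a random k-SAT instance in the fixed-clause-length model:
    each clause picks k distinct variables uniformly at random and
    negates each chosen variable independently with probability 1/2."""
    rng = random.Random(seed)
    formula = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)
        clause = [v if rng.random() < 0.5 else -v for v in chosen]
        formula.append(clause)
    return formula

# Random 3-SAT is empirically hardest near the satisfiability
# threshold of roughly 4.27 clauses per variable.
n = 50
hard_instance = random_ksat(n, int(4.27 * n), seed=0)
```

Clauses are encoded DIMACS-style, as lists of signed integer literals.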
Relations Between Average Case Complexity and Approximation Complexity (Extended Abstract)
 In Proceedings of the 34th Annual ACM Symposium on Theory of Computing
, 2002
Cited by 123 (9 self)
We investigate relations between average case complexity and the complexity of approximation. Our preliminary findings indicate that this is a research direction that leads to interesting insights. Under the assumption that refuting 3SAT is hard on average on a natural distribution, we derive hardness of approximation results for min bisection, dense k-subgraph, max bipartite clique and the 2-catalog segmentation problem. No NP-hardness of approximation results are currently known for these problems.
On the Theory of Average Case Complexity
 Journal of Computer and System Sciences
, 1997
Cited by 119 (6 self)
This paper takes the next step in developing the theory of average case complexity initiated by Leonid A. Levin. Previous works [Levin 84, Gurevich 87, Venkatesan and Levin 88] have focused on the existence of complete problems. We widen the scope to other basic questions in computational complexity. Our results include:
- the equivalence of search and decision problems in the context of average case complexity;
- an initial analysis of the structure of distributional NP (i.e., NP problems coupled with "simple distributions") under reductions which preserve average polynomial time;
- a proof that if all of distributional NP is in average polynomial time then nondeterministic exponential time equals deterministic exponential time (i.e., a collapse in the worst-case hierarchy);
- definitions and basic theorems regarding other complexity classes such as average log-space.
An exposition of the basic definitions suggested by Levin and suggestions for some alternative definitions ...
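For reference, the notion of "average polynomial time" these results are stated in is Levin's standard one: a running time t is polynomial on μ-average (for a distribution with density μ′) if some fixed positive power of it, normalized by input length, has finite expectation:

```latex
\exists\, \varepsilon > 0 \;:\; \sum_{x \in \{0,1\}^{*}} \mu'(x)\, \frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty
```

This weighting keeps the class closed under polynomial scaling while still excluding running times that are superpolynomial on a non-negligible fraction of inputs.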
On the sphere-decoding algorithm I. Expected complexity
 IEEE Trans. Sig. Proc
, 2005
Cited by 117 (7 self)
The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the coefficient matrix and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the “sphere decoding” algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, both for the infinite and finite lattice.
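To make the problem statement concrete, here is a naive sketch of the integer least-squares search the abstract describes: exhaustive enumeration over a bounded integer box, keeping the minimum-residual point. A real sphere decoder instead prunes the search tree using a radius and a QR factorization; the function name and toy data below are illustrative only.

```python
import itertools

def integer_least_squares(H, y, box):
    """Naive search for the integer vector z minimizing ||y - H z||^2
    over a bounded box of integer candidates. (The Fincke-Pohst sphere
    decoder solves the same problem far more efficiently on average.)"""
    m = len(H[0])
    best, best_r2 = None, float("inf")
    for z in itertools.product(box, repeat=m):
        r2 = sum((yi - sum(hij * zj for hij, zj in zip(row, z))) ** 2
                 for yi, row in zip(y, H))
        if r2 < best_r2:
            best, best_r2 = z, r2
    return best, best_r2

# 2x2 toy example: y is the lattice point H @ (1, -2) = (1.0, -2.7)
# perturbed by small additive noise, as in the communications setting.
H = [[2.0, 0.5], [0.3, 1.5]]
y = [1.1, -2.6]
sol, r2 = integer_least_squares(H, y, range(-3, 4))
```

On this toy instance the search recovers the transmitted point (1, -2) with squared residual 0.02.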
A personal view of averagecase complexity
 In 10th Annual IEEE Conference on Structure in Complexity Theory, IEEE Computer Society Press, Washington DC
, 1995
Cited by 87 (0 self)
The structural theory of average-case complexity, introduced by Levin, gives a formal setting for discussing the types of inputs for which a problem is difficult. This is vital both to understanding when a seemingly difficult (e.g. NP-complete) problem is actually easy on almost all instances, and to determining which problems might be suitable for applications requiring hard problems, such as cryptography. This paper attempts to summarize the state of knowledge in this area, including some "folklore" results that have not explicitly appeared in print. We also try to standardize and unify definitions. Finally, we indicate what we feel are interesting research directions. We hope that this paper will motivate more research in this area and provide an introduction to the area for people new to it.
Average Case Completeness
 Journal of Computer and System Sciences
, 1991
Cited by 77 (2 self)
We explain and advance Levin's theory of average case completeness. In particular, we exhibit examples of problems complete in the average case and prove a limitation on the power of deterministic reductions.
Parameterized Complexity: A Framework for Systematically Confronting Computational Intractability
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1997
Cited by 77 (16 self)
In this paper we give a programmatic overview of parameterized computational complexity in the broad context of the problem of coping with computational intractability. We give some examples of how fixed-parameter tractability techniques can deliver practical algorithms in two different ways: (1) by providing useful exact algorithms for small parameter ranges, and (2) by providing guidance in the design of heuristic algorithms. In particular, we describe an improved FPT kernelization algorithm for Vertex Cover, a practical FPT algorithm for the Maximum Agreement Subtree (MAST) problem parameterized by the number of species to be deleted, and new general heuristics for these problems based on FPT techniques. In the course of making this overview, we also investigate some structural and hardness issues. We prove that an important naturally parameterized problem in artificial intelligence, STRIPS Planning (where the parameter is the size of the plan), is complete for W[1]. As a corollary, this implies that k-Step Reachability for Petri Nets is complete for W[1]. We describe how the concept of treewidth can be applied to STRIPS Planning and other problems of logic to obtain FPT results. We describe a surprising structural result concerning the top end of the parameterized complexity hierarchy: the naturally parameterized Graph k-Coloring problem cannot be resolved with respect to XP either by showing membership in XP, or by showing hardness for XP, without settling the P = NP question one way or the other.
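The flavor of FPT kernelization for Vertex Cover can be illustrated with the classic Buss kernel, a simple predecessor of the improved kernelization the paper describes; the sketch below is illustrative and is not the paper's algorithm:

```python
def buss_kernel(edges, k):
    """Buss kernelization for k-Vertex Cover.
    Rule 1: a vertex of degree > (remaining budget) must be in any
            cover of size k, so force it in and delete its edges.
    Rule 2: afterwards every vertex has degree <= budget, so a cover
            of size `budget` covers at most budget^2 edges; if more
            remain, no size-k cover exists.
    Returns (reduced_edges, forced_vertices, remaining_budget),
    or None if the instance is infeasible."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        budget = k - len(forced)
        if budget < 0:
            return None
        for v, d in deg.items():
            if d > budget:
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    budget = k - len(forced)
    if len(edges) > budget * budget:
        return None  # kernel too large: no vertex cover of size k
    return edges, forced, budget

# Star K_{1,5} with k = 1: the center (degree 5 > 1) is forced into
# the cover, after which no edges remain.
star = [(0, i) for i in range(1, 6)]
reduced, forced, remaining = buss_kernel(star, 1)
```

The kernel that survives the reduction rules has at most budget² edges, i.e. size bounded by a function of k alone, which is exactly what makes brute-force search on the kernel fixed-parameter tractable.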
Randomness vs. Time: Derandomization under a uniform assumption
 Journal of Computer and System Sciences
, 1998
Cited by 76 (11 self)
We prove that if BPP ≠ EXP, then every problem in BPP can be solved deterministically in subexponential time on almost every input (on every samplable ensemble, for infinitely many input sizes). This is the first derandomization result for BPP based on uniform, non-cryptographic hardness assumptions. It implies the following gap in the average-instance complexities of problems in BPP: either these complexities are always subexponential or they contain arbitrarily large exponential functions. We use a construction of a small "pseudorandom" set of strings from a "hard function" in EXP which is identical to that used in the analogous nonuniform results of [21, 3]. However, previous proofs of correctness assume the "hard function" is not in P/poly. They give a nonconstructive argument that a circuit distinguishing the pseudorandom strings from truly random strings implies that a similarly-sized circuit exists computing the "hard function". Our main technical contribution is to show ...