Results 1 - 9 of 9
New methods for 3-SAT decision and worst-case analysis
 Theoretical Computer Science
, 1999
Abstract

Cited by 66 (12 self)
We prove the worst-case upper bound 1.5045^n for the time complexity of 3-SAT decision, where n is the number of variables in the input formula, introducing new methods for the analysis as well as new algorithmic techniques. We add new 2- and 3-clauses, called "blocked clauses", generalizing the extension rule of Extended Resolution. Our methods for estimating the size of trees lead to a refined measure of formula complexity for 3-clause-sets and can also be applied to arbitrary trees. Keywords: 3-SAT, worst-case upper bounds, analysis of algorithms, Extended Resolution, blocked clauses, generalized autarkness. 1 Introduction. In this paper we study the exponential part of the time complexity of 3-SAT decision and prove the worst-case upper bound 1.5044...^n, for n the number of variables in the input formula, using new algorithmic methods as well as new methods for the analysis. These methods also deepen the already existing approaches in a systematic manner. The following results...
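The "blocked clauses" of this abstract have a simple operational definition: a clause C is blocked with respect to one of its literals l if every resolvent of C on l with a clause containing the complement of l is a tautology. As a rough illustration only (a naive quadratic scan, not the paper's algorithm; clauses are encoded as sets of signed integer literals, an assumed representation):

```python
def is_tautology(clause):
    """A clause (set of signed int literals) is a tautology if it contains both x and -x."""
    return any(-lit in clause for lit in clause)

def is_blocked(clause, literal, formula):
    """Check whether `clause` is blocked w.r.t. `literal` in `formula`:
    every resolvent on `literal` must be a tautology."""
    assert literal in clause
    for other in formula:
        if -literal in other:
            # resolve `clause` and `other` on `literal`
            resolvent = (clause - {literal}) | (other - {-literal})
            if not is_tautology(resolvent):
                return False
    return True

# {1, 2} is blocked for literal 1 here: the only resolvent {2, -2} is a tautology
print(is_blocked({1, 2}, 1, [{1, 2}, {-1, -2}]))
```

Adding or removing a blocked clause preserves satisfiability, which is what makes the technique usable inside a decision algorithm.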
Subexponential algorithms for Unique Games and related problems
 In 51st IEEE FOCS
Abstract

Cited by 30 (4 self)
We give subexponential-time approximation algorithms for the Unique Games and Small-Set Expansion problems. Specifically, for some absolute constant c, we give: 1. An exp(kn^ε)-time algorithm that, given as input a k-alphabet unique game on n variables that has an assignment satisfying a 1 − ε^c fraction of its constraints, outputs an assignment satisfying a 1 − ε fraction of the constraints. 2. An exp(n^ε/δ)-time algorithm that, given as input an n-vertex regular graph that has a set S of δn vertices with edge expansion at most ε^c, outputs a set S′ of at most δn vertices with edge expansion at most ε. We also obtain a subexponential algorithm with improved approximation for the Multi-Cut problem, as well as subexponential algorithms with improved approximations to Max-Cut, Sparsest-Cut and Vertex Cover on some interesting subclasses of instances. Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for unique games. While our results stop short of refuting the UGC, they do suggest that Unique Games is significantly easier than NP-hard problems such as 3-SAT, 3-LIN, Label Cover and more, which are believed not to have a subexponential algorithm achieving a nontrivial approximation ratio. The main component in our algorithms is a new result on graph decomposition that may have other applications. Namely, we show that for every δ > 0 and every regular n-vertex graph G, by changing at most a δ fraction of G's edges one can break G into disjoint parts so that the induced graph on each part has at most n^ε eigenvalues larger than 1 − η (where ε, η depend polynomially on δ). Our results are based on combining this decomposition with previous algorithms for unique games on graphs with few large eigenvalues (Kolla and Tulsiani 2007, Kolla 2010).
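The decomposition guarantee above, "at most n^ε eigenvalues larger than 1 − η", refers to the spectrum of the normalized adjacency matrix of each part. A minimal sketch of checking that condition for a given d-regular part, using a dense eigensolver (the function name and the dense-matrix encoding are illustrative assumptions, not the paper's machinery):

```python
import numpy as np

def count_large_eigenvalues(adjacency, eta):
    """Count eigenvalues of the normalized adjacency A/d of a d-regular
    graph that exceed 1 - eta; this is the quantity the decomposition
    bounds by n^eps on each part."""
    A = np.asarray(adjacency, dtype=float)
    d = A.sum(axis=1)[0]                 # regular graph: every row sums to d
    eigvals = np.linalg.eigvalsh(A / d)  # symmetric matrix -> real spectrum
    return int(np.sum(eigvals > 1 - eta))

# 4-cycle (2-regular): normalized spectrum is {1, 0, 0, -1},
# so only the trivial eigenvalue 1 exceeds 0.5
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(count_large_eigenvalues(C4, 0.5))
```

The algorithms cited at the end of the abstract (Kolla and Tulsiani 2007, Kolla 2010) exploit exactly this "few large eigenvalues" structure on each part.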
Easy and Hard Constraint Ranking in Optimality Theory: Algorithms And Complexity
, 2000
Abstract

Cited by 16 (0 self)
We consider the problem of ranking a set of OT constraints in a manner consistent with data. (1) We speed up Tesar and Smolensky's RCD algorithm to be linear in the number of constraints. This finds a ranking such that each attested form x_i beats or ties a particular competitor y_i. (2) We also generalize RCD so that each x_i beats or ties all possible competitors.
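Tesar and Smolensky's RCD (Recursive Constraint Demotion) builds the ranking stratum by stratum: any constraint that prefers no loser in the remaining data can safely be ranked on top, and pairs whose winner is preferred by a ranked constraint are discarded. A sketch under an assumed encoding (each mark-data pair as the pair of constraint sets preferring the winner x_i and the competitor y_i; this is a simplification, not the paper's data format, and without the paper's linear-time speedup):

```python
def rcd(constraints, pairs):
    """Recursive Constraint Demotion, sketched.
    `pairs` is a list of (winner_preferrers, loser_preferrers) sets of
    constraint names. Returns a list of strata: a stratified ranking
    consistent with the data, refinable to any compatible total order."""
    constraints = set(constraints)
    pairs = list(pairs)
    strata = []
    while constraints:
        # constraints that favor no remaining loser can be ranked highest
        loser_preferrers = set().union(*(l for _, l in pairs)) if pairs else set()
        stratum = constraints - loser_preferrers
        if not stratum:
            raise ValueError("data is inconsistent: no ranking exists")
        strata.append(stratum)
        # pairs whose winner is preferred by this stratum are accounted for
        pairs = [(w, l) for w, l in pairs if not (w & stratum)]
        constraints -= stratum
    return strata

# one pair: constraint A prefers the winner, B prefers the loser
print(rcd({"A", "B", "C"}, [({"A"}, {"B"})]))  # A and C outrank B
```

Running the naive loop above once per stratum over all pairs is what the paper's result (2), ranking against all possible competitors, generalizes.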
The class of problems that are linearly equivalent to Satisfiability or a uniform method for proving NP-completeness
, 1995
Abstract

Cited by 11 (0 self)
We widely extend the class of problems that are linearly equivalent to Satisfiability. We show that many natural combinatorial problems are linear-time equivalent to Satisfiability (SAT-equivalent). We prove that the following problems are SAT-equivalent: 3-Colorability, Path with Forbidden ...
Algorithms for SAT/TAUT decision based on various measures
 Information and Computation
, 1999
Abstract

Cited by 11 (8 self)
We investigate algorithms deciding propositional tautologies for DNF and co-NP-complete subclasses given by restrictions on the number of occurrences of literals. In particular, we study the polynomial use of resolution for reductions, in combination with a new combinatorial principle called the "Generalized Sign Principle". Upper bounds on time complexity are given with exponential part 2^{α·μ(F)}, where the measure μ(F) for a clause-set F is either the number n(F) of variables, the number ℓ(F) of literal occurrences, or the number k(F) of clauses. α is called a "power coefficient" for the class of formulas under consideration w.r.t. the measure μ. Power coefficients are derived with the help of a method estimating the size of trees, which is also used to find "good" branching rules. Under natural conditions, the power coefficients α, β, γ for n, k, ℓ respectively fulfill α ≥ β ≥ γ. We obtain the following power coefficients: 0.1112 for DNF w.r.t. ℓ, and 0.3334 for DNF w.r.t. k. These result...
On Quasilinear Time Complexity Theory
, 1994
Abstract

Cited by 3 (0 self)
This paper furthers the study of quasilinear-time complexity initiated by Schnorr and by Gurevich and Shelah. We show that the fundamental properties of the polynomial-time hierarchy carry over to the quasilinear-time hierarchy.
Fast and Scalable Parallel Algorithms for Knapsack-Like Problems
 Journal of Parallel and Distributed Computing
, 1996
Abstract

Cited by 3 (0 self)
We present two new algorithms for searching in sorted X+Y+R+S, one based on heaps and the other on sampling. Each of the algorithms runs in time O(n^2 log n) (n being the size of the sorted arrays X, Y, R and S). Hence in each case, by constructing arrays of size n = O(2^{s/4}), we obtain a new algorithm for solving certain NP-complete problems such as Knapsack on s data items in time equal (up to a constant factor) to that of the best algorithm currently known. Each of the algorithms can be efficiently implemented in parallel, and so solves large instances of these NP-complete problems fast on coarse-grained distributed-memory parallel computers. The parallel version of the heap-based algorithm is communication-efficient and exhibits optimal speedup for a number of processors less than n, using O(n) space on each one; the sampling-based algorithm exhibits optimal speedup for any number of processors up to n, using O(n) space in total, provided that the architecture is capable of...
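Searching sorted X+Y+R+S is the heart of the classic four-list approach to Knapsack-type problems: split the s items into four quarters, enumerate the 2^{s/4} subset sums of each quarter, and look for a combination x+y+r+s hitting the target. The sketch below, assuming a subset-sum formulation, trades the paper's O(n)-space heap technique for simple sorted arrays of all pairwise sums (O(n^2) space, same O(n^2 log n) time); it is an illustration of the search problem, not either of the paper's algorithms:

```python
def subset_sums(items):
    """All 2^len(items) subset sums of a small list."""
    sums = [0]
    for v in items:
        sums += [s + v for s in sums]
    return sums

def four_list_subset_sum(items, target):
    """Decide whether some subset of `items` sums to `target` by a
    meet-in-the-middle search over the four quarter sum lists."""
    q = len(items) // 4
    X, Y, R, S = (subset_sums(items[0:q]), subset_sums(items[q:2*q]),
                  subset_sums(items[2*q:3*q]), subset_sums(items[3*q:]))
    left = sorted(x + y for x in X for y in Y)    # sorted X+Y
    right = sorted(r + s for r in R for s in S)   # sorted R+S
    # two pointers over the sorted halves: smallest left vs largest right
    i, j = 0, len(right) - 1
    while i < len(left) and j >= 0:
        total = left[i] + right[j]
        if total == target:
            return True
        if total < target:
            i += 1
        else:
            j -= 1
    return False

print(four_list_subset_sum([3, 34, 4, 12, 5, 2], 9))  # 4 + 5 = 9
```

The heap-based and sampling-based algorithms of the paper avoid materializing the sorted left/right arrays, which is what brings the space down to O(n) per processor.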
On superlinear lower bounds in complexity theory
 In Proc. 10th Annual IEEE Conference on Structure in Complexity Theory
, 1995
Abstract

Cited by 1 (1 self)
This paper first surveys the near-total lack of superlinear lower bounds in complexity theory for "natural" computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the model allows nonlocal communication with memory at unit cost. We study a model that imposes a "fair cost" for nonlocal communication, and obtain modest superlinear lower bounds for some problems via a Kolmogorov-complexity argument. Then we look to the larger picture of what it will take to prove really striking lower bounds, and pull from our and others' work a concept of information vicinity that may offer new tools and modes of analysis to a young field that rather lacks them.
On the Complexity of Circuit Satisfiability (Extended Abstract)
, 2009
Abstract
In this paper, we are concerned with the exponential complexity of the Circuit Satisfiability (CircuitSat) problem and, more generally, with the exponential complexity of NP-complete problems. Over the past 15 years or so, researchers have obtained a number of exponential-time algorithms with improved running times for exactly solving a variety of NP-complete problems. The improvements are typically in the form of better exponents compared to exhaustive search. Our goal is to develop techniques to prove specific lower bounds on the exponents under plausible complexity assumptions. We consider natural, though restricted, algorithmic paradigms and prove lower bounds on the exponent of the success probability. Our approach has the advantage of clarifying the relative power of various algorithmic paradigms. Our main technique is a success-probability amplification technique, called the Exponential Amplification Lemma, which shows that for any f(n, m)-size-bounded probabilistic circuit family A that decides CircuitSat with success probability at least 2^{−αn} for α < 1 on inputs which are circuits of size m with n variables, there is another probabilistic circuit family B that decides CircuitSat with size roughly f(αn, f(m, n)) and success probability about 2^{−α²n}. In ...