Results 1–10 of 12
Linear degree extractors and the inapproximability of max clique and chromatic number
THEORY OF COMPUTING, 2007
Abstract

Cited by 46 (0 self)
... that for all ε > 0, approximating MAX CLIQUE and CHROMATIC NUMBER to within n^{1−ε} is NP-hard. We further derandomize results of Khot (FOCS ’01) and show that for some γ > 0, no quasi-polynomial time algorithm approximates MAX CLIQUE or CHROMATIC NUMBER to within n/2^{(log n)^{1−γ}}, unless ÑP = P̃. The key to these results is a new construction of dispersers, which are related to randomness extractors. A randomness extractor is an algorithm which extracts randomness from a low-quality random source, using some additional truly random bits. We construct new extractors which require only log_2 n + O(1) additional random bits for sources with constant entropy rate, and have constant error. Our dispersers use an arbitrarily small constant ...
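The seeded extractors this abstract describes are far beyond a short sketch, but the underlying idea of distilling uniform bits from a defective source can be illustrated with the classic (seedless, and much weaker) von Neumann trick; this toy is not the paper's construction, only a minimal example of what "extracting randomness from a low-quality random source" means:

```python
import random

def von_neumann_extract(bits):
    """Classic von Neumann extractor: read i.i.d. biased coin flips in
    pairs; emit the first bit of each unequal pair and discard equal
    pairs.  Both surviving pairs (0,1) and (1,0) occur with probability
    p*(1-p), so the output bits are uniform whatever the bias p is."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased "low-quality" source: 1 with probability only 0.1.
rng = random.Random(0)
raw = [0 if rng.random() < 0.9 else 1 for _ in range(100_000)]
clean = von_neumann_extract(raw)
print(len(clean), sum(clean) / len(clean))  # far fewer bits, but roughly balanced
```

Unlike the seeded extractors of the paper, this needs no additional truly random bits, but it only works for independent flips of a fixed-bias coin, not for general sources of constant entropy rate.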
Complexity of wavelength assignment in optical network optimization
In Proceedings of the 25th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), 2006
Abstract

Cited by 8 (0 self)
Abstract — We study the complexity of a spectrum of design problems for optical networks in order to carry a set of demands. Under wavelength division multiplexing (WDM) technology, demands sharing a common fiber are transported on distinct wavelengths. Multiple fibers may be deployed on a physical link. Our basic goal is to design networks of minimum cost, minimum congestion and maximum throughput. This translates to three variants in the design objectives: 1) MINSUMFIBER: minimizing the total amount of fiber deployed to carry all demands; 2) MINMAXFIBER: minimizing the maximum amount of fiber per link to carry all demands; and 3) MAXTHRUPUT: maximizing the carried demands using a given set of fibers. We also have two variants in the design constraints: 1) CHOOSEROUTE: here we specify both a routing path and a wavelength for each demand; 2) FIXEDROUTE: here we are given demand routes and specify wavelengths only. The FIXEDROUTE variant allows us to study wavelength assignment in isolation. Combining these variants, we have six design problems. In [4], [3] we have shown that general instances of the problems MINSUMFIBER-CHOOSEROUTE and MINMAXFIBER-FIXEDROUTE have no constant-approximation algorithms. In this paper we prove that a similar statement holds for all four other problems. Our main result shows that MINSUMFIBER-FIXEDROUTE cannot be approximated within any constant factor unless NP-hard problems have efficient algorithms. This, together with the hardness of MINMAXFIBER-FIXEDROUTE in [3], shows that the problem of wavelength assignment is inherently hard by itself. We also study the complexity of problems that arise when multiple demands can be time-multiplexed onto a single wavelength (as in TWIN networks) and when wavelength converters can be placed along the path of a demand.
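The six design problems are simply the Cartesian product of the three objectives and the two routing constraints; a two-line sketch makes the grid explicit (the hyphenated combined names are this listing's flattened labels, not the paper's typography):

```python
from itertools import product

objectives = ["MINSUMFIBER", "MINMAXFIBER", "MAXTHRUPUT"]
constraints = ["CHOOSEROUTE", "FIXEDROUTE"]

# The six design problems, e.g. "MINSUMFIBER-FIXEDROUTE" (wavelength
# assignment in isolation under the min-total-fiber objective).
problems = [f"{o}-{c}" for o, c in product(objectives, constraints)]
print(problems)
```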
Hardness of Routing with Congestion in Directed Graphs
In Proc. of STOC, 2007
Abstract

Cited by 8 (1 self)
Given as input a directed graph on N vertices and a set of source-destination pairs, we study the problem of routing the maximum possible number of source-destination pairs on paths, such that at most c(N) paths go through any edge. We show that the problem is hard to approximate within an N^{Ω(1/c(N))} factor even when we compare to the optimal solution that routes pairs on edge-disjoint paths, assuming NP doesn’t have N^{O(log log N)}-time randomized algorithms. Here the congestion c(N) can be any function in the range 1 ≤ c(N) ≤ α log N / log log N for some absolute constant α > 0. The hardness result is in the right ballpark since a factor N^{O(1/c(N))} approximation algorithm is known for this problem, via rounding a natural multicommodity-flow relaxation. We also give a simple integrality gap construction that shows that the multicommodity-flow relaxation has an integrality gap of N^{Ω(1/c)} for c ranging from 1 to Θ(log N / log log N). A solution to the routing problem involves selecting which pairs are to be routed and what paths to assign to each routed pair. Two natural restrictions can be placed on input instances to eliminate one of these aspects of the problem complexity. The first restriction is to consider instances with perfect completeness; an optimal solution is able to route all pairs with congestion 1 in such instances. The second restriction to consider is the unique paths property, where each source-destination pair has a unique path connecting them.
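To get a feel for why the N^{Ω(1/c(N))} lower bound and the N^{O(1/c)} upper bound are "in the right ballpark", one can evaluate the common shape N^{1/c} for sample values (the hidden constants in the exponents are taken as 1 here purely for illustration):

```python
import math

def approx_factor_shape(N, c, const=1.0):
    """N ** (const / c): the common shape of both the hardness bound
    N^{Omega(1/c)} and the approximation guarantee N^{O(1/c)}.
    The constant in the exponent is an illustrative assumption."""
    return N ** (const / c)

N = 10**6
for c in (1, 2, 4, 8):
    print(c, approx_factor_shape(N, c))  # factor shrinks as allowed congestion grows
```

Even modest congestion buys a polynomially smaller factor, which is why the permitted range of c(N) up to α log N / log log N matters.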
Inapproximability of edgedisjoint paths and low congestion routing on undirected graphs
Combinatorica, 2010
Abstract

Cited by 6 (5 self)
In the undirected Edge-Disjoint Paths problem with Congestion (EDPwC), we are given an undirected graph with V nodes, a set of terminal pairs and an integer c. The objective is to route as many terminal pairs as possible, subject to the constraint that at most c demands can be routed through any edge in the graph. When c = 1, the problem is simply referred to as the Edge-Disjoint Paths (EDP) problem. In this paper, we study the hardness of EDPwC in undirected graphs. Our main result is that for every ε > 0 there exists an α > 0 such that for 1 ≤ c ≤ α log log V / log log log V, it is hard to distinguish between instances where we can route all terminal pairs on edge-disjoint paths, and instances where we can route at most a 1/(log V)^{(1−ε)/(c+2)} fraction of the terminal pairs, even if we allow congestion c. This implies a (log V)^{(1−ε)/(c+2)} hardness of approximation ...
Sound 3-query PCPPs are long
2008
Abstract

Cited by 6 (2 self)
We initiate the study of the tradeoff between the length of a probabilistically checkable proof of proximity (PCPP) and the maximal soundness that can be guaranteed by a 3-query verifier with oracle access to the proof. Our main observation is that a verifier limited to querying a short proof cannot obtain the same soundness as that obtained by a verifier querying a long proof. Moreover, we quantify the soundness deficiency as a function of the proof length and show that any verifier obtaining “best possible” soundness must query an exponentially long proof. In terms of techniques, we focus on the special class of inspective verifiers that read at most 2 proof bits per invocation. For such verifiers we prove exponential length-soundness tradeoffs that are later used to imply our main results for the case of general (i.e., not necessarily inspective) verifiers. To prove the exponential tradeoff for inspective verifiers we show a connection between PCPP proof length and property-testing query complexity that may be of independent interest. The connection is that any linear property that can be verified with proofs of length ℓ by linear inspective verifiers must be testable with query complexity ≈ log ℓ.
Using the FGLSS-reduction to Prove Inapproximability Results for Minimum Vertex Cover in Hypergraphs
Electronic Colloquium on Computational Complexity (ECCC) 102, 2001
Abstract

Cited by 5 (0 self)
Using known results regarding PCP, we present simple proofs of the inapproximability of vertex cover for hypergraphs. Specifically, we show that: 1. Approximating the size of the minimum vertex cover in O(1)-regular hypergraphs to within a factor of 1.99999 is NP-hard. ...
Three-Query PCPs with Perfect Completeness over non-Boolean Domains
In Proceedings of the 18th IEEE Conference on Computational Complexity, 2002
Abstract

Cited by 2 (1 self)
We study non-Boolean PCPs that have perfect completeness and read three positions from the proof. For the case when the proof consists of values from a domain of size d for some integer constant d ≥ 2, we construct a non-adaptive PCP with perfect completeness and soundness d^{−1} + d^{−2} + ε, for any constant ε > 0, and an adaptive PCP with perfect completeness and soundness d^{−1} + ε, for any constant ε > 0. These results match the best known constructions for the case d = 2 and our proofs also show that the particular predicates we use in our PCPs are non-approximable beyond the random assignment threshold.
Complexity theory, proofs and approximation
EUROPEAN CONGRESS OF MATHEMATICS, 2005
Abstract
We give a short introduction to some questions in complexity theory and proceed to describe some recent developments. In particular, we discuss probabilistically checkable proofs and their applications in establishing inapproximability results. In a traditional proof the proof-checker reads the entire proof and decides deterministically whether the proof is correct. In a probabilistically checkable proof the proof-checker randomly verifies only a very small portion of the proof but still cannot be fooled into accepting a false claim except with small probability.
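The contrast between deterministic full-proof checking and probabilistic spot-checking can be illustrated with a much weaker toy: checking a claimed satisfying assignment of a CNF formula by sampling random clauses. This is not a PCP (a real PCP also encodes the proof robustly so that even proofs tailored to a false claim are caught); it only shows how few random queries drive the error down when violations are frequent:

```python
import random

def spot_check(clauses, assignment, k, rng):
    """Accept iff k randomly sampled clauses are all satisfied.

    A correct assignment is always accepted (perfect completeness); an
    assignment falsifying an eps fraction of the clauses is wrongly
    accepted with probability (1 - eps)**k, which shrinks rapidly in k.
    """
    for _ in range(k):
        clause = rng.choice(clauses)
        # literal v > 0 asks for variable v True; v < 0 asks for it False
        if not any(assignment[abs(v)] == (v > 0) for v in clause):
            return False
    return True

rng = random.Random(1)
clauses = [(i,) for i in range(1, 101)]          # 100 one-literal clauses
assignment = {i: i > 10 for i in range(1, 101)}  # falsifies 10% of them
trials = 10_000
accepted = sum(spot_check(clauses, assignment, 20, rng) for _ in range(trials))
print(accepted / trials)  # near (0.9)**20, i.e. about 0.12
```

The checker reads only 20 clauses per run yet is rarely fooled here; the hard part, which PCP constructions solve, is achieving this against proofs of false claims that are wrong in only a vanishingly small, adversarially placed portion.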
More Efficient . . . Improved Approximation Hardness of Maximum CSP
IN PROCEEDINGS OF THE 22ND SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE, 2005
Abstract
In the PCP model, a verifier is supposed to probabilistically decide if a given input belongs to some language by posing queries to a purported proof of this fact. The probability that the verifier accepts an input in the language given a correct proof is called the completeness c; the probability that the verifier accepts an input not in the language given any proof is called the soundness s. For a verifier posing q queries to the proof, the amortized query complexity is defined as q / log_2(c/s) if the proof is coded in binary. It is a measure of the average "efficiency" of the queries in the following sense: an ideal query should preserve the completeness and halve the soundness. If this were the case for all queries, the amortized query complexity would be exactly one...
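The amortized query complexity q / log_2(c/s) from the abstract is easy to compute directly; the ideal case it describes, where every query preserves completeness and halves the soundness, comes out to exactly one:

```python
import math

def amortized_query_complexity(q, c, s):
    """q / log2(c/s): queries spent per halving of the soundness,
    for a proof coded in binary (as defined in the abstract)."""
    return q / math.log2(c / s)

# The ideal case: each of 3 queries preserves completeness (c = 1)
# and halves the soundness (s = 1/2**3).
print(amortized_query_complexity(3, 1.0, 1 / 8))  # -> 1.0
```

A verifier with worse soundness for the same number of queries has higher amortized complexity, which is why lowering it yields stronger Max CSP hardness.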