Results 1–10 of 16

Optical solution for bounded NP-complete problems
Appl. Opt., 2007
Cited by 14 (6 self)
Abstract:
We present a new optical method for solving bounded (input-length-restricted) NP-complete combinatorial problems. We have chosen to demonstrate the method with an NP-complete problem called the traveling salesman problem (TSP). The power of optics in this method is realized by using a fast matrix–vector multiplication between a binary matrix, representing all feasible TSP tours, and a gray-scale vector, representing the weights among the TSP cities. The multiplication is performed optically by using an optical correlator. To synthesize the initial binary matrix representing all feasible tours, an efficient algorithm is provided. Simulations and experimental results prove the validity of the new method.
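The matrix–vector formulation above can be sketched in plain software (variable and function names are ours, not the authors'): each row of a binary matrix marks which directed edges a feasible tour uses, a vector holds the edge weights, and a single matrix–vector product scores every tour at once, which is the operation the optical correlator performs in one pass.

```python
import itertools

def tour_matrix(n):
    """Rows = feasible tours (city 0 fixed as start), cols = directed edges."""
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    col = {e: k for k, e in enumerate(edges)}
    rows = []
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        row = [0] * len(edges)
        for a, b in zip(tour, tour[1:]):
            row[col[(a, b)]] = 1  # tour traverses edge (a, b)
        rows.append(row)
    return rows, edges

def shortest_tour(weights):
    n = len(weights)
    M, edges = tour_matrix(n)
    w = [weights[i][j] for (i, j) in edges]
    # One matrix-vector product gives the length of every feasible tour.
    lengths = [sum(m * x for m, x in zip(row, w)) for row in M]
    return min(lengths)

W = [[0, 1, 4, 2],
     [1, 0, 3, 5],
     [4, 3, 0, 1],
     [2, 5, 1, 0]]
print(shortest_tour(W))  # 7, achieved by the tour 0-1-2-3-0
```

The matrix has (n−1)! rows, so this only scales to bounded input lengths, which is exactly the restriction stated in the abstract; the optical system's contribution is performing the product for all rows in parallel.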

Distinguishing SAT from polynomial-size circuits through black-box queries
In Proceedings of the 21st Annual IEEE Conference on Computational Complexity, 2006
Cited by 6 (0 self)
Abstract:
We may believe SAT does not have small Boolean circuits. But is it possible that some language with small circuits looks indistinguishable from SAT to every polynomial-time bounded adversary? We rule out this possibility. More precisely, assuming SAT does not have small circuits, we show that for every language L with small circuits, there exists a probabilistic polynomial-time algorithm that makes black-box queries to L, and produces, for a given input length, a Boolean formula on which L differs from SAT.

Worst-Case to Average-Case Reductions Revisited
Cited by 4 (0 self)
Abstract:
A fundamental goal of computational complexity (and foundations of cryptography) is to find a polynomial-time samplable distribution (e.g., the uniform distribution) and a language in NTIME(f(n)) for some polynomial function f, such that the language is hard on the average with respect to this distribution, given that NP is worst-case hard (i.e., NP ≠ P, or NP ⊈ BPP). Currently, no such result is known even if we relax the language to be in nondeterministic subexponential time. There has been a long line of research trying to explain our failure in proving such worst-case/average-case connections [FF93, Vio03, BT03, AGGM06]. The bottom line of this research is essentially that (under plausible assumptions) non-adaptive Turing reductions cannot prove such results.

In this paper we revisit the problem. Our first observation is that the above-mentioned negative arguments extend to a non-standard notion of average-case complexity, in which the distribution on the inputs, with respect to which we measure the average-case complexity of the language, is only samplable in super-polynomial time. The significance of this result stems from the fact that in this non-standard setting, [GSTS05] did show a worst-case/average-case connection. In other words, their techniques give a way to bypass the impossibility arguments. By taking a closer look at the proof of [GSTS05], we discover that the worst-case/average-case connection is proven by a reduction that "almost" falls under the category ruled out by the negative result. This gives rise to an intriguing new notion of (almost black-box) reductions. After extending the negative results to the non-standard average-case setting of [GSTS05], we ask whether their positive result can be extended to the standard setting, to prove some new worst-case/average-case connections. While we cannot do that unconditionally, we are able to show that under a mild derandomization assumption, the worst-case hardness of NP implies the average-case hardness of NTIME(f(n)) (under the uniform distribution), where f is computable in quasi-polynomial time.

Non-uniform hardness for NP via black-box adversaries
In Proceedings of the 21st Annual IEEE Conference on Computational Complexity, 2006
Cited by 3 (0 self)
Abstract:
We may believe SAT does not have small Boolean circuits. But is it possible that some language with small circuits looks indistinguishable from SAT to every polynomial-time bounded adversary? We rule out this possibility. More precisely, assuming SAT does not have small circuits, we show that for every language L with small circuits, there exists a probabilistic polynomial-time algorithm that makes black-box queries to L, and produces, for a given input length, a Boolean formula on which L differs from SAT. A key step for obtaining this result is a new proof of the main result by Gutfreund, Shaltiel, and Ta-Shma reducing average-case hardness to worst-case hardness via uniform adversaries that know the algorithm they fool. The new adversary we construct has the feature of being black-box on the algorithm it fools, so it makes sense in the non-uniform setting as well. Our proof makes use of a refined analysis of the learning algorithm of Bshouty et al.

Lower bounds on the query complexity of non-uniform and adaptive reductions showing hardness amplification
2012
Cited by 2 (0 self)
Abstract:
Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s′ computes Amp(f) correctly on at most a 1/2 + ϵ fraction of inputs. All hardness amplification results in the literature suffer from "size loss", meaning that s′ ≤ ϵ · s. In this paper we show that proofs using "non-uniform reductions" must suffer from such size loss. To the best of our knowledge, all proofs in the literature are by non-uniform reductions. Our result is the first lower bound that applies to non-uniform reductions that are adaptive. A reduction is an oracle circuit R^(·) such that when given oracle access to any function D that computes Amp(f) correctly on a 1/2 + ϵ fraction of inputs, R^D computes f correctly on a 1 − δ fraction of inputs. A non-uniform reduction is allowed to also receive a short advice string that may depend on both f and D in an arbitrary way. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for δ, ϵ < 1/4. A reduction is non-adaptive if it makes non-adaptive queries to its oracle. Shaltiel and Viola (SICOMP 2010) showed lower bounds on the number of queries made by non-uniform reductions that are non-adaptive.
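As a toy instance of such an Amp(f) (our own illustration, not a construction from this paper), take the XOR construction from Yao's XOR lemma, Amp(f)(x1, …, xk) = f(x1) ⊕ … ⊕ f(xk). A predictor that errs on f with probability δ errs on the XOR exactly when an odd number of the k independent evaluations err, so its error tends to 1/2 as k grows:

```python
def xor_error(delta, k):
    # Probability that an odd number of k independent delta-biased
    # errors occur: (1 - (1 - 2*delta)^k) / 2, which tends to 1/2.
    return (1 - (1 - 2 * delta) ** k) / 2

for k in (1, 5, 20):
    print(k, round(xor_error(0.1, k), 4))
```

With δ = 0.1 this prints roughly 0.1, 0.3362, and 0.4942 for k = 1, 5, 20, illustrating how a small hardness gap δ is driven toward the 1/2 + ϵ regime that the size-loss lower bound above concerns.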

Optical processor for solving the traveling salesman problem
Proc. of SPIE, Optical Information Systems IV, August 2006
Cited by 2 (0 self)
Abstract:
This paper introduces an optical solution to (bounded-length input instances of) an NP-complete problem called the traveling salesman problem using a pure optical system. The solution is based on the multiplication of a binary matrix, representing all feasible routes, by a weight vector, representing the weights of the problem. The multiplication of the binary matrix by the weight vector can be implemented by any optical vector-matrix multiplier. In this paper, we show that this multiplication can be obtained by an optical correlator. In order to synthesize the binary matrix, a unique iterative algorithm is presented. This algorithm synthesizes an N-node binary matrix from the (N−1)-node binary matrix using a rather small number of vector duplications. We also show that the algorithm itself can be implemented optically, thus ensuring an entirely optical solution to the problem. Simulation and experimental results prove the validity of the optical method.
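The iterative N-from-(N−1) idea can be sketched in plain software (our own formulation of the recursion, not the paper's optical implementation): every feasible N-city tour arises by duplicating an (N−1)-city tour and inserting the new city at each possible position.

```python
import itertools

def extend_tours(tours, new_city):
    """Duplicate each (N-1)-city tour and splice new_city into every slot."""
    out = []
    for t in tours:  # t is a tuple of cities starting at city 0
        for pos in range(1, len(t) + 1):
            out.append(t[:pos] + (new_city,) + t[pos:])
    return out

tours = [(0,)]  # base case: a single city, one trivial tour
for c in range(1, 5):
    tours = extend_tours(tours, c)

# With the start fixed at city 0, five cities give (5-1)! = 24 tours.
print(len(tours))
```

Each iteration duplicates the previous tour set N−1 times, which matches the abstract's claim that the N-node matrix is built from a rather small number of vector duplications of the (N−1)-node matrix.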

Worst-Case vs. Algorithmic Average-Case Complexity in the Polynomial-Time Hierarchy
In Proceedings of the 10th International Workshop on Randomization and Computation (RANDOM 2006), 2006
Cited by 2 (1 self)
Abstract:
We show that for every integer k > 1, if Σk, the k-th level of the polynomial-time hierarchy, is worst-case hard for probabilistic polynomial-time algorithms, then there is a language L ∈ Σk such that for every probabilistic polynomial-time algorithm that attempts to decide it, there is a samplable distribution over the instances of L on which the algorithm errs with probability at least 1/2 − 1/poly(n) (where the probability is over the choice of instances and the randomness of the algorithm). In other words, on this distribution the algorithm essentially performs no better than one that decides according to the outcome of an unbiased coin toss.

Some results on average-case hardness within the polynomial hierarchy
In Proceedings of the 26th Conference on Foundations of Software Technology and Theoretical Computer Science, 2006
Cited by 2 (0 self)
Abstract:
We prove several results about the average-case complexity of problems in the Polynomial Hierarchy (PH). We give a connection among average-case, worst-case, and non-uniform complexity of optimization problems. Specifically, we show that if P^NP is hard in the worst case then it is either hard on average (in the sense of Levin) or it is non-uniformly hard (i.e., it does not have small circuits). Recently, Gutfreund, Shaltiel and Ta-Shma (IEEE Conference on Computational Complexity, 2005) showed an interesting worst-case to average-case connection for languages in NP, under a notion of average-case hardness defined using uniform adversaries. We show that extending their connection to hardness against quasi-polynomial time would imply that NEXP does not have polynomial-size circuits. Finally, we prove an unconditional average-case hardness result: we show that for each k, there is an explicit language in P^{Σ2} which is hard on average for circuits of size n^k.

Relativized Worlds Without Worst-Case to Average-Case Reductions for NP
2010
Cited by 1 (1 self)
Abstract:
We prove that relative to an oracle, there is no worst-case to average-case reduction for NP. We also handle classes that are somewhat larger than NP, as well as worst-case to errorless-average-case reductions. In fact, we prove that relative to an oracle, there is no worst-case to errorless-average-case reduction from NP to BPP^NP. We also handle reductions from NP to the polynomial-time hierarchy and beyond, under restrictions on the number of queries the reductions can make.

An improvement on Gutfreund, Shaltiel, and Ta-Shma's paper "If NP Languages are Hard on the Worst-Case, Then it is Easy to Find Their Violating Instances"
Abstract:
Assume that NP ⊈ BPP. Gutfreund, Shaltiel, and Ta-Shma [Computational Complexity 16(4):412–441 (2007)] proved that for every randomized polynomial-time decision algorithm D for SAT there is a polynomial-time samplable distribution such that D errs with probability at least 1/6 − ε on a random formula chosen with respect to that distribution. A challenging problem is to increase the error probability to the maximal possible 1/2 − ε (random guessing has success probability 1/2). In this paper, we make a small step towards this goal: we show how to increase the error probability to 1/3 − ε.