Results 1–10 of 54
Computational Complexity and Feasibility of Data Processing and Interval Computations, With Extension to Cases When We Have Partial Information about Probabilities
, 2003
"... In many reallife situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easiertomeasure quantities x 1 ; : : : ; xn which are related to y by a known relation y = f(x 1 ; : : : ; xn ). Measurements a ..."
Abstract

Cited by 216 (128 self)
In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn). Measurements are never 100% accurate; hence, the measured values x̃i are different from xi, and the resulting estimate ỹ = f(x̃1, ..., x̃n) is different from the desired value y = f(x1, ..., xn). How different? The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different measurement errors Δxi = x̃i − xi. In many practical situations, we only know the upper bound Δi for this error; hence, after the measurement, the only information that we have about xi is that it belongs to the interval [xi] = [x̃i − Δi, x̃i + Δi]. In this case, it is important to find the range of all possible values of y = f(x1, ..., xn) when xi ∈ [xi]. We start the paper with a brief overview of the computational complexity of the corresponding interval computation problems.
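As a rough illustration of the range problem this abstract describes, the sketch below estimates the range of y = f(x1, ..., xn) over an interval box by naive grid sampling. The function names, sampling approach, and example f are illustrative assumptions, not the paper's method — exact range computation is precisely the problem whose complexity the paper analyzes:

```python
from itertools import product

def estimate_range(f, intervals, samples_per_dim=50):
    """Estimate the range [y_min, y_max] of f over a box of intervals
    by grid sampling (a naive illustration; exact range computation is,
    in general, computationally hard, which is the paper's subject)."""
    grids = [
        [lo + (hi - lo) * k / (samples_per_dim - 1) for k in range(samples_per_dim)]
        for lo, hi in intervals
    ]
    values = [f(*point) for point in product(*grids)]
    return min(values), max(values)

# Example: y = x1 * (x2 - 1) with x1 in [1, 2], x2 in [0, 1];
# the true range [-2, 0] is attained at corners of the box.
y_lo, y_hi = estimate_range(lambda x1, x2: x1 * (x2 - 1), [(1, 2), (0, 1)])
print(y_lo, y_hi)
```

Grid sampling happens to recover the exact range here only because the extrema lie at box corners; for general f this is merely an inner approximation of the true range.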
BPP has Subexponential Time Simulations unless EXPTIME has Publishable Proofs (Extended Abstract)
, 1993
"... ) L'aszl'o Babai Noam Nisan y Lance Fortnow z Avi Wigderson University of Chicago Hebrew University Abstract We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time ffl collapses to the second level of the polynomialtime ..."
Abstract

Cited by 114 (9 self)
László Babai, Noam Nisan, Lance Fortnow, Avi Wigderson (University of Chicago, Hebrew University). We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time • collapses to the second level of the polynomial-time hierarchy, • has polynomial-size circuits, and • has publishable proofs (EXPTIME = MA). We also show that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, we show BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA \ P. The proofs are based on the recent characterization of the power of multi-prover interactive protocols and on random self-reducibility via low-degree polynomials. They exhibit an interplay between Boolean circuit simulation, interactive proofs, and classical complexity classes. An important feature of this proof is that it does not ...
The Expressive Power of Voting Polynomials
 Combinatorica
, 1993
"... We consider the problem of approximating a Boolean function f : f0; 1g n ! f0; 1g by the sign of an integer polynomial p of degree k. For us, a polynomial p(x) predicts the value of f(x) if, whenever p(x) 0, f(x) = 1, and whenever p(x) ! 0, f(x) = 0. A lowdegree polynomial p is a good approxima ..."
Abstract

Cited by 102 (9 self)
We consider the problem of approximating a Boolean function f : {0,1}^n → {0,1} by the sign of an integer polynomial p of degree k. For us, a polynomial p(x) predicts the value of f(x) if, whenever p(x) ≥ 0, f(x) = 1, and whenever p(x) < 0, f(x) = 0. A low-degree polynomial p is a good approximator for f if it predicts f at almost all points. Given a positive integer k, and a Boolean function f, we ask, "how good is the best degree-k approximation to f?" We introduce a new lower bound technique which applies to any Boolean function. We show that the lower bound technique yields tight bounds in the case f is parity. Minsky and Papert [10] proved that a perceptron cannot compute parity; our bounds indicate exactly how well ...
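The obstruction behind Minsky and Papert's parity result can be checked directly for n = 2. The brute-force grid search below is only an illustrative sketch under the abstract's prediction rule (p ≥ 0 → f = 1, p < 0 → f = 0); the coefficient range and step are arbitrary assumptions, not the paper's lower-bound technique:

```python
from itertools import product

def predicts_parity(a, b, c):
    """Check whether sign(a + b*x1 + c*x2) predicts parity (XOR) on {0,1}^2,
    using the rule from the abstract: p(x) >= 0 -> f(x) = 1, p(x) < 0 -> f(x) = 0."""
    for x1, x2 in product((0, 1), repeat=2):
        p = a + b * x1 + c * x2
        if (p >= 0) != ((x1 ^ x2) == 1):
            return False
    return True

# No degree-1 polynomial can work: p(0,0) < 0 and p(1,1) < 0 force
# 2a + b + c < 0, while p(1,0) >= 0 and p(0,1) >= 0 force 2a + b + c >= 0.
# A brute-force search over a coefficient grid accordingly finds no predictor.
grid = [k / 4 for k in range(-20, 21)]
found = any(predicts_parity(a, b, c) for a in grid for b in grid for c in grid)
print(found)
```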
The polynomial method in circuit complexity
 In Proceedings of the eighth IEEE Structure in Complexity Theory Conference
, 1993
"... ..."
(Show Context)
Complexity Classes Defined By Counting Quantifiers
, 1991
"... We study the polynomial time counting hierarchy, a hierarchy of complexity classes related to the notion of counting. We investigate some of their structural properties, settling many open questions dealing with oracle characterizations, closure under boolean operations, and relations with other com ..."
Abstract

Cited by 59 (0 self)
We study the polynomial-time counting hierarchy, a hierarchy of complexity classes related to the notion of counting. We investigate some of their structural properties, settling many open questions dealing with oracle characterizations, closure under Boolean operations, and relations with other complexity classes. We develop a new combinatorial technique to obtain relativized separations for some of the studied classes, which imply absolute separations for some logarithmic-time-bounded complexity classes.
The Complexity and Distribution of Hard Problems
 SIAM JOURNAL ON COMPUTING
, 1993
"... Measuretheoretic aspects of the P m reducibility structure of the exponential time complexity classes E=DTIME(2 linear ) and E 2 = DTIME(2 polynomial ) are investigated. Particular attention is given to the complexity (measured by the size of complexity cores) and distribution (abundance in ..."
Abstract

Cited by 48 (18 self)
Measure-theoretic aspects of the ≤P_m-reducibility structure of the exponential-time complexity classes E = DTIME(2^linear) and E2 = DTIME(2^polynomial) are investigated. Particular attention is given to the complexity (measured by the size of complexity cores) and distribution (abundance in the sense of measure) of languages that are ≤P_m-hard for E and other complexity classes. Tight upper and lower bounds on the size of complexity cores of hard languages are derived. The upper bound says that the ≤P_m-hard languages for E are unusually simple, in the sense that they have smaller complexity cores than most languages in E. It follows that the ≤P_m-complete languages for E form a measure-0 subset of E (and similarly in E2). This latter fact is seen to be a special case of a more general theorem, namely, that every ≤P_m-degree (e.g., the degree of all ≤P_m-complete languages for NP) has measure 0 in E and in E2.
An oracle builder’s toolkit
, 2002
"... We show how to use various notions of genericity as tools in oracle creation. In particular, 1. we give an abstract definition of genericity that encompasses a large collection of different generic notions; 2. we consider a new complexity class AWPP, which contains BQP (quantum polynomial time), and ..."
Abstract

Cited by 46 (11 self)
We show how to use various notions of genericity as tools in oracle creation. In particular, 1. we give an abstract definition of genericity that encompasses a large collection of different generic notions; 2. we consider a new complexity class AWPP, which contains BQP (quantum polynomial time), and infer several strong collapses relative to SP-generics; 3. we show that under additional assumptions these collapses also occur relative to Cohen generics; 4. we show that relative to SP-generics, ULIN ∩ coULIN ⊄ DTIME(n^k) for any k, where ULIN is unambiguous linear time, despite the fact that UP ∪ (NP ∩ coNP) ⊆ P relative to these generics; 5. we show that there is an oracle relative to which NP/1 ∩ coNP/1 ⊄ (NP ∩ coNP)/poly; and 6. we use a specialized notion of genericity to create an oracle relative to which NP^BPP ⊉ MA.
The Isomorphism Conjecture Fails Relative to a Random Oracle
 J. ACM
, 1996
"... Berman and Hartmanis [BH77] conjectured that there is a polynomialtime computable isomorphism between any two languages complete for NP with respect to polynomialtime computable manyone (Karp) reductions. Joseph and Young [JY85] gave a structural definition of a class of NPcomplete setsthe kc ..."
Abstract

Cited by 42 (4 self)
Berman and Hartmanis [BH77] conjectured that there is a polynomial-time computable isomorphism between any two languages complete for NP with respect to polynomial-time computable many-one (Karp) reductions. Joseph and Young [JY85] gave a structural definition of a class of NP-complete sets, the k-creative sets, and defined a class of sets (the K^k_f's) that are necessarily k-creative. They went on to conjecture that certain of these K^k_f's are not isomorphic to the standard NP-complete sets. Clearly, the Berman-Hartmanis and Joseph-Young conjectures cannot both be correct. We introduce a family of strong one-way functions, the scrambling functions. If f is a scrambling function, then K^k_f is not isomorphic to the standard NP-complete sets, as Joseph and Young conjectured, and the Berman-Hartmanis conjecture fails. Indeed, if scrambling functions exist, then the isomorphism also fails at higher complexity classes such as EXP and NEXP. As evidence for the existence of scrambling ...
Relativizable And Nonrelativizable Theorems In The Polynomial Theory Of Algorithms
 In Russian
, 1993
"... . Starting with the paper of Baker, Gill and Solovay [BGS 75] in complexity theory, many results have been proved which separate certain relativized complexity classes or show that they have no complete language. All results of this kind were, in fact, based on lower bounds for boolean decision tree ..."
Abstract

Cited by 38 (0 self)
Starting with the paper of Baker, Gill and Solovay [BGS 75] in complexity theory, many results have been proved which separate certain relativized complexity classes or show that they have no complete language. All results of this kind were, in fact, based on lower bounds for Boolean decision trees of a certain type or for machines with polylogarithmic restrictions on time. The following question arises: are these methods of proving "relativized" results universal? In the first part of the present paper we propose a general framework in which assertions of universality of this kind may be formulated and proved as convenient criteria. Using these criteria we obtain, as easy consequences of the known results on Boolean decision trees, some new "relativized" results and new proofs of some known results. In the second part of the present paper we apply these general criteria to many particular cases. For example, for many of the complexity classes studied in the literature all relativiza...